From patchwork Mon Sep 14 20:15:15 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774765
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar,
    Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 01/35] KVM: SVM: Remove the call to sev_platform_status() during setup
Date: Mon, 14 Sep 2020 15:15:15 -0500
Message-Id: <266ec828918d0e4a77b52b15aaa457b2df01773b.1600114548.git.thomas.lendacky@amd.com>
X-Mailing-List: kvm@vger.kernel.org

When both KVM support and the CCP driver are built into the kernel
instead of as modules, KVM initialization happens before CCP
initialization. As a result, sev_platform_status() will return a failure
when it is called from sev_hardware_setup(), even though this isn't
really an error condition.

Since sev_platform_status() doesn't need to be called at this time
anyway, remove the invocation from sev_hardware_setup().
Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/sev.c | 22 +---------------------
 1 file changed, 1 insertion(+), 21 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 402dc4234e39..fab382e2dad2 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1117,9 +1117,6 @@ void sev_vm_destroy(struct kvm *kvm)
 
 int __init sev_hardware_setup(void)
 {
-	struct sev_user_data_status *status;
-	int rc;
-
 	/* Maximum number of encrypted guests supported simultaneously */
 	max_sev_asid = cpuid_ecx(0x8000001F);
 
@@ -1138,26 +1135,9 @@ int __init sev_hardware_setup(void)
 	if (!sev_reclaim_asid_bitmap)
 		return 1;
 
-	status = kmalloc(sizeof(*status), GFP_KERNEL);
-	if (!status)
-		return 1;
-
-	/*
-	 * Check SEV platform status.
-	 *
-	 * PLATFORM_STATUS can be called in any state, if we failed to query
-	 * the PLATFORM status then either PSP firmware does not support SEV
-	 * feature or SEV firmware is dead.
-	 */
-	rc = sev_platform_status(status, NULL);
-	if (rc)
-		goto err;
-
 	pr_info("SEV supported\n");
 
-err:
-	kfree(status);
-	return rc;
+	return 0;
 }
 
 void sev_hardware_teardown(void)

From patchwork Mon Sep 14 20:15:16 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774767
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar,
    Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 02/35] KVM: SVM: Add support for SEV-ES capability in KVM
Date: Mon, 14 Sep 2020 15:15:16 -0500
Message-Id: <9a8eab4f685abe3544bb73128ab068b18bc6c454.1600114548.git.thomas.lendacky@amd.com>
X-Mailing-List: kvm@vger.kernel.org
Add support to KVM for determining if a system is capable of supporting
SEV-ES as well as determining if a guest is an SEV-ES guest.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/Kconfig   |  3 ++-
 arch/x86/kvm/svm/sev.c | 47 ++++++++++++++++++++++++++++++++++--------
 arch/x86/kvm/svm/svm.c | 20 +++++++++---------
 arch/x86/kvm/svm/svm.h | 17 ++++++++++++++-
 4 files changed, 66 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index fbd5bd7a945a..4e8924aab05e 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -99,7 +99,8 @@ config KVM_AMD_SEV
 	depends on KVM_AMD && X86_64
 	depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m)
 	help
-	  Provides support for launching Encrypted VMs on AMD processors.
+	  Provides support for launching Encrypted VMs (SEV) and Encrypted VMs
+	  with Encrypted State (SEV-ES) on AMD processors.
 
 config KVM_MMU_AUDIT
 	bool "Audit KVM MMU"
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index fab382e2dad2..48379e21ed43 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -923,7 +923,7 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	struct kvm_sev_cmd sev_cmd;
 	int r;
 
-	if (!svm_sev_enabled())
+	if (!svm_sev_enabled() || !sev)
 		return -ENOTTY;
 
 	if (!argp)
@@ -1115,29 +1115,58 @@ void sev_vm_destroy(struct kvm *kvm)
 	sev_asid_free(sev->asid);
 }
 
-int __init sev_hardware_setup(void)
+void __init sev_hardware_setup(void)
 {
+	unsigned int eax, ebx, ecx, edx;
+	bool sev_es_supported = false;
+	bool sev_supported = false;
+
+	/* Does the CPU support SEV? */
+	if (!boot_cpu_has(X86_FEATURE_SEV))
+		goto out;
+
+	/* Retrieve SEV CPUID information */
+	cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);
+
 	/* Maximum number of encrypted guests supported simultaneously */
-	max_sev_asid = cpuid_ecx(0x8000001F);
+	max_sev_asid = ecx;
 
 	if (!svm_sev_enabled())
-		return 1;
+		goto out;
 
 	/* Minimum ASID value that should be used for SEV guest */
-	min_sev_asid = cpuid_edx(0x8000001F);
+	min_sev_asid = edx;
 
 	/* Initialize SEV ASID bitmaps */
 	sev_asid_bitmap = bitmap_zalloc(max_sev_asid, GFP_KERNEL);
 	if (!sev_asid_bitmap)
-		return 1;
+		goto out;
 
 	sev_reclaim_asid_bitmap = bitmap_zalloc(max_sev_asid, GFP_KERNEL);
 	if (!sev_reclaim_asid_bitmap)
-		return 1;
+		goto out;
 
-	pr_info("SEV supported\n");
+	pr_info("SEV supported: %u ASIDs\n", max_sev_asid - min_sev_asid + 1);
+	sev_supported = true;
 
-	return 0;
+	/* SEV-ES support requested? */
+	if (!sev_es)
+		goto out;
+
+	/* Does the CPU support SEV-ES? */
+	if (!boot_cpu_has(X86_FEATURE_SEV_ES))
+		goto out;
+
+	/* Has the system been allocated ASIDs for SEV-ES? */
+	if (min_sev_asid == 1)
+		goto out;
+
+	pr_info("SEV-ES supported: %u ASIDs\n", min_sev_asid - 1);
+	sev_es_supported = true;
+
+out:
+	sev = sev_supported;
+	sev_es = sev_es_supported;
 }
 
 void sev_hardware_teardown(void)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4368b66615c1..83292fc44b4e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -187,9 +187,13 @@ static int vgif = true;
 module_param(vgif, int, 0444);
 
 /* enable/disable SEV support */
-static int sev = IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT);
+int sev = IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT);
 module_param(sev, int, 0444);
 
+/* enable/disable SEV-ES support */
+int sev_es = IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT);
+module_param(sev_es, int, 0444);
+
 static bool __read_mostly dump_invalid_vmcb = 0;
 module_param(dump_invalid_vmcb, bool, 0644);
 
@@ -860,15 +864,11 @@ static __init int svm_hardware_setup(void)
 		kvm_enable_efer_bits(EFER_SVME | EFER_LMSLE);
 	}
 
-	if (sev) {
-		if (boot_cpu_has(X86_FEATURE_SEV) &&
-		    IS_ENABLED(CONFIG_KVM_AMD_SEV)) {
-			r = sev_hardware_setup();
-			if (r)
-				sev = false;
-		} else {
-			sev = false;
-		}
+	if (IS_ENABLED(CONFIG_KVM_AMD_SEV) && sev) {
+		sev_hardware_setup();
+	} else {
+		sev = false;
+		sev_es = false;
 	}
 
 	svm_adjust_mmio_mask();
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index a798e1731709..2692ddf30c8d 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -60,6 +60,7 @@ enum {
 
 struct kvm_sev_info {
 	bool active;		/* SEV enabled guest */
+	bool es_active;		/* SEV-ES enabled guest */
 	unsigned int asid;	/* ASID used for this guest */
 	unsigned int handle;	/* SEV firmware handle */
 	int fd;			/* SEV device fd */
@@ -348,6 +349,9 @@ static inline bool gif_set(struct vcpu_svm *svm)
 #define MSR_CR3_LONG_RESERVED_MASK 0xfff0000000000fe7U
 #define MSR_INVALID 0xffffffffU
 
+extern int sev;
+extern int sev_es;
+
 u32 svm_msrpm_offset(u32 msr);
 void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
 void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
@@ -474,6 +478,17 @@ static inline bool sev_guest(struct kvm *kvm)
 #endif
 }
 
+static inline bool sev_es_guest(struct kvm *kvm)
+{
+#ifdef CONFIG_KVM_AMD_SEV
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+
+	return sev_guest(kvm) && sev->es_active;
+#else
+	return false;
+#endif
+}
+
 static inline bool svm_sev_enabled(void)
 {
 	return IS_ENABLED(CONFIG_KVM_AMD_SEV) ? max_sev_asid : 0;
@@ -486,7 +501,7 @@ int svm_register_enc_region(struct kvm *kvm,
 int svm_unregister_enc_region(struct kvm *kvm,
 			      struct kvm_enc_region *range);
 void pre_sev_run(struct vcpu_svm *svm, int cpu);
-int __init sev_hardware_setup(void);
+void __init sev_hardware_setup(void);
 void sev_hardware_teardown(void);
 
 #endif

From patchwork Mon Sep 14 20:15:17 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774771
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar,
    Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 03/35] KVM: SVM: Add indirect access to the VM save area
Date: Mon, 14 Sep 2020 15:15:17 -0500
Message-Id: <627d74a17e37a1ac048d423dd47da9e64b62952c.1600114548.git.thomas.lendacky@amd.com>
X-Mailing-List: kvm@vger.kernel.org
In order to later support accessing the GHCB structure in a similar way
as the VM save area (VMSA) structure, change all accesses to the VMSA
into function calls. Later on, this will allow the hypervisor support to
decide between accessing the VMSA or GHCB in a central location.

Accesses to a nested VMCB structure save area remain as direct save area
accesses.

The functions are created using VMSA accessor macros.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/nested.c | 125 +++++++++++++++--------------
 arch/x86/kvm/svm/svm.c    | 165 +++++++++++++++++++-------------------
 arch/x86/kvm/svm/svm.h    | 129 ++++++++++++++++++++++++++++-
 3 files changed, 273 insertions(+), 146 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index d1ae94f40907..c5d18c859ded 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -367,28 +367,29 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *nested_vmcb)
 {
 	/* Load the nested guest state */
-	svm->vmcb->save.es = nested_vmcb->save.es;
-	svm->vmcb->save.cs = nested_vmcb->save.cs;
-	svm->vmcb->save.ss = nested_vmcb->save.ss;
-	svm->vmcb->save.ds = nested_vmcb->save.ds;
-	svm->vmcb->save.gdtr = nested_vmcb->save.gdtr;
-	svm->vmcb->save.idtr = nested_vmcb->save.idtr;
+	svm_es_write(svm, &nested_vmcb->save.es);
+	svm_cs_write(svm, &nested_vmcb->save.cs);
+	svm_ss_write(svm, &nested_vmcb->save.ss);
+	svm_ds_write(svm, &nested_vmcb->save.ds);
+	svm_gdtr_write(svm, &nested_vmcb->save.gdtr);
+	svm_idtr_write(svm, &nested_vmcb->save.idtr);
 	kvm_set_rflags(&svm->vcpu, nested_vmcb->save.rflags);
 	svm_set_efer(&svm->vcpu, nested_vmcb->save.efer);
 	svm_set_cr0(&svm->vcpu, nested_vmcb->save.cr0);
 	svm_set_cr4(&svm->vcpu, nested_vmcb->save.cr4);
-	svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = nested_vmcb->save.cr2;
+	svm_cr2_write(svm, nested_vmcb->save.cr2);
+	svm->vcpu.arch.cr2 = nested_vmcb->save.cr2;
 	kvm_rax_write(&svm->vcpu, nested_vmcb->save.rax);
 	kvm_rsp_write(&svm->vcpu, nested_vmcb->save.rsp);
 	kvm_rip_write(&svm->vcpu, nested_vmcb->save.rip);
 
 	/* In case we don't even reach vcpu_run, the fields are not updated */
-	svm->vmcb->save.rax = nested_vmcb->save.rax;
-	svm->vmcb->save.rsp = nested_vmcb->save.rsp;
-	svm->vmcb->save.rip = nested_vmcb->save.rip;
-	svm->vmcb->save.dr7 = nested_vmcb->save.dr7;
+	svm_rax_write(svm, nested_vmcb->save.rax);
+	svm_rsp_write(svm, nested_vmcb->save.rsp);
+	svm_rip_write(svm, nested_vmcb->save.rip);
+	svm_dr7_write(svm, nested_vmcb->save.dr7);
 	svm->vcpu.arch.dr6 = nested_vmcb->save.dr6;
-	svm->vmcb->save.cpl = nested_vmcb->save.cpl;
+	svm_cpl_write(svm, nested_vmcb->save.cpl);
 }
 
 static void nested_prepare_vmcb_control(struct vcpu_svm *svm)
@@ -451,7 +452,6 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 	int ret;
 	struct vmcb *nested_vmcb;
 	struct vmcb *hsave = svm->nested.hsave;
-	struct vmcb *vmcb = svm->vmcb;
 	struct kvm_host_map map;
 	u64 vmcb_gpa;
 
@@ -460,7 +460,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 		return 1;
 	}
 
-	vmcb_gpa = svm->vmcb->save.rax;
+	vmcb_gpa = svm_rax_read(svm);
 	ret = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map);
 	if (ret == -EINVAL) {
 		kvm_inject_gp(&svm->vcpu, 0);
@@ -481,7 +481,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 		goto out;
 	}
 
-	trace_kvm_nested_vmrun(svm->vmcb->save.rip, vmcb_gpa,
+	trace_kvm_nested_vmrun(svm_rip_read(svm), vmcb_gpa,
 			       nested_vmcb->save.rip,
 			       nested_vmcb->control.int_ctl,
 			       nested_vmcb->control.event_inj,
@@ -500,25 +500,25 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 	 * Save the old vmcb, so we don't need to pick what we save, but can
 	 * restore everything when a VMEXIT occurs
 	 */
-	hsave->save.es = vmcb->save.es;
-	hsave->save.cs = vmcb->save.cs;
-	hsave->save.ss = vmcb->save.ss;
-	hsave->save.ds = vmcb->save.ds;
-	hsave->save.gdtr = vmcb->save.gdtr;
-	hsave->save.idtr = vmcb->save.idtr;
+	hsave->save.es = *svm_es_read(svm);
+	hsave->save.cs = *svm_cs_read(svm);
+	hsave->save.ss = *svm_ss_read(svm);
+	hsave->save.ds = *svm_ds_read(svm);
+	hsave->save.gdtr = *svm_gdtr_read(svm);
+	hsave->save.idtr = *svm_idtr_read(svm);
 	hsave->save.efer = svm->vcpu.arch.efer;
 	hsave->save.cr0 = kvm_read_cr0(&svm->vcpu);
 	hsave->save.cr4 = svm->vcpu.arch.cr4;
 	hsave->save.rflags = kvm_get_rflags(&svm->vcpu);
 	hsave->save.rip = kvm_rip_read(&svm->vcpu);
-	hsave->save.rsp = vmcb->save.rsp;
-	hsave->save.rax = vmcb->save.rax;
+	hsave->save.rsp = svm_rsp_read(svm);
+	hsave->save.rax = svm_rax_read(svm);
 	if (npt_enabled)
-		hsave->save.cr3 = vmcb->save.cr3;
+		hsave->save.cr3 = svm_cr3_read(svm);
 	else
 		hsave->save.cr3 = kvm_read_cr3(&svm->vcpu);
 
-	copy_vmcb_control_area(&hsave->control, &vmcb->control);
+	copy_vmcb_control_area(&hsave->control, &svm->vmcb->control);
 
 	svm->nested.nested_run_pending = 1;
 
@@ -544,20 +544,21 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 	return ret;
 }
 
-void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb)
+void nested_svm_vmloadsave(struct vmcb_save_area *from_vmsa,
+			   struct vmcb_save_area *to_vmsa)
 {
-	to_vmcb->save.fs = from_vmcb->save.fs;
-	to_vmcb->save.gs = from_vmcb->save.gs;
-	to_vmcb->save.tr = from_vmcb->save.tr;
-	to_vmcb->save.ldtr = from_vmcb->save.ldtr;
-	to_vmcb->save.kernel_gs_base = from_vmcb->save.kernel_gs_base;
-	to_vmcb->save.star = from_vmcb->save.star;
-	to_vmcb->save.lstar = from_vmcb->save.lstar;
-	to_vmcb->save.cstar = from_vmcb->save.cstar;
-	to_vmcb->save.sfmask = from_vmcb->save.sfmask;
-	to_vmcb->save.sysenter_cs = from_vmcb->save.sysenter_cs;
-	to_vmcb->save.sysenter_esp = from_vmcb->save.sysenter_esp;
-	to_vmcb->save.sysenter_eip = from_vmcb->save.sysenter_eip;
+	to_vmsa->fs = from_vmsa->fs;
+	to_vmsa->gs = from_vmsa->gs;
+	to_vmsa->tr = from_vmsa->tr;
+	to_vmsa->ldtr = from_vmsa->ldtr;
+	to_vmsa->kernel_gs_base = from_vmsa->kernel_gs_base;
+	to_vmsa->star = from_vmsa->star;
+	to_vmsa->lstar = from_vmsa->lstar;
+	to_vmsa->cstar = from_vmsa->cstar;
+	to_vmsa->sfmask = from_vmsa->sfmask;
+	to_vmsa->sysenter_cs = from_vmsa->sysenter_cs;
+	to_vmsa->sysenter_esp = from_vmsa->sysenter_esp;
+	to_vmsa->sysenter_eip = from_vmsa->sysenter_eip;
 }
 
 int nested_svm_vmexit(struct vcpu_svm *svm)
@@ -588,24 +589,24 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	/* Give the current vmcb to the guest */
 	svm_set_gif(svm, false);
 
-	nested_vmcb->save.es = vmcb->save.es;
-	nested_vmcb->save.cs = vmcb->save.cs;
-	nested_vmcb->save.ss = vmcb->save.ss;
-	nested_vmcb->save.ds = vmcb->save.ds;
-	nested_vmcb->save.gdtr = vmcb->save.gdtr;
-	nested_vmcb->save.idtr = vmcb->save.idtr;
+	nested_vmcb->save.es = *svm_es_read(svm);
+	nested_vmcb->save.cs = *svm_cs_read(svm);
+	nested_vmcb->save.ss = *svm_ss_read(svm);
+	nested_vmcb->save.ds = *svm_ds_read(svm);
+	nested_vmcb->save.gdtr = *svm_gdtr_read(svm);
+	nested_vmcb->save.idtr = *svm_idtr_read(svm);
 	nested_vmcb->save.efer = svm->vcpu.arch.efer;
 	nested_vmcb->save.cr0 = kvm_read_cr0(&svm->vcpu);
 	nested_vmcb->save.cr3 = kvm_read_cr3(&svm->vcpu);
-	nested_vmcb->save.cr2 = vmcb->save.cr2;
+	nested_vmcb->save.cr2 = svm_cr2_read(svm);
 	nested_vmcb->save.cr4 = svm->vcpu.arch.cr4;
 	nested_vmcb->save.rflags = kvm_get_rflags(&svm->vcpu);
 	nested_vmcb->save.rip = kvm_rip_read(&svm->vcpu);
 	nested_vmcb->save.rsp = kvm_rsp_read(&svm->vcpu);
 	nested_vmcb->save.rax = kvm_rax_read(&svm->vcpu);
-	nested_vmcb->save.dr7 = vmcb->save.dr7;
+	nested_vmcb->save.dr7 = svm_dr7_read(svm);
 	nested_vmcb->save.dr6 = svm->vcpu.arch.dr6;
-	nested_vmcb->save.cpl = vmcb->save.cpl;
+	nested_vmcb->save.cpl = svm_cpl_read(svm);
 
 	nested_vmcb->control.int_state = vmcb->control.int_state;
 	nested_vmcb->control.exit_code = vmcb->control.exit_code;
@@ -625,9 +626,9 @@
int nested_svm_vmexit(struct vcpu_svm *svm) nested_vmcb->control.event_inj_err = svm->nested.ctl.event_inj_err; nested_vmcb->control.pause_filter_count = - svm->vmcb->control.pause_filter_count; + vmcb->control.pause_filter_count; nested_vmcb->control.pause_filter_thresh = - svm->vmcb->control.pause_filter_thresh; + vmcb->control.pause_filter_thresh; /* Restore the original control entries */ copy_vmcb_control_area(&vmcb->control, &hsave->control); @@ -638,12 +639,12 @@ int nested_svm_vmexit(struct vcpu_svm *svm) svm->nested.ctl.nested_cr3 = 0; /* Restore selected save entries */ - svm->vmcb->save.es = hsave->save.es; - svm->vmcb->save.cs = hsave->save.cs; - svm->vmcb->save.ss = hsave->save.ss; - svm->vmcb->save.ds = hsave->save.ds; - svm->vmcb->save.gdtr = hsave->save.gdtr; - svm->vmcb->save.idtr = hsave->save.idtr; + svm_es_write(svm, &hsave->save.es); + svm_cs_write(svm, &hsave->save.cs); + svm_ss_write(svm, &hsave->save.ss); + svm_ds_write(svm, &hsave->save.ds); + svm_gdtr_write(svm, &hsave->save.gdtr); + svm_idtr_write(svm, &hsave->save.idtr); kvm_set_rflags(&svm->vcpu, hsave->save.rflags); svm_set_efer(&svm->vcpu, hsave->save.efer); svm_set_cr0(&svm->vcpu, hsave->save.cr0 | X86_CR0_PE); @@ -651,11 +652,11 @@ int nested_svm_vmexit(struct vcpu_svm *svm) kvm_rax_write(&svm->vcpu, hsave->save.rax); kvm_rsp_write(&svm->vcpu, hsave->save.rsp); kvm_rip_write(&svm->vcpu, hsave->save.rip); - svm->vmcb->save.dr7 = 0; - svm->vmcb->save.cpl = 0; - svm->vmcb->control.exit_int_info = 0; + svm_dr7_write(svm, 0); + svm_cpl_write(svm, 0); + vmcb->control.exit_int_info = 0; - vmcb_mark_all_dirty(svm->vmcb); + vmcb_mark_all_dirty(vmcb); trace_kvm_nested_vmexit_inject(nested_vmcb->control.exit_code, nested_vmcb->control.exit_info_1, @@ -673,7 +674,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm) return 1; if (npt_enabled) - svm->vmcb->save.cr3 = hsave->save.cr3; + svm_cr3_write(svm, hsave->save.cr3); /* * Drop what we picked up for L2 via svm_complete_interrupts() so it @@ 
-819,7 +820,7 @@ int nested_svm_check_permissions(struct vcpu_svm *svm) return 1; } - if (svm->vmcb->save.cpl) { + if (svm_cpl_read(svm)) { kvm_inject_gp(&svm->vcpu, 0); return 1; } @@ -888,7 +889,7 @@ static void nested_svm_nmi(struct vcpu_svm *svm) static void nested_svm_intr(struct vcpu_svm *svm) { - trace_kvm_nested_intr_vmexit(svm->vmcb->save.rip); + trace_kvm_nested_intr_vmexit(svm_rip_read(svm)); svm->vmcb->control.exit_code = SVM_EXIT_INTR; svm->vmcb->control.exit_info_1 = 0; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 83292fc44b4e..779c167e42cc 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -285,7 +285,7 @@ void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer) svm_set_gif(svm, true); } - svm->vmcb->save.efer = efer | EFER_SVME; + svm_efer_write(svm, efer | EFER_SVME); vmcb_mark_dirty(svm->vmcb, VMCB_CR); } @@ -357,7 +357,7 @@ static void svm_queue_exception(struct kvm_vcpu *vcpu) */ (void)skip_emulated_instruction(&svm->vcpu); rip = kvm_rip_read(&svm->vcpu); - svm->int3_rip = rip + svm->vmcb->save.cs.base; + svm->int3_rip = rip + svm_cs_read_base(svm); svm->int3_injected = rip - old_rip; } @@ -699,9 +699,9 @@ void disable_nmi_singlestep(struct vcpu_svm *svm) if (!(svm->vcpu.guest_debug & KVM_GUESTDBG_SINGLESTEP)) { /* Clear our flags if they were not set by the guest */ if (!(svm->nmi_singlestep_guest_rflags & X86_EFLAGS_TF)) - svm->vmcb->save.rflags &= ~X86_EFLAGS_TF; + svm_rflags_and(svm, ~X86_EFLAGS_TF); if (!(svm->nmi_singlestep_guest_rflags & X86_EFLAGS_RF)) - svm->vmcb->save.rflags &= ~X86_EFLAGS_RF; + svm_rflags_and(svm, ~X86_EFLAGS_RF); } } @@ -988,7 +988,7 @@ static u64 svm_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) static void init_vmcb(struct vcpu_svm *svm) { struct vmcb_control_area *control = &svm->vmcb->control; - struct vmcb_save_area *save = &svm->vmcb->save; + struct vmcb_save_area *save = get_vmsa(svm); svm->vcpu.arch.hflags = 0; @@ -1328,7 +1328,7 @@ static void svm_vcpu_put(struct 
kvm_vcpu *vcpu) static unsigned long svm_get_rflags(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); - unsigned long rflags = svm->vmcb->save.rflags; + unsigned long rflags = svm_rflags_read(svm); if (svm->nmi_singlestep) { /* Hide our flags if they were not set by the guest */ @@ -1350,7 +1350,7 @@ static void svm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags) * (caused by either a task switch or an inter-privilege IRET), * so we do not need to update the CPL here. */ - to_svm(vcpu)->vmcb->save.rflags = rflags; + svm_rflags_write(to_svm(vcpu), rflags); } static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg) @@ -1405,7 +1405,7 @@ static void svm_clear_vintr(struct vcpu_svm *svm) static struct vmcb_seg *svm_seg(struct kvm_vcpu *vcpu, int seg) { - struct vmcb_save_area *save = &to_svm(vcpu)->vmcb->save; + struct vmcb_save_area *save = get_vmsa(to_svm(vcpu)); switch (seg) { case VCPU_SREG_CS: return &save->cs; @@ -1492,32 +1492,30 @@ static void svm_get_segment(struct kvm_vcpu *vcpu, if (var->unusable) var->db = 0; /* This is symmetric with svm_set_segment() */ - var->dpl = to_svm(vcpu)->vmcb->save.cpl; + var->dpl = svm_cpl_read(to_svm(vcpu)); break; } } static int svm_get_cpl(struct kvm_vcpu *vcpu) { - struct vmcb_save_area *save = &to_svm(vcpu)->vmcb->save; - - return save->cpl; + return svm_cpl_read(to_svm(vcpu)); } static void svm_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt) { struct vcpu_svm *svm = to_svm(vcpu); - dt->size = svm->vmcb->save.idtr.limit; - dt->address = svm->vmcb->save.idtr.base; + dt->size = svm_idtr_read_limit(svm); + dt->address = svm_idtr_read_base(svm); } static void svm_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt) { struct vcpu_svm *svm = to_svm(vcpu); - svm->vmcb->save.idtr.limit = dt->size; - svm->vmcb->save.idtr.base = dt->address ; + svm_idtr_write_limit(svm, dt->size); + svm_idtr_write_base(svm, dt->address); vmcb_mark_dirty(svm->vmcb, VMCB_DT); } @@ -1525,30 +1523,31 @@ static void 
svm_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt) { struct vcpu_svm *svm = to_svm(vcpu); - dt->size = svm->vmcb->save.gdtr.limit; - dt->address = svm->vmcb->save.gdtr.base; + dt->size = svm_gdtr_read_limit(svm); + dt->address = svm_gdtr_read_base(svm); } static void svm_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt) { struct vcpu_svm *svm = to_svm(vcpu); - svm->vmcb->save.gdtr.limit = dt->size; - svm->vmcb->save.gdtr.base = dt->address ; + svm_gdtr_write_limit(svm, dt->size); + svm_gdtr_write_base(svm, dt->address); vmcb_mark_dirty(svm->vmcb, VMCB_DT); } static void update_cr0_intercept(struct vcpu_svm *svm) { ulong gcr0 = svm->vcpu.arch.cr0; - u64 *hcr0 = &svm->vmcb->save.cr0; + u64 hcr0; - *hcr0 = (*hcr0 & ~SVM_CR0_SELECTIVE_MASK) + hcr0 = (svm_cr0_read(svm) & ~SVM_CR0_SELECTIVE_MASK) | (gcr0 & SVM_CR0_SELECTIVE_MASK); + svm_cr0_write(svm, hcr0); vmcb_mark_dirty(svm->vmcb, VMCB_CR); - if (gcr0 == *hcr0) { + if (gcr0 == hcr0) { clr_cr_intercept(svm, INTERCEPT_CR0_READ); clr_cr_intercept(svm, INTERCEPT_CR0_WRITE); } else { @@ -1565,12 +1564,12 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0) if (vcpu->arch.efer & EFER_LME) { if (!is_paging(vcpu) && (cr0 & X86_CR0_PG)) { vcpu->arch.efer |= EFER_LMA; - svm->vmcb->save.efer |= EFER_LMA | EFER_LME; + svm_efer_or(svm, EFER_LMA | EFER_LME); } if (is_paging(vcpu) && !(cr0 & X86_CR0_PG)) { vcpu->arch.efer &= ~EFER_LMA; - svm->vmcb->save.efer &= ~(EFER_LMA | EFER_LME); + svm_efer_and(svm, ~(EFER_LMA | EFER_LME)); } } #endif @@ -1586,7 +1585,7 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0) */ if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED)) cr0 &= ~(X86_CR0_CD | X86_CR0_NW); - svm->vmcb->save.cr0 = cr0; + svm_cr0_write(svm, cr0); vmcb_mark_dirty(svm->vmcb, VMCB_CR); update_cr0_intercept(svm); } @@ -1594,7 +1593,7 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0) int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) { unsigned long host_cr4_mce = 
cr4_read_shadow() & X86_CR4_MCE; - unsigned long old_cr4 = to_svm(vcpu)->vmcb->save.cr4; + unsigned long old_cr4 = svm_cr4_read(to_svm(vcpu)); if (cr4 & X86_CR4_VMXE) return 1; @@ -1606,7 +1605,7 @@ int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) if (!npt_enabled) cr4 |= X86_CR4_PAE; cr4 |= host_cr4_mce; - to_svm(vcpu)->vmcb->save.cr4 = cr4; + svm_cr4_write(to_svm(vcpu), cr4); vmcb_mark_dirty(to_svm(vcpu)->vmcb, VMCB_CR); return 0; } @@ -1637,7 +1636,7 @@ static void svm_set_segment(struct kvm_vcpu *vcpu, */ if (seg == VCPU_SREG_SS) /* This is symmetric with svm_get_segment() */ - svm->vmcb->save.cpl = (var->dpl & 3); + svm_cpl_write(svm, (var->dpl & 3)); vmcb_mark_dirty(svm->vmcb, VMCB_SEG); } @@ -1672,8 +1671,8 @@ static void svm_set_dr6(struct vcpu_svm *svm, unsigned long value) { struct vmcb *vmcb = svm->vmcb; - if (unlikely(value != vmcb->save.dr6)) { - vmcb->save.dr6 = value; + if (unlikely(value != svm_dr6_read(svm))) { + svm_dr6_write(svm, value); vmcb_mark_dirty(vmcb, VMCB_DR); } } @@ -1690,8 +1689,8 @@ static void svm_sync_dirty_debug_regs(struct kvm_vcpu *vcpu) * We cannot reset svm->vmcb->save.dr6 to DR6_FIXED_1|DR6_RTM here, * because db_interception might need it. We can do it before vmentry. 
*/ - vcpu->arch.dr6 = svm->vmcb->save.dr6; - vcpu->arch.dr7 = svm->vmcb->save.dr7; + vcpu->arch.dr6 = svm_dr6_read(svm); + vcpu->arch.dr7 = svm_dr7_read(svm); vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_WONT_EXIT; set_dr_intercepts(svm); } @@ -1700,7 +1699,7 @@ static void svm_set_dr7(struct kvm_vcpu *vcpu, unsigned long value) { struct vcpu_svm *svm = to_svm(vcpu); - svm->vmcb->save.dr7 = value; + svm_dr7_write(svm, value); vmcb_mark_dirty(svm->vmcb, VMCB_DR); } @@ -1735,7 +1734,7 @@ static int db_interception(struct vcpu_svm *svm) if (!(svm->vcpu.guest_debug & (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)) && !svm->nmi_singlestep) { - u32 payload = (svm->vmcb->save.dr6 ^ DR6_RTM) & ~DR6_FIXED_1; + u32 payload = (svm_dr6_read(svm) ^ DR6_RTM) & ~DR6_FIXED_1; kvm_queue_exception_p(&svm->vcpu, DB_VECTOR, payload); return 1; } @@ -1749,10 +1748,10 @@ static int db_interception(struct vcpu_svm *svm) if (svm->vcpu.guest_debug & (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)) { kvm_run->exit_reason = KVM_EXIT_DEBUG; - kvm_run->debug.arch.dr6 = svm->vmcb->save.dr6; - kvm_run->debug.arch.dr7 = svm->vmcb->save.dr7; + kvm_run->debug.arch.dr6 = svm_dr6_read(svm); + kvm_run->debug.arch.dr7 = svm_dr7_read(svm); kvm_run->debug.arch.pc = - svm->vmcb->save.cs.base + svm->vmcb->save.rip; + svm_cs_read_base(svm) + svm_rip_read(svm); kvm_run->debug.arch.exception = DB_VECTOR; return 0; } @@ -1765,7 +1764,7 @@ static int bp_interception(struct vcpu_svm *svm) struct kvm_run *kvm_run = svm->vcpu.run; kvm_run->exit_reason = KVM_EXIT_DEBUG; - kvm_run->debug.arch.pc = svm->vmcb->save.cs.base + svm->vmcb->save.rip; + kvm_run->debug.arch.pc = svm_cs_read_base(svm) + svm_rip_read(svm); kvm_run->debug.arch.exception = BP_VECTOR; return 0; } @@ -1953,7 +1952,7 @@ static int vmload_interception(struct vcpu_svm *svm) if (nested_svm_check_permissions(svm)) return 1; - ret = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(svm->vmcb->save.rax), &map); + ret = kvm_vcpu_map(&svm->vcpu, 
gpa_to_gfn(svm_rax_read(svm)), &map); if (ret) { if (ret == -EINVAL) kvm_inject_gp(&svm->vcpu, 0); @@ -1964,7 +1963,7 @@ static int vmload_interception(struct vcpu_svm *svm) ret = kvm_skip_emulated_instruction(&svm->vcpu); - nested_svm_vmloadsave(nested_vmcb, svm->vmcb); + nested_svm_vmloadsave(&nested_vmcb->save, get_vmsa(svm)); kvm_vcpu_unmap(&svm->vcpu, &map, true); return ret; @@ -1979,7 +1978,7 @@ static int vmsave_interception(struct vcpu_svm *svm) if (nested_svm_check_permissions(svm)) return 1; - ret = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(svm->vmcb->save.rax), &map); + ret = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(svm_rax_read(svm)), &map); if (ret) { if (ret == -EINVAL) kvm_inject_gp(&svm->vcpu, 0); @@ -1990,7 +1989,7 @@ static int vmsave_interception(struct vcpu_svm *svm) ret = kvm_skip_emulated_instruction(&svm->vcpu); - nested_svm_vmloadsave(svm->vmcb, nested_vmcb); + nested_svm_vmloadsave(get_vmsa(svm), &nested_vmcb->save); kvm_vcpu_unmap(&svm->vcpu, &map, true); return ret; @@ -2064,7 +2063,7 @@ static int invlpga_interception(struct vcpu_svm *svm) { struct kvm_vcpu *vcpu = &svm->vcpu; - trace_kvm_invlpga(svm->vmcb->save.rip, kvm_rcx_read(&svm->vcpu), + trace_kvm_invlpga(svm_rip_read(svm), kvm_rcx_read(&svm->vcpu), kvm_rax_read(&svm->vcpu)); /* Let's treat INVLPGA the same as INVLPG (can be optimized!) 
*/ @@ -2075,7 +2074,7 @@ static int invlpga_interception(struct vcpu_svm *svm) static int skinit_interception(struct vcpu_svm *svm) { - trace_kvm_skinit(svm->vmcb->save.rip, kvm_rax_read(&svm->vcpu)); + trace_kvm_skinit(svm_rip_read(svm), kvm_rax_read(&svm->vcpu)); kvm_queue_exception(&svm->vcpu, UD_VECTOR); return 1; @@ -2387,24 +2386,24 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) switch (msr_info->index) { case MSR_STAR: - msr_info->data = svm->vmcb->save.star; + msr_info->data = svm_star_read(svm); break; #ifdef CONFIG_X86_64 case MSR_LSTAR: - msr_info->data = svm->vmcb->save.lstar; + msr_info->data = svm_lstar_read(svm); break; case MSR_CSTAR: - msr_info->data = svm->vmcb->save.cstar; + msr_info->data = svm_cstar_read(svm); break; case MSR_KERNEL_GS_BASE: - msr_info->data = svm->vmcb->save.kernel_gs_base; + msr_info->data = svm_kernel_gs_base_read(svm); break; case MSR_SYSCALL_MASK: - msr_info->data = svm->vmcb->save.sfmask; + msr_info->data = svm_sfmask_read(svm); break; #endif case MSR_IA32_SYSENTER_CS: - msr_info->data = svm->vmcb->save.sysenter_cs; + msr_info->data = svm_sysenter_cs_read(svm); break; case MSR_IA32_SYSENTER_EIP: msr_info->data = svm->sysenter_eip; @@ -2423,19 +2422,19 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) * implemented. 
*/ case MSR_IA32_DEBUGCTLMSR: - msr_info->data = svm->vmcb->save.dbgctl; + msr_info->data = svm_dbgctl_read(svm); break; case MSR_IA32_LASTBRANCHFROMIP: - msr_info->data = svm->vmcb->save.br_from; + msr_info->data = svm_br_from_read(svm); break; case MSR_IA32_LASTBRANCHTOIP: - msr_info->data = svm->vmcb->save.br_to; + msr_info->data = svm_br_to_read(svm); break; case MSR_IA32_LASTINTFROMIP: - msr_info->data = svm->vmcb->save.last_excp_from; + msr_info->data = svm_last_excp_from_read(svm); break; case MSR_IA32_LASTINTTOIP: - msr_info->data = svm->vmcb->save.last_excp_to; + msr_info->data = svm_last_excp_to_read(svm); break; case MSR_VM_HSAVE_PA: msr_info->data = svm->nested.hsave_msr; @@ -2527,7 +2526,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data)) return 1; vcpu->arch.pat = data; - svm->vmcb->save.g_pat = data; + svm_g_pat_write(svm, data); vmcb_mark_dirty(svm->vmcb, VMCB_NPT); break; case MSR_IA32_SPEC_CTRL: @@ -2584,32 +2583,32 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) svm->virt_spec_ctrl = data; break; case MSR_STAR: - svm->vmcb->save.star = data; + svm_star_write(svm, data); break; #ifdef CONFIG_X86_64 case MSR_LSTAR: - svm->vmcb->save.lstar = data; + svm_lstar_write(svm, data); break; case MSR_CSTAR: - svm->vmcb->save.cstar = data; + svm_cstar_write(svm, data); break; case MSR_KERNEL_GS_BASE: - svm->vmcb->save.kernel_gs_base = data; + svm_kernel_gs_base_write(svm, data); break; case MSR_SYSCALL_MASK: - svm->vmcb->save.sfmask = data; + svm_sfmask_write(svm, data); break; #endif case MSR_IA32_SYSENTER_CS: - svm->vmcb->save.sysenter_cs = data; + svm_sysenter_cs_write(svm, data); break; case MSR_IA32_SYSENTER_EIP: svm->sysenter_eip = data; - svm->vmcb->save.sysenter_eip = data; + svm_sysenter_eip_write(svm, data); break; case MSR_IA32_SYSENTER_ESP: svm->sysenter_esp = data; - svm->vmcb->save.sysenter_esp = data; + svm_sysenter_esp_write(svm, data); break; case 
MSR_TSC_AUX: if (!boot_cpu_has(X86_FEATURE_RDTSCP)) @@ -2632,7 +2631,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) if (data & DEBUGCTL_RESERVED_BITS) return 1; - svm->vmcb->save.dbgctl = data; + svm_dbgctl_write(svm, data); vmcb_mark_dirty(svm->vmcb, VMCB_LBR); if (data & (1ULL<<0)) svm_enable_lbrv(svm); @@ -2805,7 +2804,7 @@ static void dump_vmcb(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); struct vmcb_control_area *control = &svm->vmcb->control; - struct vmcb_save_area *save = &svm->vmcb->save; + struct vmcb_save_area *save = get_vmsa(svm); if (!dump_invalid_vmcb) { pr_warn_ratelimited("set kvm_amd.dump_invalid_vmcb=1 to dump internal KVM state.\n"); @@ -2934,16 +2933,16 @@ static int handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath) trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM); if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE)) - vcpu->arch.cr0 = svm->vmcb->save.cr0; + vcpu->arch.cr0 = svm_cr0_read(svm); if (npt_enabled) - vcpu->arch.cr3 = svm->vmcb->save.cr3; + vcpu->arch.cr3 = svm_cr3_read(svm); svm_complete_interrupts(svm); if (is_guest_mode(vcpu)) { int vmexit; - trace_kvm_nested_vmexit(svm->vmcb->save.rip, exit_code, + trace_kvm_nested_vmexit(svm_rip_read(svm), exit_code, svm->vmcb->control.exit_info_1, svm->vmcb->control.exit_info_2, svm->vmcb->control.exit_int_info, @@ -3204,7 +3203,7 @@ static void enable_nmi_window(struct kvm_vcpu *vcpu) */ svm->nmi_singlestep_guest_rflags = svm_get_rflags(vcpu); svm->nmi_singlestep = true; - svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF); + svm_rflags_or(svm, (X86_EFLAGS_TF | X86_EFLAGS_RF)); } static int svm_set_tss_addr(struct kvm *kvm, unsigned int addr) @@ -3418,9 +3417,9 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu) fastpath_t exit_fastpath; struct vcpu_svm *svm = to_svm(vcpu); - svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX]; - svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP]; - svm->vmcb->save.rip = 
vcpu->arch.regs[VCPU_REGS_RIP]; + svm_rax_write(svm, vcpu->arch.regs[VCPU_REGS_RAX]); + svm_rsp_write(svm, vcpu->arch.regs[VCPU_REGS_RSP]); + svm_rip_write(svm, vcpu->arch.regs[VCPU_REGS_RIP]); /* * Disable singlestep if we're injecting an interrupt/exception. @@ -3442,7 +3441,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu) sync_lapic_to_cr8(vcpu); - svm->vmcb->save.cr2 = vcpu->arch.cr2; + svm_cr2_write(svm, vcpu->arch.cr2); /* * Run with all-zero DR6 unless needed, so that we can get the exact cause @@ -3492,10 +3491,10 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu) x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl); - vcpu->arch.cr2 = svm->vmcb->save.cr2; - vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax; - vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp; - vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip; + vcpu->arch.cr2 = svm_cr2_read(svm); + vcpu->arch.regs[VCPU_REGS_RAX] = svm_rax_read(svm); + vcpu->arch.regs[VCPU_REGS_RSP] = svm_rsp_read(svm); + vcpu->arch.regs[VCPU_REGS_RIP] = svm_rip_read(svm); if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI)) kvm_before_interrupt(&svm->vcpu); @@ -3558,7 +3557,7 @@ static void svm_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long root, cr3 = vcpu->arch.cr3; } - svm->vmcb->save.cr3 = cr3; + svm_cr3_write(svm, cr3); vmcb_mark_dirty(svm->vmcb, VMCB_CR); } @@ -3886,9 +3885,9 @@ static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate) /* FEE0h - SVM Guest VMCB Physical Address */ put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb); - svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX]; - svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP]; - svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP]; + svm_rax_write(svm, vcpu->arch.regs[VCPU_REGS_RAX]); + svm_rsp_write(svm, vcpu->arch.regs[VCPU_REGS_RSP]); + svm_rip_write(svm, vcpu->arch.regs[VCPU_REGS_RIP]); ret = nested_svm_vmexit(svm); if (ret) diff --git a/arch/x86/kvm/svm/svm.h 
b/arch/x86/kvm/svm/svm.h
index 2692ddf30c8d..f42ba9d158df 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -395,7 +395,8 @@ int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 			 struct vmcb *nested_vmcb);
 void svm_leave_nested(struct vcpu_svm *svm);
 int nested_svm_vmrun(struct vcpu_svm *svm);
-void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb);
+void nested_svm_vmloadsave(struct vmcb_save_area *from_vmsa,
+			   struct vmcb_save_area *to_vmsa);
 int nested_svm_vmexit(struct vcpu_svm *svm);
 int nested_svm_exit_handled(struct vcpu_svm *svm);
 int nested_svm_check_permissions(struct vcpu_svm *svm);
@@ -504,4 +505,130 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu);
 void __init sev_hardware_setup(void);
 void sev_hardware_teardown(void);
 
+/* VMSA Accessor functions */
+
+static inline struct vmcb_save_area *get_vmsa(struct vcpu_svm *svm)
+{
+	return &svm->vmcb->save;
+}
+
+#define DEFINE_VMSA_SEGMENT_ENTRY(_field, _entry, _size)		\
+	static inline _size						\
+	svm_##_field##_read_##_entry(struct vcpu_svm *svm)		\
+	{								\
+		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
+									\
+		return vmsa->_field._entry;				\
+	}								\
+									\
+	static inline void						\
+	svm_##_field##_write_##_entry(struct vcpu_svm *svm,		\
+				      _size value)			\
+	{								\
+		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
+									\
+		vmsa->_field._entry = value;				\
+	}								\
+
+#define DEFINE_VMSA_SEGMENT_ACCESSOR(_field)				\
+	DEFINE_VMSA_SEGMENT_ENTRY(_field, selector, u16)		\
+	DEFINE_VMSA_SEGMENT_ENTRY(_field, attrib, u16)			\
+	DEFINE_VMSA_SEGMENT_ENTRY(_field, limit, u32)			\
+	DEFINE_VMSA_SEGMENT_ENTRY(_field, base, u64)			\
+									\
+	static inline struct vmcb_seg *					\
+	svm_##_field##_read(struct vcpu_svm *svm)			\
+	{								\
+		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
+									\
+		return &vmsa->_field;					\
+	}								\
+									\
+	static inline void						\
+	svm_##_field##_write(struct vcpu_svm *svm,			\
+			     struct vmcb_seg *seg)			\
+	{								\
+		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
+									\
+		vmsa->_field = *seg;					\
+	}
+
+DEFINE_VMSA_SEGMENT_ACCESSOR(cs)
+DEFINE_VMSA_SEGMENT_ACCESSOR(ds)
+DEFINE_VMSA_SEGMENT_ACCESSOR(es)
+DEFINE_VMSA_SEGMENT_ACCESSOR(fs)
+DEFINE_VMSA_SEGMENT_ACCESSOR(gs)
+DEFINE_VMSA_SEGMENT_ACCESSOR(ss)
+DEFINE_VMSA_SEGMENT_ACCESSOR(gdtr)
+DEFINE_VMSA_SEGMENT_ACCESSOR(idtr)
+DEFINE_VMSA_SEGMENT_ACCESSOR(ldtr)
+DEFINE_VMSA_SEGMENT_ACCESSOR(tr)
+
+#define DEFINE_VMSA_SIZE_ACCESSOR(_field, _size)			\
+	static inline _size						\
+	svm_##_field##_read(struct vcpu_svm *svm)			\
+	{								\
+		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
+									\
+		return vmsa->_field;					\
+	}								\
+									\
+	static inline void						\
+	svm_##_field##_write(struct vcpu_svm *svm, _size value)		\
+	{								\
+		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
+									\
+		vmsa->_field = value;					\
+	}								\
+									\
+	static inline void						\
+	svm_##_field##_and(struct vcpu_svm *svm, _size value)		\
+	{								\
+		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
+									\
+		vmsa->_field &= value;					\
+	}								\
+									\
+	static inline void						\
+	svm_##_field##_or(struct vcpu_svm *svm, _size value)		\
+	{								\
+		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
+									\
+		vmsa->_field |= value;					\
+	}
+
+#define DEFINE_VMSA_ACCESSOR(_field)					\
+	DEFINE_VMSA_SIZE_ACCESSOR(_field, u64)
+
+#define DEFINE_VMSA_U8_ACCESSOR(_field)					\
+	DEFINE_VMSA_SIZE_ACCESSOR(_field, u8)
+
+DEFINE_VMSA_ACCESSOR(efer)
+DEFINE_VMSA_ACCESSOR(cr0)
+DEFINE_VMSA_ACCESSOR(cr2)
+DEFINE_VMSA_ACCESSOR(cr3)
+DEFINE_VMSA_ACCESSOR(cr4)
+DEFINE_VMSA_ACCESSOR(dr6)
+DEFINE_VMSA_ACCESSOR(dr7)
+DEFINE_VMSA_ACCESSOR(rflags)
+DEFINE_VMSA_ACCESSOR(star)
+DEFINE_VMSA_ACCESSOR(lstar)
+DEFINE_VMSA_ACCESSOR(cstar)
+DEFINE_VMSA_ACCESSOR(sfmask)
+DEFINE_VMSA_ACCESSOR(kernel_gs_base)
+DEFINE_VMSA_ACCESSOR(sysenter_cs)
+DEFINE_VMSA_ACCESSOR(sysenter_esp)
+DEFINE_VMSA_ACCESSOR(sysenter_eip)
+DEFINE_VMSA_ACCESSOR(g_pat)
+DEFINE_VMSA_ACCESSOR(dbgctl)
+DEFINE_VMSA_ACCESSOR(br_from)
+DEFINE_VMSA_ACCESSOR(br_to)
+DEFINE_VMSA_ACCESSOR(last_excp_from)
+DEFINE_VMSA_ACCESSOR(last_excp_to)
+
+DEFINE_VMSA_U8_ACCESSOR(cpl)
+DEFINE_VMSA_ACCESSOR(rip)
+DEFINE_VMSA_ACCESSOR(rax)
+DEFINE_VMSA_ACCESSOR(rsp)
+
 #endif
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 04/35] KVM: SVM: Make GHCB accessor functions available to the hypervisor
Date: Mon, 14 Sep 2020 15:15:18 -0500
Message-Id: <9776d4e2d20dd3580cfe070b60977ebf0707b5f4.1600114548.git.thomas.lendacky@amd.com>

From: Tom Lendacky

Update the GHCB accessor functions so that some of the macros can be
used by KVM when accessing the GHCB via the VMSA accessors. This will
avoid duplicating code and make access to the GHCB somewhat
transparent.
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/svm.h   | 15 +++++++++++++--
 arch/x86/kernel/cpu/vmware.c | 12 ++++++------
 2 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index da38eb195355..c112207c201b 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -349,15 +349,26 @@ struct vmcb {
 #define DEFINE_GHCB_ACCESSORS(field)					\
 	static inline bool ghcb_##field##_is_valid(const struct ghcb *ghcb) \
 	{								\
+		const struct vmcb_save_area *vmsa = &ghcb->save;	\
+									\
 		return test_bit(GHCB_BITMAP_IDX(field),			\
-				(unsigned long *)&ghcb->save.valid_bitmap); \
+				(unsigned long *)vmsa->valid_bitmap);	\
+	}								\
+									\
+	static inline u64 ghcb_get_##field(struct ghcb *ghcb)		\
+	{								\
+		const struct vmcb_save_area *vmsa = &ghcb->save;	\
+									\
+		return vmsa->field;					\
 	}								\
 									\
 	static inline void ghcb_set_##field(struct ghcb *ghcb, u64 value) \
 	{								\
+		struct vmcb_save_area *vmsa = &ghcb->save;		\
+									\
 		__set_bit(GHCB_BITMAP_IDX(field),			\
 			  (unsigned long *)&ghcb->save.valid_bitmap);	\
-		ghcb->save.field = value;				\
+		vmsa->field = value;					\
 	}

 DEFINE_GHCB_ACCESSORS(cpl)

diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
index 924571fe5864..c6ede3b3d302 100644
--- a/arch/x86/kernel/cpu/vmware.c
+++ b/arch/x86/kernel/cpu/vmware.c
@@ -501,12 +501,12 @@ static bool vmware_sev_es_hcall_finish(struct ghcb *ghcb, struct pt_regs *regs)
 		      ghcb_rbp_is_valid(ghcb)))
 		return false;

-	regs->bx = ghcb->save.rbx;
-	regs->cx = ghcb->save.rcx;
-	regs->dx = ghcb->save.rdx;
-	regs->si = ghcb->save.rsi;
-	regs->di = ghcb->save.rdi;
-	regs->bp = ghcb->save.rbp;
+	regs->bx = ghcb_get_rbx(ghcb);
+	regs->cx = ghcb_get_rcx(ghcb);
+	regs->dx = ghcb_get_rdx(ghcb);
+	regs->si = ghcb_get_rsi(ghcb);
+	regs->di = ghcb_get_rdi(ghcb);
+	regs->bp = ghcb_get_rbp(ghcb);

 	return true;
 }

From patchwork Mon Sep 14 20:15:19 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774867
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar,
    Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 05/35] KVM: SVM: Add initial support for SEV-ES GHCB access to KVM
Date: Mon, 14 Sep 2020 15:15:19 -0500
Message-Id: <9e52807342691ff0d4b116af6e147021c61a2d71.1600114548.git.thomas.lendacky@amd.com>

Provide initial support for accessing the GHCB when needing to access
registers for an SEV-ES guest. The support consists of:
  - Accessing the GHCB instead of the VMSA when reading and writing
    guest registers (after the VMSA has been encrypted).
  - Creating register access override functions for reading and writing
    guest registers from the common KVM support.
  - Allocating pages for the VMSA and GHCB when creating each vCPU:
    - The VMSA page holds the encrypted VMSA for the vCPU.
    - The GHCB page is used to hold a copy of the guest GHCB during
      VMGEXIT processing.

Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/kvm_host.h  |   7 ++
 arch/x86/include/asm/msr-index.h |   1 +
 arch/x86/kvm/kvm_cache_regs.h    |  30 +++++--
 arch/x86/kvm/svm/svm.c           | 138 ++++++++++++++++++++++++++++++-
 arch/x86/kvm/svm/svm.h           |  65 ++++++++++++++-
 5 files changed, 230 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5303dbc5c9bc..c900992701d6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -788,6 +788,9 @@ struct kvm_vcpu_arch {

 	/* AMD MSRC001_0015 Hardware Configuration */
 	u64 msr_hwcr;
+
+	/* SEV-ES support */
+	bool vmsa_encrypted;
 };

 struct kvm_lpage_info {
@@ -1227,6 +1230,10 @@ struct kvm_x86_ops {
 	int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);

 	void (*migrate_timers)(struct kvm_vcpu *vcpu);
+
+	void (*reg_read_override)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
+	void (*reg_write_override)(struct kvm_vcpu *vcpu, enum kvm_reg reg,
+				   unsigned long val);
 };

 struct kvm_x86_nested_ops {

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 249a4147c4b2..16f5b20bb099 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -466,6 +466,7 @@
 #define MSR_AMD64_IBSBRTARGET		0xc001103b
 #define MSR_AMD64_IBSOPDATA4		0xc001103d
 #define MSR_AMD64_IBS_REG_COUNT_MAX	8 /* includes MSR_AMD64_IBSBRTARGET */
+#define MSR_AMD64_VM_PAGE_FLUSH	0xc001011e
 #define MSR_AMD64_SEV_ES_GHCB		0xc0010130
 #define MSR_AMD64_SEV			0xc0010131
 #define MSR_AMD64_SEV_ENABLED_BIT	0

diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index cfe83d4ae625..e87eb90999d5 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -9,15 +9,21 @@
 	(X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR \
	 | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_PGE | X86_CR4_TSD)

-#define BUILD_KVM_GPR_ACCESSORS(lname, uname)				\
-static __always_inline unsigned long kvm_##lname##_read(struct kvm_vcpu *vcpu)\
-{									\
-	return vcpu->arch.regs[VCPU_REGS_##uname];			\
-}									\
-static __always_inline void kvm_##lname##_write(struct kvm_vcpu *vcpu,	\
-						unsigned long val)	\
-{									\
-	vcpu->arch.regs[VCPU_REGS_##uname] = val;			\
+#define BUILD_KVM_GPR_ACCESSORS(lname, uname)				\
+static __always_inline unsigned long kvm_##lname##_read(struct kvm_vcpu *vcpu) \
+{									\
+	if (kvm_x86_ops.reg_read_override)				\
+		kvm_x86_ops.reg_read_override(vcpu, VCPU_REGS_##uname);	\
+									\
+	return vcpu->arch.regs[VCPU_REGS_##uname];			\
+}									\
+static __always_inline void kvm_##lname##_write(struct kvm_vcpu *vcpu,	\
+						unsigned long val)	\
+{									\
+	if (kvm_x86_ops.reg_write_override)				\
+		kvm_x86_ops.reg_write_override(vcpu, VCPU_REGS_##uname, val); \
+									\
+	vcpu->arch.regs[VCPU_REGS_##uname] = val;			\
 }
 BUILD_KVM_GPR_ACCESSORS(rax, RAX)
 BUILD_KVM_GPR_ACCESSORS(rbx, RBX)
@@ -67,6 +73,9 @@ static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu, int reg)
 	if (WARN_ON_ONCE((unsigned int)reg >= NR_VCPU_REGS))
 		return 0;

+	if (kvm_x86_ops.reg_read_override)
+		kvm_x86_ops.reg_read_override(vcpu, reg);
+
 	if (!kvm_register_is_available(vcpu, reg))
 		kvm_x86_ops.cache_reg(vcpu, reg);

@@ -79,6 +88,9 @@ static inline void kvm_register_write(struct kvm_vcpu *vcpu, int reg,
 	if (WARN_ON_ONCE((unsigned int)reg >= NR_VCPU_REGS))
 		return;

+	if (kvm_x86_ops.reg_write_override)
+		kvm_x86_ops.reg_write_override(vcpu, reg, val);
+
 	vcpu->arch.regs[reg] = val;
 	kvm_register_mark_dirty(vcpu, reg);
 }

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 779c167e42cc..d1f52211627a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1175,6 +1175,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	struct page *msrpm_pages;
 	struct page *hsave_page;
 	struct page *nested_msrpm_pages;
+	struct page *vmsa_page = NULL;
 	int err;

 	BUILD_BUG_ON(offsetof(struct vcpu_svm, vcpu) != 0);
@@ -1197,9 +1198,19 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	if (!hsave_page)
 		goto free_page3;

+	if (sev_es_guest(svm->vcpu.kvm)) {
+		/*
+		 * SEV-ES guests require a separate VMSA page used to contain
+		 * the encrypted register state of the guest.
+		 */
+		vmsa_page = alloc_page(GFP_KERNEL);
+		if (!vmsa_page)
+			goto free_page4;
+	}
+
 	err = avic_init_vcpu(svm);
 	if (err)
-		goto free_page4;
+		goto free_page5;

 	/* We initialize this flag to true to make sure that the is_running
 	 * bit would be set the first time the vcpu is loaded.
@@ -1219,6 +1230,12 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	svm->vmcb = page_address(page);
 	clear_page(svm->vmcb);
 	svm->vmcb_pa = __sme_set(page_to_pfn(page) << PAGE_SHIFT);
+
+	if (vmsa_page) {
+		svm->vmsa = page_address(vmsa_page);
+		clear_page(svm->vmsa);
+	}
+
 	svm->asid_generation = 0;
 	init_vmcb(svm);

@@ -1227,6 +1244,9 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)

 	return 0;

+free_page5:
+	if (vmsa_page)
+		__free_page(vmsa_page);
 free_page4:
 	__free_page(hsave_page);
 free_page3:
@@ -1258,6 +1278,26 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	 */
 	svm_clear_current_vmcb(svm->vmcb);

+	if (sev_es_guest(vcpu->kvm)) {
+		struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
+
+		if (vcpu->arch.vmsa_encrypted) {
+			u64 page_to_flush;
+
+			/*
+			 * The VMSA page was used by hardware to hold guest
+			 * encrypted state, be sure to flush it before returning
+			 * it to the system. This is done using the VM Page
+			 * Flush MSR (which takes the page virtual address and
+			 * guest ASID).
+			 */
+			page_to_flush = (u64)svm->vmsa | sev->asid;
+			wrmsrl(MSR_AMD64_VM_PAGE_FLUSH, page_to_flush);
+		}
+
+		__free_page(virt_to_page(svm->vmsa));
+	}
+
 	__free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT));
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
 	__free_page(virt_to_page(svm->nested.hsave));
@@ -4012,6 +4052,99 @@ static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
 		(svm->vmcb->control.intercept & (1ULL << INTERCEPT_INIT));
 }

+/*
+ * These return values represent the offset in quad words within the VM save
+ * area. This allows them to be accessed by casting the save area to a u64
+ * array.
+ */
+#define VMSA_REG_ENTRY(_field)	(offsetof(struct vmcb_save_area, _field) / sizeof(u64))
+#define VMSA_REG_UNDEF		VMSA_REG_ENTRY(valid_bitmap)
+static inline unsigned int vcpu_to_vmsa_entry(enum kvm_reg reg)
+{
+	switch (reg) {
+	case VCPU_REGS_RAX:	return VMSA_REG_ENTRY(rax);
+	case VCPU_REGS_RBX:	return VMSA_REG_ENTRY(rbx);
+	case VCPU_REGS_RCX:	return VMSA_REG_ENTRY(rcx);
+	case VCPU_REGS_RDX:	return VMSA_REG_ENTRY(rdx);
+	case VCPU_REGS_RSP:	return VMSA_REG_ENTRY(rsp);
+	case VCPU_REGS_RBP:	return VMSA_REG_ENTRY(rbp);
+	case VCPU_REGS_RSI:	return VMSA_REG_ENTRY(rsi);
+	case VCPU_REGS_RDI:	return VMSA_REG_ENTRY(rdi);
+#ifdef CONFIG_X86_64
+	case VCPU_REGS_R8:	return VMSA_REG_ENTRY(r8);
+	case VCPU_REGS_R9:	return VMSA_REG_ENTRY(r9);
+	case VCPU_REGS_R10:	return VMSA_REG_ENTRY(r10);
+	case VCPU_REGS_R11:	return VMSA_REG_ENTRY(r11);
+	case VCPU_REGS_R12:	return VMSA_REG_ENTRY(r12);
+	case VCPU_REGS_R13:	return VMSA_REG_ENTRY(r13);
+	case VCPU_REGS_R14:	return VMSA_REG_ENTRY(r14);
+	case VCPU_REGS_R15:	return VMSA_REG_ENTRY(r15);
+#endif
+	case VCPU_REGS_RIP:	return VMSA_REG_ENTRY(rip);
+	default:
+		WARN_ONCE(1, "unsupported VCPU to VMSA register conversion\n");
+		return VMSA_REG_UNDEF;
+	}
+}
+
+/* For SEV-ES guests, populate the vCPU register from the appropriate VMSA/GHCB */
+static void svm_reg_read_override(struct kvm_vcpu *vcpu, enum kvm_reg reg)
+{
+	struct vmcb_save_area *vmsa;
+	struct vcpu_svm *svm;
+	unsigned int entry;
+	unsigned long val;
+	u64 *vmsa_reg;
+
+	if (!sev_es_guest(vcpu->kvm))
+		return;
+
+	entry = vcpu_to_vmsa_entry(reg);
+	if (entry == VMSA_REG_UNDEF)
+		return;
+
+	svm = to_svm(vcpu);
+	vmsa = get_vmsa(svm);
+	vmsa_reg = (u64 *)vmsa;
+	val = (unsigned long)vmsa_reg[entry];
+
+	/* If a GHCB is mapped, check the bitmap of valid entries */
+	if (svm->ghcb) {
+		if (!test_bit(entry, (unsigned long *)vmsa->valid_bitmap))
+			val = 0;
+	}
+
+	vcpu->arch.regs[reg] = val;
+}
+
+/* For SEV-ES guests, set the vCPU register in the appropriate VMSA */
+static void svm_reg_write_override(struct kvm_vcpu *vcpu, enum kvm_reg reg,
+				   unsigned long val)
+{
+	struct vmcb_save_area *vmsa;
+	struct vcpu_svm *svm;
+	unsigned int entry;
+	u64 *vmsa_reg;
+
+	entry = vcpu_to_vmsa_entry(reg);
+	if (entry == VMSA_REG_UNDEF)
+		return;
+
+	svm = to_svm(vcpu);
+	vmsa = get_vmsa(svm);
+	vmsa_reg = (u64 *)vmsa;
+
+	/* If a GHCB is mapped, set the bit to indicate a valid entry */
+	if (svm->ghcb) {
+		unsigned int index = entry / 8;
+		unsigned int shift = entry % 8;
+
+		vmsa->valid_bitmap[index] |= BIT(shift);
+	}
+
+	vmsa_reg[entry] = val;
+}
+
 static void svm_vm_destroy(struct kvm *kvm)
 {
 	avic_vm_destroy(kvm);
@@ -4150,6 +4283,9 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.need_emulation_on_page_fault = svm_need_emulation_on_page_fault,

 	.apic_init_signal_blocked = svm_apic_init_signal_blocked,
+
+	.reg_read_override = svm_reg_read_override,
+	.reg_write_override = svm_reg_write_override,
 };

 static struct kvm_x86_init_ops svm_init_ops __initdata = {

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index f42ba9d158df..ff587536f571 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -159,6 +159,10 @@ struct vcpu_svm {
 	 */
 	struct list_head ir_list;
 	spinlock_t ir_list_lock;
+
+	/* SEV-ES support */
+	struct vmcb_save_area *vmsa;
+	struct ghcb *ghcb;
 };

 struct svm_cpu_data {
@@ -509,9 +513,34 @@ void sev_hardware_teardown(void);

 static inline struct vmcb_save_area *get_vmsa(struct vcpu_svm *svm)
 {
-	return &svm->vmcb->save;
+	struct vmcb_save_area *vmsa;
+
+	if (sev_es_guest(svm->vcpu.kvm)) {
+		/*
+		 * Before LAUNCH_UPDATE_VMSA, use the actual SEV-ES save area
+		 * to construct the initial state. Afterwards, use the mapped
+		 * GHCB in a VMGEXIT or the traditional save area as a scratch
+		 * area when outside of a VMGEXIT.
+		 */
+		if (svm->vcpu.arch.vmsa_encrypted) {
+			if (svm->ghcb)
+				vmsa = &svm->ghcb->save;
+			else
+				vmsa = &svm->vmcb->save;
+		} else {
+			vmsa = svm->vmsa;
+		}
+	} else {
+		vmsa = &svm->vmcb->save;
+	}
+
+	return vmsa;
 }

+#define SEV_ES_SET_VALID(_vmsa, _field)					\
+	__set_bit(GHCB_BITMAP_IDX(_field),				\
+		  (unsigned long *)(_vmsa)->valid_bitmap)
+
 #define DEFINE_VMSA_SEGMENT_ENTRY(_field, _entry, _size)		\
 	static inline _size						\
 	svm_##_field##_read_##_entry(struct vcpu_svm *svm)		\
@@ -528,6 +557,9 @@ static inline struct vmcb_save_area *get_vmsa(struct vcpu_svm *svm)
 		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
 									\
 		vmsa->_field._entry = value;				\
+		if (svm->vcpu.arch.vmsa_encrypted) {			\
+			SEV_ES_SET_VALID(vmsa, _field);			\
+		}							\
 	}								\

 #define DEFINE_VMSA_SEGMENT_ACCESSOR(_field)				\
@@ -551,6 +583,9 @@ static inline struct vmcb_save_area *get_vmsa(struct vcpu_svm *svm)
 		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
 									\
 		vmsa->_field = *seg;					\
+		if (svm->vcpu.arch.vmsa_encrypted) {			\
+			SEV_ES_SET_VALID(vmsa, _field);			\
+		}							\
 	}

 DEFINE_VMSA_SEGMENT_ACCESSOR(cs)
@@ -579,6 +614,9 @@ DEFINE_VMSA_SEGMENT_ACCESSOR(tr)
 		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
 									\
 		vmsa->_field = value;					\
+		if (svm->vcpu.arch.vmsa_encrypted) {			\
+			SEV_ES_SET_VALID(vmsa, _field);			\
+		}							\
 	}								\
 									\
 	static inline void						\
@@ -587,6 +625,9 @@ DEFINE_VMSA_SEGMENT_ACCESSOR(tr)
 		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
 									\
 		vmsa->_field &= value;					\
+		if (svm->vcpu.arch.vmsa_encrypted) {			\
+			SEV_ES_SET_VALID(vmsa, _field);			\
+		}							\
 	}								\
 									\
 	static inline void						\
@@ -595,6 +636,9 @@ DEFINE_VMSA_SEGMENT_ACCESSOR(tr)
 		struct vmcb_save_area *vmsa = get_vmsa(svm);		\
 									\
 		vmsa->_field |= value;					\
+		if (svm->vcpu.arch.vmsa_encrypted) {			\
+			SEV_ES_SET_VALID(vmsa, _field);			\
+		}							\
 	}

 #define DEFINE_VMSA_ACCESSOR(_field)					\
@@ -629,6 +673,25 @@ DEFINE_VMSA_ACCESSOR(last_excp_to)
 DEFINE_VMSA_U8_ACCESSOR(cpl)
 DEFINE_VMSA_ACCESSOR(rip)
 DEFINE_VMSA_ACCESSOR(rax)
+DEFINE_VMSA_ACCESSOR(rbx)
+DEFINE_VMSA_ACCESSOR(rcx)
+DEFINE_VMSA_ACCESSOR(rdx)
 DEFINE_VMSA_ACCESSOR(rsp)
+DEFINE_VMSA_ACCESSOR(rbp)
+DEFINE_VMSA_ACCESSOR(rsi)
+DEFINE_VMSA_ACCESSOR(rdi)
+DEFINE_VMSA_ACCESSOR(r8)
+DEFINE_VMSA_ACCESSOR(r9)
+DEFINE_VMSA_ACCESSOR(r10)
+DEFINE_VMSA_ACCESSOR(r11)
+DEFINE_VMSA_ACCESSOR(r12)
+DEFINE_VMSA_ACCESSOR(r13)
+DEFINE_VMSA_ACCESSOR(r14)
+DEFINE_VMSA_ACCESSOR(r15)
+DEFINE_VMSA_ACCESSOR(sw_exit_code)
+DEFINE_VMSA_ACCESSOR(sw_exit_info_1)
+DEFINE_VMSA_ACCESSOR(sw_exit_info_2)
+DEFINE_VMSA_ACCESSOR(sw_scratch)
+DEFINE_VMSA_ACCESSOR(xcr0)

 #endif

From patchwork Mon Sep 14 20:15:20 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774775
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar,
    Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 06/35] KVM: SVM: Add required changes to support intercepts under SEV-ES
Date: Mon, 14 Sep 2020 15:15:20 -0500
Message-Id: <16838d177e7f12eb4666bb55e14763970aa6552a.1600114548.git.thomas.lendacky@amd.com>
From: Tom Lendacky

When a guest is running under SEV-ES, the hypervisor cannot access the
guest register state. There are numerous places in the KVM code where
certain registers are accessed that are not allowed to be accessed
(e.g. RIP, CR0, etc.). Add checks to prevent register accesses and
intercept updates at various points within the KVM code.

Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/svm.h |   3 +-
 arch/x86/kvm/cpuid.c       |   1 +
 arch/x86/kvm/svm/svm.c     | 114 ++++++++++++++++++++++++++++++++++---
 arch/x86/kvm/x86.c         |   6 +-
 4 files changed, 113 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index c112207c201b..ed03d23f56fe 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -130,7 +130,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define LBR_CTL_ENABLE_MASK BIT_ULL(0)
 #define VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK BIT_ULL(1)

-#define SVM_INTERRUPT_SHADOW_MASK 1
+#define SVM_INTERRUPT_SHADOW_MASK	BIT_ULL(0)
+#define SVM_GUEST_INTERRUPT_MASK	BIT_ULL(1)

 #define SVM_IOIO_STR_SHIFT 2
 #define SVM_IOIO_REP_SHIFT 3

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 3fd6eec202d7..15f2b2365936 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -115,6 +115,7 @@ void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
 			MSR_IA32_MISC_ENABLE_MWAIT);
 	}
 }
+EXPORT_SYMBOL_GPL(kvm_update_cpuid_runtime);

 static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 {

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d1f52211627a..f8a5b7164008 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -36,6 +36,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "trace.h"

@@ -320,6 +321,13 @@ static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);

+	/*
+	 * SEV-ES does not expose the next RIP. The RIP update is controlled by
+	 * the type of exit and the #VC handler in the guest.
+	 */
+	if (sev_es_guest(vcpu->kvm))
+		goto done;
+
 	if (nrips && svm->vmcb->control.next_rip != 0) {
 		WARN_ON_ONCE(!static_cpu_has(X86_FEATURE_NRIPS));
 		svm->next_rip = svm->vmcb->control.next_rip;
@@ -331,6 +339,8 @@ static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
 	} else {
 		kvm_rip_write(vcpu, svm->next_rip);
 	}
+
+done:
 	svm_set_interrupt_shadow(vcpu, 0);

 	return 1;
@@ -1578,9 +1588,17 @@ static void svm_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)

 static void update_cr0_intercept(struct vcpu_svm *svm)
 {
-	ulong gcr0 = svm->vcpu.arch.cr0;
+	ulong gcr0;
 	u64 hcr0;

+	/*
+	 * SEV-ES guests must always keep the CR intercepts cleared. CR
+	 * tracking is done using the CR write traps.
+	 */
+	if (sev_es_guest(svm->vcpu.kvm))
+		return;
+
+	gcr0 = svm->vcpu.arch.cr0;
 	hcr0 = (svm_cr0_read(svm) & ~SVM_CR0_SELECTIVE_MASK)
 	       | (gcr0 & SVM_CR0_SELECTIVE_MASK);

@@ -2209,6 +2227,17 @@ static int task_switch_interception(struct vcpu_svm *svm)

 static int cpuid_interception(struct vcpu_svm *svm)
 {
+	/*
+	 * SEV-ES guests require the vCPU arch registers to be populated via
+	 * the GHCB.
+	 */
+	if (sev_es_guest(svm->vcpu.kvm)) {
+		if (kvm_register_read(&svm->vcpu, VCPU_REGS_RAX) == 0x0d) {
+			svm->vcpu.arch.xcr0 = svm_xcr0_read(svm);
+			kvm_update_cpuid_runtime(&svm->vcpu);
+		}
+	}
+
 	return kvm_emulate_cpuid(&svm->vcpu);
 }

@@ -2527,7 +2556,28 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)

 static int rdmsr_interception(struct vcpu_svm *svm)
 {
-	return kvm_emulate_rdmsr(&svm->vcpu);
+	u32 ecx = kvm_rcx_read(&svm->vcpu);
+	u64 data;
+
+	if (kvm_get_msr(&svm->vcpu, ecx, &data)) {
+		trace_kvm_msr_read_ex(ecx);
+		if (sev_es_guest(svm->vcpu.kvm)) {
+			ghcb_set_sw_exit_info_1(svm->ghcb, 1);
+			ghcb_set_sw_exit_info_2(svm->ghcb,
+						X86_TRAP_GP |
+						SVM_EVTINJ_TYPE_EXEPT |
+						SVM_EVTINJ_VALID);
+		} else {
+			kvm_inject_gp(&svm->vcpu, 0);
+		}
+		return 1;
+	}
+
+	trace_kvm_msr_read(ecx, data);
+
+	kvm_rax_write(&svm->vcpu, data & 0xffffffff);
+	kvm_rdx_write(&svm->vcpu, data >> 32);
+	return kvm_skip_emulated_instruction(&svm->vcpu);
 }

 static int svm_set_vm_cr(struct kvm_vcpu *vcpu, u64 data)
@@ -2716,7 +2766,25 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)

 static int wrmsr_interception(struct vcpu_svm *svm)
 {
-	return kvm_emulate_wrmsr(&svm->vcpu);
+	u32 ecx = kvm_rcx_read(&svm->vcpu);
+	u64 data = kvm_read_edx_eax(&svm->vcpu);
+
+	if (kvm_set_msr(&svm->vcpu, ecx, data)) {
+		trace_kvm_msr_write_ex(ecx, data);
+		if (sev_es_guest(svm->vcpu.kvm)) {
+			ghcb_set_sw_exit_info_1(svm->ghcb, 1);
+			ghcb_set_sw_exit_info_2(svm->ghcb,
+						X86_TRAP_GP |
+						SVM_EVTINJ_TYPE_EXEPT |
+						SVM_EVTINJ_VALID);
+		} else {
+			kvm_inject_gp(&svm->vcpu, 0);
+		}
+		return 1;
+	}
+
+	trace_kvm_msr_write(ecx, data);
+	return kvm_skip_emulated_instruction(&svm->vcpu);
 }

 static int msr_interception(struct vcpu_svm *svm)
@@ -2746,7 +2814,14 @@ static int interrupt_window_interception(struct vcpu_svm *svm)
 static int pause_interception(struct vcpu_svm *svm)
 {
 	struct kvm_vcpu *vcpu = &svm->vcpu;
-	bool in_kernel = (svm_get_cpl(vcpu) == 0);
+	bool in_kernel;
+
+	/*
+	 * CPL is not made available for an SEV-ES guest, so just set in_kernel
+	 * to true.
+	 */
+	in_kernel = (sev_es_guest(svm->vcpu.kvm)) ? true
+						  : (svm_get_cpl(vcpu) == 0);

 	if (!kvm_pause_in_guest(vcpu->kvm))
 		grow_ple_window(vcpu);
@@ -2972,10 +3047,13 @@ static int handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)

 	trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM);

-	if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
-		vcpu->arch.cr0 = svm_cr0_read(svm);
-	if (npt_enabled)
-		vcpu->arch.cr3 = svm_cr3_read(svm);
+	/* SEV-ES guests must use the CR write traps to track CR registers. */
+	if (!sev_es_guest(vcpu->kvm)) {
+		if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
+			vcpu->arch.cr0 = svm_cr0_read(svm);
+		if (npt_enabled)
+			vcpu->arch.cr3 = svm_cr3_read(svm);
+	}

 	svm_complete_interrupts(svm);

@@ -3094,6 +3172,13 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);

+	/*
+	 * SEV-ES guests must always keep the CR intercepts cleared. CR
+	 * tracking is done using the CR write traps.
+	 */
+	if (sev_es_guest(vcpu->kvm))
+		return;
+
 	if (nested_svm_virtualize_tpr(vcpu))
 		return;

@@ -3162,6 +3247,13 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;

+	/*
+	 * SEV-ES guests do not expose RFLAGS. Use the VMCB interrupt mask
+	 * bit to determine the state of the IF flag.
+	 */
+	if (sev_es_guest(svm->vcpu.kvm))
+		return !(vmcb->control.int_state & SVM_GUEST_INTERRUPT_MASK);
+
 	if (!gif_set(svm))
 		return true;

@@ -3347,6 +3439,12 @@ static void svm_complete_interrupts(struct vcpu_svm *svm)
 		svm->vcpu.arch.nmi_injected = true;
 		break;
 	case SVM_EXITINTINFO_TYPE_EXEPT:
+		/*
+		 * Never re-inject a #VC exception.
+		 */
+		if (vector == X86_TRAP_VC)
+			break;
+
 		/*
 		 * In case of software exceptions, do not reinject the vector,
 		 * but re-execute the instruction instead.
Rewind RIP first diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 539ea1cd6020..a5afdccb6c17 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -3771,7 +3771,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) { int idx; - if (vcpu->preempted) + if (vcpu->preempted && !vcpu->arch.vmsa_encrypted) vcpu->arch.preempted_in_kernel = !kvm_x86_ops.get_cpl(vcpu); /* @@ -7774,7 +7774,9 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu) { struct kvm_run *kvm_run = vcpu->run; - kvm_run->if_flag = (kvm_get_rflags(vcpu) & X86_EFLAGS_IF) != 0; + kvm_run->if_flag = (vcpu->arch.vmsa_encrypted) + ? kvm_arch_interrupt_allowed(vcpu) + : (kvm_get_rflags(vcpu) & X86_EFLAGS_IF) != 0; kvm_run->flags = is_smm(vcpu) ? KVM_RUN_X86_SMM : 0; kvm_run->cr8 = kvm_get_cr8(vcpu); kvm_run->apic_base = kvm_get_apic_base(vcpu); From patchwork Mon Sep 14 20:15:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Lendacky X-Patchwork-Id: 11774773 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6F90A6CA for ; Mon, 14 Sep 2020 20:17:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4E812208DB for ; Mon, 14 Sep 2020 20:17:30 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amdcloud.onmicrosoft.com header.i=@amdcloud.onmicrosoft.com header.b="hAz8Ce0p" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726149AbgINURZ (ORCPT ); Mon, 14 Sep 2020 16:17:25 -0400 Received: from mail-bn8nam12on2061.outbound.protection.outlook.com ([40.107.237.61]:36897 "EHLO NAM12-BN8-obe.outbound.protection.outlook.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725999AbgINURJ (ORCPT ); Mon, 14 Sep 2020 16:17:09 -0400 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; 
From patchwork Mon Sep 14 20:15:21 2020
From: Tom Lendacky
Subject: [RFC PATCH 07/35] KVM: SVM: Modify DRx register intercepts for an SEV-ES guest
Date: Mon, 14 Sep 2020 15:15:21 -0500
Message-Id: <081d45d7c76c97407eefb1f32d96ba6212c639be.1600114548.git.thomas.lendacky@amd.com>

From: Tom Lendacky

An SEV-ES guest must only and always intercept DR7
reads and writes. Update set_dr_intercepts() and clr_dr_intercepts()
to account for this.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/svm.h | 89 ++++++++++++++++++++++++------------------
 1 file changed, 50 insertions(+), 39 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index ff587536f571..9953ee7f54cd 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -190,6 +190,28 @@ static inline struct kvm_svm *to_kvm_svm(struct kvm *kvm)
     return container_of(kvm, struct kvm_svm, kvm);
 }

+static inline bool sev_guest(struct kvm *kvm)
+{
+#ifdef CONFIG_KVM_AMD_SEV
+    struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+
+    return sev->active;
+#else
+    return false;
+#endif
+}
+
+static inline bool sev_es_guest(struct kvm *kvm)
+{
+#ifdef CONFIG_KVM_AMD_SEV
+    struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+
+    return sev_guest(kvm) && sev->es_active;
+#else
+    return false;
+#endif
+}
+
 static inline void vmcb_mark_all_dirty(struct vmcb *vmcb)
 {
     vmcb->control.clean = 0;
@@ -244,26 +266,35 @@ static inline bool is_cr_intercept(struct vcpu_svm *svm, int bit)
     return vmcb->control.intercept_cr & (1U << bit);
 }

+#define SVM_DR_INTERCEPTS \
+    ((1 << INTERCEPT_DR0_READ) \
+    | (1 << INTERCEPT_DR1_READ) \
+    | (1 << INTERCEPT_DR2_READ) \
+    | (1 << INTERCEPT_DR3_READ) \
+    | (1 << INTERCEPT_DR4_READ) \
+    | (1 << INTERCEPT_DR5_READ) \
+    | (1 << INTERCEPT_DR6_READ) \
+    | (1 << INTERCEPT_DR7_READ) \
+    | (1 << INTERCEPT_DR0_WRITE) \
+    | (1 << INTERCEPT_DR1_WRITE) \
+    | (1 << INTERCEPT_DR2_WRITE) \
+    | (1 << INTERCEPT_DR3_WRITE) \
+    | (1 << INTERCEPT_DR4_WRITE) \
+    | (1 << INTERCEPT_DR5_WRITE) \
+    | (1 << INTERCEPT_DR6_WRITE) \
+    | (1 << INTERCEPT_DR7_WRITE))
+
+#define SVM_SEV_ES_DR_INTERCEPTS \
+    ((1 << INTERCEPT_DR7_READ) \
+    | (1 << INTERCEPT_DR7_WRITE))
+
 static inline void set_dr_intercepts(struct vcpu_svm *svm)
 {
     struct vmcb *vmcb = get_host_vmcb(svm);

-    vmcb->control.intercept_dr = (1 << INTERCEPT_DR0_READ)
-        | (1 << INTERCEPT_DR1_READ)
-        | (1 << INTERCEPT_DR2_READ)
-        | (1 << INTERCEPT_DR3_READ)
-        | (1 << INTERCEPT_DR4_READ)
-        | (1 << INTERCEPT_DR5_READ)
-        | (1 << INTERCEPT_DR6_READ)
-        | (1 << INTERCEPT_DR7_READ)
-        | (1 << INTERCEPT_DR0_WRITE)
-        | (1 << INTERCEPT_DR1_WRITE)
-        | (1 << INTERCEPT_DR2_WRITE)
-        | (1 << INTERCEPT_DR3_WRITE)
-        | (1 << INTERCEPT_DR4_WRITE)
-        | (1 << INTERCEPT_DR5_WRITE)
-        | (1 << INTERCEPT_DR6_WRITE)
-        | (1 << INTERCEPT_DR7_WRITE);
+    vmcb->control.intercept_dr =
+        (sev_es_guest(svm->vcpu.kvm)) ? SVM_SEV_ES_DR_INTERCEPTS
+                      : SVM_DR_INTERCEPTS;

     recalc_intercepts(svm);
 }
@@ -272,7 +303,9 @@ static inline void clr_dr_intercepts(struct vcpu_svm *svm)
 {
     struct vmcb *vmcb = get_host_vmcb(svm);

-    vmcb->control.intercept_dr = 0;
+    vmcb->control.intercept_dr =
+        (sev_es_guest(svm->vcpu.kvm)) ? SVM_SEV_ES_DR_INTERCEPTS
+                      : 0;

     recalc_intercepts(svm);
 }
@@ -472,28 +505,6 @@ void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);

 extern unsigned int max_sev_asid;

-static inline bool sev_guest(struct kvm *kvm)
-{
-#ifdef CONFIG_KVM_AMD_SEV
-    struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
-
-    return sev->active;
-#else
-    return false;
-#endif
-}
-
-static inline bool sev_es_guest(struct kvm *kvm)
-{
-#ifdef CONFIG_KVM_AMD_SEV
-    struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
-
-    return sev_guest(kvm) && sev->es_active;
-#else
-    return false;
-#endif
-}
-
 static inline bool svm_sev_enabled(void)
 {
     return IS_ENABLED(CONFIG_KVM_AMD_SEV) ? max_sev_asid : 0;
From patchwork Mon Sep 14 20:15:22 2020
From: Tom Lendacky
Subject: [RFC PATCH 08/35] KVM: SVM: Prevent debugging under SEV-ES
Date: Mon, 14 Sep 2020 15:15:22 -0500
Message-Id: <58093c542b5b442b88941828595fb2548706f1bf.1600114548.git.thomas.lendacky@amd.com>

From: Tom Lendacky

Since the guest register state of an SEV-ES guest is encrypted, debugging
is not supported. Update the code to prevent guest debugging when the
guest is an SEV-ES guest. This includes adding a callable function that
is used to determine if the guest supports being debugged.
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/svm/svm.c          | 16 ++++++++++++++++
 arch/x86/kvm/vmx/vmx.c          |  7 +++++++
 arch/x86/kvm/x86.c              |  3 +++
 4 files changed, 28 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c900992701d6..3e2a3d2a8ba8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1234,6 +1234,8 @@ struct kvm_x86_ops {
     void (*reg_read_override)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
     void (*reg_write_override)(struct kvm_vcpu *vcpu, enum kvm_reg reg,
                    unsigned long val);
+
+    bool (*allow_debug)(struct kvm *kvm);
 };

 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f8a5b7164008..47fa2067609a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1729,6 +1729,9 @@ static void svm_set_dr6(struct vcpu_svm *svm, unsigned long value)
 {
     struct vmcb *vmcb = svm->vmcb;

+    if (svm->vcpu.arch.vmsa_encrypted)
+        return;
+
     if (unlikely(value != svm_dr6_read(svm))) {
         svm_dr6_write(svm, value);
         vmcb_mark_dirty(vmcb, VMCB_DR);
@@ -1739,6 +1742,9 @@ static void svm_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
 {
     struct vcpu_svm *svm = to_svm(vcpu);

+    if (vcpu->arch.vmsa_encrypted)
+        return;
+
     get_debugreg(vcpu->arch.db[0], 0);
     get_debugreg(vcpu->arch.db[1], 1);
     get_debugreg(vcpu->arch.db[2], 2);
@@ -1757,6 +1763,9 @@ static void svm_set_dr7(struct kvm_vcpu *vcpu, unsigned long value)
 {
     struct vcpu_svm *svm = to_svm(vcpu);

+    if (vcpu->arch.vmsa_encrypted)
+        return;
+
     svm_dr7_write(svm, value);
     vmcb_mark_dirty(svm->vmcb, VMCB_DR);
 }
@@ -4243,6 +4252,11 @@ static void svm_reg_write_override(struct kvm_vcpu *vcpu, enum kvm_reg reg,
     vmsa_reg[entry] = val;
 }

+static bool svm_allow_debug(struct kvm *kvm)
+{
+    return !sev_es_guest(kvm);
+}
+
 static void svm_vm_destroy(struct kvm *kvm)
 {
     avic_vm_destroy(kvm);
@@ -4384,6 +4398,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
     .reg_read_override = svm_reg_read_override,
     .reg_write_override = svm_reg_write_override,
+
+    .allow_debug = svm_allow_debug,
 };

 static struct kvm_x86_init_ops svm_init_ops __initdata = {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 46ba2e03a892..fb8591bba96f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7879,6 +7879,11 @@ static bool vmx_check_apicv_inhibit_reasons(ulong bit)
     return supported & BIT(bit);
 }

+static bool vmx_allow_debug(struct kvm *kvm)
+{
+    return true;
+}
+
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
     .hardware_unsetup = hardware_unsetup,
@@ -8005,6 +8010,8 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
     .need_emulation_on_page_fault = vmx_need_emulation_on_page_fault,
     .apic_init_signal_blocked = vmx_apic_init_signal_blocked,
     .migrate_timers = vmx_migrate_timers,
+
+    .allow_debug = vmx_allow_debug,
 };

 static __init int hardware_setup(void)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a5afdccb6c17..9970c0b7854f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9279,6 +9279,9 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
     unsigned long rflags;
     int i, r;

+    if (!kvm_x86_ops.allow_debug(vcpu->kvm))
+        return -EINVAL;
+
     vcpu_load(vcpu);

     if (dbg->control & (KVM_GUESTDBG_INJECT_DB | KVM_GUESTDBG_INJECT_BP)) {
From patchwork Mon Sep 14 20:15:23 2020
From: Tom Lendacky
Subject: [RFC PATCH 09/35] KVM: SVM: Do not emulate MMIO under SEV-ES
Date: Mon, 14 Sep 2020 15:15:23 -0500
20:17:11.0949 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d X-MS-Exchange-CrossTenant-MailboxType: HOSTED X-MS-Exchange-CrossTenant-UserPrincipalName: qBbYJR6pmB75RPy/3wE6WLWZeNBjp0kpN4T2MsuUWbnb1USP9Rnlagv6mLZ+qD/+0jrMQ7MEAJTsfpzZ1E5soA== X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR12MB1163 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tom Lendacky When a guest is running as an SEV-ES guest, it is not possible to emulate MMIO. Add support to prevent trying to perform MMIO emulation. Signed-off-by: Tom Lendacky --- arch/x86/kvm/mmu/mmu.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index a5d0207e7189..2e1b8b876286 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5485,6 +5485,13 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code, if (!mmio_info_in_cache(vcpu, cr2_or_gpa, direct) && !is_guest_mode(vcpu)) emulation_type |= EMULTYPE_ALLOW_RETRY_PF; emulate: + /* + * When the guest is an SEV-ES guest, emulation is not possible. Allow + * the guest to handle the MMIO emulation. + */ + if (vcpu->arch.vmsa_encrypted) + return 1; + /* * On AMD platforms, under certain conditions insn_len may be zero on #NPF. 
* This can happen if a guest gets a page-fault on data access but the HW From patchwork Mon Sep 14 20:15:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Lendacky X-Patchwork-Id: 11774865 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 86B676CA for ; Mon, 14 Sep 2020 20:34:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 64C9C217BA for ; Mon, 14 Sep 2020 20:34:05 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amdcloud.onmicrosoft.com header.i=@amdcloud.onmicrosoft.com header.b="36riDGn0" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726098AbgINUd7 (ORCPT ); Mon, 14 Sep 2020 16:33:59 -0400 Received: from mail-bn8nam12on2061.outbound.protection.outlook.com ([40.107.237.61]:36897 "EHLO NAM12-BN8-obe.outbound.protection.outlook.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725920AbgINURz (ORCPT ); Mon, 14 Sep 2020 16:17:55 -0400 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=hdwUGqaD5SpylCrZ7C46Wcwn5EnZ/OJYk7SteW0kcu+T5SH+Pe3OGaPJfkT6X4ahyRYL3H6Ind6eqceSFHycUx9B30MjZMd7X9ABhnYpDEYdgoHw7LnQqnIJoUv+cUcITnqb5Bm2/sOtF3+upuvF2srgFbDu2St0oivFR4wJKBbSsv6fZaxpEFatcg6lve7WAmCgnK1uIBoGltIiw3uJqhAdpLuaoMuP+UZxxPjXzdMXm4CN2kborHh5dvo07BPTmH6J/E4Dx6G2DDWc956kzPSADFLwCYzuOaiYDq2y10qKhRQWvpvQ/oA1WD+qYvmHFCtl3Wtx8briiFD9O6qWKQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=kZ8GNZcP0Md0CTxo2DMH36cP/64mZuq4WsY9nNNz9vI=; 
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar,
    Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 10/35] KVM: SVM: Cannot re-initialize the VMCB after shutdown with SEV-ES
Date: Mon, 14 Sep 2020 15:15:24 -0500
Message-Id: <79a8f9e03580f6cd45ffd02492fdf236fb14a88f.1600114548.git.thomas.lendacky@amd.com>
X-Mailer: git-send-email 2.28.0

When a SHUTDOWN VMEXIT is encountered, normally the VMCB is re-initialized
so that the guest can be re-launched. But when a guest is running as an
SEV-ES guest, the VMSA cannot be re-initialized because it has been
encrypted. For now, just return -EINVAL to prevent a possible attempt at
a guest reset.
Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/svm.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 47fa2067609a..f9daa40b3cfc 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1953,6 +1953,13 @@ static int shutdown_interception(struct vcpu_svm *svm)
 {
 	struct kvm_run *kvm_run = svm->vcpu.run;
 
+	/*
+	 * The VM save area has already been encrypted so it
+	 * cannot be reinitialized - just terminate.
+	 */
+	if (sev_es_guest(svm->vcpu.kvm))
+		return -EINVAL;
+
 	/*
 	 * VMCB is undefined after a SHUTDOWN intercept
 	 * so reinitialize it.

From patchwork Mon Sep 14 20:15:25 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774863

From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar,
    Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 11/35] KVM: SVM: Prepare for SEV-ES exit handling in the sev.c file
Date: Mon, 14 Sep 2020 15:15:25 -0500
X-Mailer: git-send-email 2.28.0

This is a pre-patch to consolidate some exit handling code into callable
functions.
Follow-on patches for SEV-ES exit handling will then be able to use them
from the sev.c file.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/svm.c | 64 +++++++++++++++++++++++++-----------------
 1 file changed, 38 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f9daa40b3cfc..6a4cc535ba77 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3047,6 +3047,43 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
 	       "excp_to:", save->last_excp_to);
 }
 
+static bool svm_is_supported_exit(struct kvm_vcpu *vcpu, u64 exit_code)
+{
+	if (exit_code < ARRAY_SIZE(svm_exit_handlers) &&
+	    svm_exit_handlers[exit_code])
+		return true;
+
+	vcpu_unimpl(vcpu, "svm: unexpected exit reason 0x%llx\n", exit_code);
+	dump_vmcb(vcpu);
+	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+	vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON;
+	vcpu->run->internal.ndata = 2;
+	vcpu->run->internal.data[0] = exit_code;
+	vcpu->run->internal.data[1] = vcpu->arch.last_vmentry_cpu;
+
+	return false;
+}
+
+static int svm_invoke_exit_handler(struct vcpu_svm *svm, u64 exit_code)
+{
+	if (!svm_is_supported_exit(&svm->vcpu, exit_code))
+		return 0;
+
+#ifdef CONFIG_RETPOLINE
+	if (exit_code == SVM_EXIT_MSR)
+		return msr_interception(svm);
+	else if (exit_code == SVM_EXIT_VINTR)
+		return interrupt_window_interception(svm);
+	else if (exit_code == SVM_EXIT_INTR)
+		return intr_interception(svm);
+	else if (exit_code == SVM_EXIT_HLT)
+		return halt_interception(svm);
+	else if (exit_code == SVM_EXIT_NPF)
+		return npf_interception(svm);
+#endif
+	return svm_exit_handlers[exit_code](svm);
+}
+
 static void svm_get_exit_info(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2)
 {
 	struct vmcb_control_area *control = &to_svm(vcpu)->vmcb->control;
@@ -3113,32 +3150,7 @@ static int handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	if (exit_fastpath != EXIT_FASTPATH_NONE)
 		return 1;
 
-	if (exit_code >= ARRAY_SIZE(svm_exit_handlers)
-	    || !svm_exit_handlers[exit_code]) {
-		vcpu_unimpl(vcpu, "svm: unexpected exit reason 0x%x\n", exit_code);
-		dump_vmcb(vcpu);
-		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
-		vcpu->run->internal.suberror =
-			KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON;
-		vcpu->run->internal.ndata = 2;
-		vcpu->run->internal.data[0] = exit_code;
-		vcpu->run->internal.data[1] = vcpu->arch.last_vmentry_cpu;
-		return 0;
-	}
-
-#ifdef CONFIG_RETPOLINE
-	if (exit_code == SVM_EXIT_MSR)
-		return msr_interception(svm);
-	else if (exit_code == SVM_EXIT_VINTR)
-		return interrupt_window_interception(svm);
-	else if (exit_code == SVM_EXIT_INTR)
-		return intr_interception(svm);
-	else if (exit_code == SVM_EXIT_HLT)
-		return halt_interception(svm);
-	else if (exit_code == SVM_EXIT_NPF)
-		return npf_interception(svm);
-#endif
-	return svm_exit_handlers[exit_code](svm);
+	return svm_invoke_exit_handler(svm, exit_code);
 }
 
 static void reload_tss(struct kvm_vcpu *vcpu)

From patchwork Mon Sep 14 20:15:26 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774781
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar,
    Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 12/35] KVM: SVM: Add initial support for a VMGEXIT VMEXIT
Date: Mon, 14 Sep 2020 15:15:26 -0500
X-Mailer: git-send-email 2.28.0

SEV-ES adds a new VMEXIT reason code, VMGEXIT. Initial support for a
VMGEXIT includes reading the guest GHCB and performing the requested
action. Since many of the VMGEXIT exit reasons correspond to existing
VMEXIT reasons, the information from the GHCB is copied into the VMCB
control exit code areas and then the standard exit handlers are invoked,
similar to standard VMEXIT processing. Before restarting the vCPU, the
now updated SVM GHCB is copied back to the guest GHCB.

Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/svm.h      |  2 +-
 arch/x86/include/uapi/asm/svm.h |  7 ++++
 arch/x86/kvm/svm/sev.c          | 65 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c          |  6 ++-
 arch/x86/kvm/svm/svm.h          |  7 ++++
 5 files changed, 85 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index ed03d23f56fe..07b4ac1e7179 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -82,7 +82,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 	u32 exit_int_info_err;
 	u64 nested_ctl;
 	u64 avic_vapic_bar;
-	u8 reserved_4[8];
+	u64 ghcb_gpa;
 	u32 event_inj;
 	u32 event_inj_err;
 	u64 nested_cr3;
diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index 0f837339db66..0bc3942ffdd3 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -80,6 +80,7 @@
 #define SVM_EXIT_NPF           0x400
 #define SVM_EXIT_AVIC_INCOMPLETE_IPI		0x401
 #define SVM_EXIT_AVIC_UNACCELERATED_ACCESS	0x402
+#define SVM_EXIT_VMGEXIT		0x403
 
 /* SEV-ES software-defined VMGEXIT events */
 #define SVM_VMGEXIT_MMIO_READ			0x80000001
@@ -185,6 +186,12 @@
 	{ SVM_EXIT_NPF,         "npf" }, \
 	{ SVM_EXIT_AVIC_INCOMPLETE_IPI,		"avic_incomplete_ipi" }, \
 	{ SVM_EXIT_AVIC_UNACCELERATED_ACCESS,	"avic_unaccelerated_access" }, \
+	{ SVM_EXIT_VMGEXIT,		"vmgexit" }, \
+	{ SVM_VMGEXIT_MMIO_READ,	"vmgexit_mmio_read" }, \
+	{ SVM_VMGEXIT_MMIO_WRITE,	"vmgexit_mmio_write" }, \
+	{ SVM_VMGEXIT_NMI_COMPLETE,	"vmgexit_nmi_complete" }, \
+	{ SVM_VMGEXIT_AP_HLT_LOOP,	"vmgexit_ap_hlt_loop" }, \
+	{ SVM_VMGEXIT_AP_JUMP_TABLE,	"vmgexit_ap_jump_table" }, \
 	{ SVM_EXIT_ERR,         "invalid_guest_state" }
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 48379e21ed43..e085d8b83a32 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1180,11 +1180,23 @@ void sev_hardware_teardown(void)
 	sev_flush_asids();
 }
 
+static void pre_sev_es_run(struct vcpu_svm *svm)
+{
+	if (!svm->ghcb)
+		return;
+
+	kvm_vcpu_unmap(&svm->vcpu, &svm->ghcb_map, true);
+	svm->ghcb = NULL;
+}
+
 void pre_sev_run(struct vcpu_svm *svm, int cpu)
 {
 	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
 	int asid = sev_get_asid(svm->vcpu.kvm);
 
+	/* Perform any SEV-ES pre-run actions */
+	pre_sev_es_run(svm);
+
 	/* Assign the asid allocated with this SEV guest */
 	svm->vmcb->control.asid = asid;
 
@@ -1202,3 +1214,56 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu)
 	svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ASID;
 	vmcb_mark_dirty(svm->vmcb, VMCB_ASID);
 }
+
+static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
+{
+	return -EINVAL;
+}
+
+int sev_handle_vmgexit(struct vcpu_svm *svm)
+{
+	struct vmcb_control_area *control = &svm->vmcb->control;
+	struct ghcb *ghcb;
+	u64 ghcb_gpa;
+	int ret;
+
+	/* Validate the GHCB */
+	ghcb_gpa = control->ghcb_gpa;
+	if (ghcb_gpa & GHCB_MSR_INFO_MASK)
+		return sev_handle_vmgexit_msr_protocol(svm);
+
+	if (!ghcb_gpa) {
+		pr_err("vmgexit: GHCB gpa is not set\n");
+		return -EINVAL;
+	}
+
+	if (kvm_vcpu_map(&svm->vcpu, ghcb_gpa >> PAGE_SHIFT, &svm->ghcb_map)) {
+		/* Unable to map GHCB from guest */
+		pr_err("vmgexit: error mapping GHCB from guest\n");
+		return -EINVAL;
+	}
+
+	svm->ghcb = svm->ghcb_map.hva;
+	ghcb = svm->ghcb_map.hva;
+
+	control->exit_code = lower_32_bits(ghcb_get_sw_exit_code(ghcb));
+	control->exit_code_hi = upper_32_bits(ghcb_get_sw_exit_code(ghcb));
+	control->exit_info_1 = ghcb_get_sw_exit_info_1(ghcb);
+	control->exit_info_2 = ghcb_get_sw_exit_info_2(ghcb);
+
+	ghcb_set_sw_exit_info_1(ghcb, 0);
+	ghcb_set_sw_exit_info_2(ghcb, 0);
+
+	ret = -EINVAL;
+	switch (ghcb_get_sw_exit_code(ghcb)) {
+	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
+		pr_err("vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
+		       control->exit_info_1,
+		       control->exit_info_2);
+		break;
+	default:
+		ret = svm_invoke_exit_handler(svm, ghcb_get_sw_exit_code(ghcb));
+	}
+
+	return ret;
+}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 6a4cc535ba77..89ee9d533e9a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2929,6 +2929,7 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
 	[SVM_EXIT_RSM]				= rsm_interception,
 	[SVM_EXIT_AVIC_INCOMPLETE_IPI]		= avic_incomplete_ipi_interception,
 	[SVM_EXIT_AVIC_UNACCELERATED_ACCESS]	= avic_unaccelerated_access_interception,
+	[SVM_EXIT_VMGEXIT]			= sev_handle_vmgexit,
 };
 
 static void dump_vmcb(struct kvm_vcpu *vcpu)
@@ -2968,6 +2969,7 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
 	pr_err("%-20s%lld\n", "nested_ctl:", control->nested_ctl);
 	pr_err("%-20s%016llx\n", "nested_cr3:", control->nested_cr3);
 	pr_err("%-20s%016llx\n", "avic_vapic_bar:", control->avic_vapic_bar);
+	pr_err("%-20s%016llx\n", "ghcb:", control->ghcb_gpa);
 	pr_err("%-20s%08x\n", "event_inj:", control->event_inj);
 	pr_err("%-20s%08x\n", "event_inj_err:", control->event_inj_err);
 	pr_err("%-20s%lld\n", "virt_ext:", control->virt_ext);
@@ -3064,7 +3066,7 @@ static bool svm_is_supported_exit(struct kvm_vcpu *vcpu, u64 exit_code)
 	return false;
 }
 
-static int svm_invoke_exit_handler(struct vcpu_svm *svm, u64 exit_code)
+int svm_invoke_exit_handler(struct vcpu_svm *svm, u64 exit_code)
 {
 	if (!svm_is_supported_exit(&svm->vcpu, exit_code))
 		return 0;
@@ -3080,6 +3082,8 @@ static int svm_invoke_exit_handler(struct vcpu_svm *svm, u64 exit_code)
 		return halt_interception(svm);
 	else if (exit_code == SVM_EXIT_NPF)
 		return npf_interception(svm);
+	else if (exit_code == SVM_EXIT_VMGEXIT)
+		return sev_handle_vmgexit(svm);
 #endif
 	return svm_exit_handlers[exit_code](svm);
 }
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 9953ee7f54cd..1690e52d5265 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -17,6 +17,7 @@
 #include
 #include
+#include
 #include
@@ -163,6 +164,7 @@ struct vcpu_svm {
 	/* SEV-ES support */
 	struct vmcb_save_area *vmsa;
 	struct ghcb *ghcb;
+	struct kvm_host_map ghcb_map;
 };
 
 struct svm_cpu_data {
@@ -399,6 +401,7 @@ bool svm_smi_blocked(struct kvm_vcpu *vcpu);
 bool svm_nmi_blocked(struct kvm_vcpu *vcpu);
 bool svm_interrupt_blocked(struct kvm_vcpu *vcpu);
 void svm_set_gif(struct vcpu_svm *svm, bool value);
+int svm_invoke_exit_handler(struct vcpu_svm *svm, u64 exit_code);
 
 /* nested.c */
 
@@ -503,6 +506,9 @@ void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);
 
 /* sev.c */
 
+#define GHCB_MSR_INFO_POS	0
+#define GHCB_MSR_INFO_MASK	(BIT_ULL(12) - 1)
+
 extern unsigned int max_sev_asid;
 
 static inline bool svm_sev_enabled(void)
@@ -519,6 +525,7 @@ int svm_unregister_enc_region(struct kvm *kvm,
 void pre_sev_run(struct vcpu_svm *svm, int cpu);
 void __init sev_hardware_setup(void);
 void sev_hardware_teardown(void);
+int sev_handle_vmgexit(struct vcpu_svm *svm);
 
 /* VMSA Accessor functions */

From patchwork Mon Sep 14 20:15:27 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774861
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 13/35] KVM: SVM: Create trace events for VMGEXIT processing
Date: Mon, 14 Sep 2020 15:15:27 -0500
Message-Id: <2d3976f1daae326c2cb7fd15f1c7ba06d7f4c525.1600114548.git.thomas.lendacky@amd.com>
From: Tom Lendacky

Add trace events for entry to and exit from VMGEXIT processing. The vCPU
id and the exit reason will be common for the trace events. The exit info
fields and valid bitmap fields will represent the input and output values
for the entry and exit events, respectively.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/sev.c |  6 +++++
 arch/x86/kvm/trace.h   | 55 ++++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c     |  2 ++
 3 files changed, 63 insertions(+)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index e085d8b83a32..f0fd89788de7 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -14,9 +14,11 @@
 #include
 #include
 #include
+#include
 
 #include "x86.h"
 #include "svm.h"
+#include "trace.h"
 
 static int sev_flush_asids(void);
 static DECLARE_RWSEM(sev_deactivate_lock);
@@ -1185,6 +1187,8 @@ static void pre_sev_es_run(struct vcpu_svm *svm)
 	if (!svm->ghcb)
 		return;
 
+	trace_kvm_vmgexit_exit(svm->vcpu.vcpu_id, svm->ghcb);
+
 	kvm_vcpu_unmap(&svm->vcpu, &svm->ghcb_map, true);
 	svm->ghcb = NULL;
 }
@@ -1246,6 +1250,8 @@ int sev_handle_vmgexit(struct vcpu_svm *svm)
 	svm->ghcb = svm->ghcb_map.hva;
 	ghcb = svm->ghcb_map.hva;
 
+	trace_kvm_vmgexit_enter(svm->vcpu.vcpu_id, ghcb);
+
 	control->exit_code = lower_32_bits(ghcb_get_sw_exit_code(ghcb));
 	control->exit_code_hi = upper_32_bits(ghcb_get_sw_exit_code(ghcb));
 	control->exit_info_1 = ghcb_get_sw_exit_info_1(ghcb);

diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index b66432b015d2..06e5c15d0508 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -1592,6 +1592,61 @@ TRACE_EVENT(kvm_hv_syndbg_get_msr,
 		  __entry->vcpu_id, __entry->vp_index, __entry->msr,
 		  __entry->data)
 );
+
+/*
+ * Tracepoint for the start of VMGEXIT processing
+ */
+TRACE_EVENT(kvm_vmgexit_enter,
+	TP_PROTO(unsigned int vcpu_id, struct ghcb *ghcb),
+	TP_ARGS(vcpu_id, ghcb),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, vcpu_id)
+		__field(u64, exit_reason)
+		__field(u64, info1)
+		__field(u64, info2)
+		__field(u8 *, bitmap)
+	),
+
+	TP_fast_assign(
+		__entry->vcpu_id     = vcpu_id;
+		__entry->exit_reason = ghcb->save.sw_exit_code;
+		__entry->info1       = ghcb->save.sw_exit_info_1;
+		__entry->info2       = ghcb->save.sw_exit_info_2;
+		__entry->bitmap      = ghcb->save.valid_bitmap;
+	),
+
+	TP_printk("vcpu %u, exit_reason %llx, exit_info1 %llx, exit_info2 %llx, valid_bitmap",
+		  __entry->vcpu_id, __entry->exit_reason,
+		  __entry->info1, __entry->info2)
+);
+
+/*
+ * Tracepoint for the end of VMGEXIT processing
+ */
+TRACE_EVENT(kvm_vmgexit_exit,
+	TP_PROTO(unsigned int vcpu_id, struct ghcb *ghcb),
+	TP_ARGS(vcpu_id, ghcb),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, vcpu_id)
+		__field(u64, exit_reason)
+		__field(u64, info1)
+		__field(u64, info2)
+	),
+
+	TP_fast_assign(
+		__entry->vcpu_id     = vcpu_id;
+		__entry->exit_reason = ghcb->save.sw_exit_code;
+		__entry->info1       = ghcb->save.sw_exit_info_1;
+		__entry->info2       = ghcb->save.sw_exit_info_2;
+	),
+
+	TP_printk("vcpu %u, exit_reason %llx, exit_info1 %llx, exit_info2 %llx",
+		  __entry->vcpu_id, __entry->exit_reason,
+		  __entry->info1, __entry->info2)
+);
+
 #endif /* _TRACE_KVM_H */
 
 #undef TRACE_INCLUDE_PATH

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9970c0b7854f..ef85340e05ea 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10790,3 +10790,5 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_unaccelerated_access);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_incomplete_ipi);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_ga_log);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_apicv_update_request);
+EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_enter);
+EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_exit);

From patchwork Mon Sep 14 20:15:28 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774857
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 14/35] KVM: SVM: Add support for SEV-ES GHCB MSR protocol function 0x002
Date: Mon, 14 Sep 2020 15:15:28 -0500
Message-Id: <91ddbabb64a338c59bf5cbe554d537d0b72464d9.1600114548.git.thomas.lendacky@amd.com>
From: Tom Lendacky

The GHCB defines a GHCB MSR protocol using the lower 12-bits of the GHCB
MSR (in the hypervisor this corresponds to the GHCB GPA field in the
VMCB).

Function 0x002 is a request to set the GHCB MSR value to the SEV INFO as
per the specification via the VMCB GHCB GPA field.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/sev.c | 26 +++++++++++++++++++++++++-
 arch/x86/kvm/svm/svm.h | 17 +++++++++++++++++
 2 files changed, 42 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index f0fd89788de7..07082c752c76 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -20,6 +20,7 @@
 #include "svm.h"
 #include "trace.h"
 
+static u8 sev_enc_bit;
 static int sev_flush_asids(void);
 static DECLARE_RWSEM(sev_deactivate_lock);
 static DEFINE_MUTEX(sev_bitmap_lock);
@@ -1130,6 +1131,9 @@ void __init sev_hardware_setup(void)
 	/* Retrieve SEV CPUID information */
 	cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);
 
+	/* Set encryption bit location for SEV-ES guests */
+	sev_enc_bit = ebx & 0x3f;
+
 	/* Maximum number of encrypted guests supported simultaneously */
 	max_sev_asid = ecx;
@@ -1219,9 +1223,29 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu)
 	vmcb_mark_dirty(svm->vmcb, VMCB_ASID);
 }
 
+static void set_ghcb_msr(struct vcpu_svm *svm, u64 value)
+{
+	svm->vmcb->control.ghcb_gpa = value;
+}
+
 static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 {
-	return -EINVAL;
+	struct vmcb_control_area *control = &svm->vmcb->control;
+	u64 ghcb_info;
+
+	ghcb_info = control->ghcb_gpa & GHCB_MSR_INFO_MASK;
+
+	switch (ghcb_info) {
+	case GHCB_MSR_SEV_INFO_REQ:
+		set_ghcb_msr(svm, GHCB_MSR_SEV_INFO(GHCB_VERSION_MAX,
+						    GHCB_VERSION_MIN,
+						    sev_enc_bit));
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 1;
 }
 
 int sev_handle_vmgexit(struct vcpu_svm *svm)

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 1690e52d5265..b1a5d90a860c 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -506,9 +506,26 @@ void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);
 
 /* sev.c */
 
+#define GHCB_VERSION_MAX	1ULL
+#define GHCB_VERSION_MIN	1ULL
+
 #define GHCB_MSR_INFO_POS	0
 #define GHCB_MSR_INFO_MASK	(BIT_ULL(12) - 1)
 
+#define GHCB_MSR_SEV_INFO_RESP	0x001
+#define GHCB_MSR_SEV_INFO_REQ	0x002
+#define GHCB_MSR_VER_MAX_POS	48
+#define GHCB_MSR_VER_MAX_MASK	0xffff
+#define GHCB_MSR_VER_MIN_POS	32
+#define GHCB_MSR_VER_MIN_MASK	0xffff
+#define GHCB_MSR_CBIT_POS	24
+#define GHCB_MSR_CBIT_MASK	0xff
+#define GHCB_MSR_SEV_INFO(_max, _min, _cbit)				\
+	((((_max) & GHCB_MSR_VER_MAX_MASK) << GHCB_MSR_VER_MAX_POS) |	\
+	 (((_min) & GHCB_MSR_VER_MIN_MASK) << GHCB_MSR_VER_MIN_POS) |	\
+	 (((_cbit) & GHCB_MSR_CBIT_MASK) << GHCB_MSR_CBIT_POS) |	\
+	 GHCB_MSR_SEV_INFO_RESP)
+
 extern unsigned int max_sev_asid;
 
 static inline bool svm_sev_enabled(void)

From patchwork Mon Sep 14 20:15:29 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774859
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 15/35] KVM: SVM: Add support for SEV-ES GHCB MSR protocol function 0x004
Date: Mon, 14 Sep 2020 15:15:29 -0500
From: Tom Lendacky

The GHCB defines a GHCB MSR protocol using the lower 12-bits of the GHCB
MSR (in the hypervisor this corresponds to the GHCB GPA field in the
VMCB).

Function 0x004 is a request for CPUID information. Only a single CPUID
result register can be sent per invocation, so the protocol defines the
register that is requested. The GHCB MSR value is set to the CPUID
register value as per the specification via the VMCB GHCB GPA field.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/sev.c | 55 ++++++++++++++++++++++++++++++++++++++++--
 arch/x86/kvm/svm/svm.h |  9 +++++++
 2 files changed, 62 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 07082c752c76..5cf823e1ce01 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1223,6 +1223,18 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu)
 	vmcb_mark_dirty(svm->vmcb, VMCB_ASID);
 }
 
+static void set_ghcb_msr_bits(struct vcpu_svm *svm, u64 value, u64 mask,
+			      unsigned int pos)
+{
+	svm->vmcb->control.ghcb_gpa &= ~(mask << pos);
+	svm->vmcb->control.ghcb_gpa |= (value & mask) << pos;
+}
+
+static u64 get_ghcb_msr_bits(struct vcpu_svm *svm, u64 mask, unsigned int pos)
+{
+	return (svm->vmcb->control.ghcb_gpa >> pos) & mask;
+}
+
 static void set_ghcb_msr(struct vcpu_svm *svm, u64 value)
 {
 	svm->vmcb->control.ghcb_gpa = value;
@@ -1232,6 +1244,7 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
 	u64 ghcb_info;
+	int ret = 1;
 
 	ghcb_info = control->ghcb_gpa & GHCB_MSR_INFO_MASK;
 
@@ -1241,11 +1254,49 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 						    GHCB_VERSION_MIN,
 						    sev_enc_bit));
 		break;
+	case GHCB_MSR_CPUID_REQ: {
+		u64 cpuid_fn, cpuid_reg, cpuid_value;
+
+		cpuid_fn = get_ghcb_msr_bits(svm,
+					     GHCB_MSR_CPUID_FUNC_MASK,
+					     GHCB_MSR_CPUID_FUNC_POS);
+
+		/* Initialize the registers needed by the CPUID intercept */
+		svm_rax_write(svm, cpuid_fn);
+		svm_rcx_write(svm, 0);
+
+		ret = svm_invoke_exit_handler(svm, SVM_EXIT_CPUID);
+		if (!ret) {
+			ret = -EINVAL;
+			break;
+		}
+
+		cpuid_reg = get_ghcb_msr_bits(svm,
+					      GHCB_MSR_CPUID_REG_MASK,
+					      GHCB_MSR_CPUID_REG_POS);
+		if (cpuid_reg == 0)
+			cpuid_value = svm_rax_read(svm);
+		else if (cpuid_reg == 1)
+			cpuid_value = svm_rbx_read(svm);
+		else if (cpuid_reg == 2)
+			cpuid_value = svm_rcx_read(svm);
+		else
+			cpuid_value = svm_rdx_read(svm);
+
+		set_ghcb_msr_bits(svm, cpuid_value,
+				  GHCB_MSR_CPUID_VALUE_MASK,
+				  GHCB_MSR_CPUID_VALUE_POS);
+
+		set_ghcb_msr_bits(svm, GHCB_MSR_CPUID_RESP,
+				  GHCB_MSR_INFO_MASK,
+				  GHCB_MSR_INFO_POS);
+		break;
+	}
 	default:
-		return -EINVAL;
+		ret = -EINVAL;
 	}
 
-	return 1;
+	return ret;
 }
 
 int sev_handle_vmgexit(struct vcpu_svm *svm)

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index b1a5d90a860c..0a84fae34629 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -526,6 +526,15 @@ void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);
 	 (((_cbit) & GHCB_MSR_CBIT_MASK) << GHCB_MSR_CBIT_POS) |	\
 	 GHCB_MSR_SEV_INFO_RESP)
 
+#define GHCB_MSR_CPUID_REQ		0x004
+#define GHCB_MSR_CPUID_RESP		0x005
+#define GHCB_MSR_CPUID_FUNC_POS		32
+#define GHCB_MSR_CPUID_FUNC_MASK	0xffffffff
+#define GHCB_MSR_CPUID_VALUE_POS	32
+#define GHCB_MSR_CPUID_VALUE_MASK	0xffffffff
+#define GHCB_MSR_CPUID_REG_POS		30
+#define GHCB_MSR_CPUID_REG_MASK		0x3
+
 extern unsigned int max_sev_asid;
 
 static inline bool svm_sev_enabled(void)

From patchwork Mon Sep 14 20:15:30 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774785
From patchwork Mon Sep 14 20:15:30 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774785
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 16/35] KVM: SVM: Add support for SEV-ES GHCB MSR protocol function 0x100
Date: Mon, 14 Sep 2020 15:15:30 -0500
From: Tom Lendacky

The GHCB defines a GHCB MSR protocol using the lower 12 bits of the GHCB MSR (in the hypervisor this corresponds to the GHCB GPA field in the VMCB).

Function 0x100 is a request for termination of the guest. The guest has encountered some situation for which it has requested to be terminated. The GHCB MSR value contains the reason for the request.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/sev.c | 13 +++++++++++++
 arch/x86/kvm/svm/svm.h |  6 ++++++
 2 files changed, 19 insertions(+)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 5cf823e1ce01..8300f3846580 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1292,6 +1292,19 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 				  GHCB_MSR_INFO_POS);
 		break;
 	}
+	case GHCB_MSR_TERM_REQ: {
+		u64 reason_set, reason_code;
+
+		reason_set = get_ghcb_msr_bits(svm,
+					       GHCB_MSR_TERM_REASON_SET_MASK,
+					       GHCB_MSR_TERM_REASON_SET_POS);
+		reason_code = get_ghcb_msr_bits(svm,
+						GHCB_MSR_TERM_REASON_MASK,
+						GHCB_MSR_TERM_REASON_POS);
+		pr_info("SEV-ES guest requested termination: %#llx:%#llx\n",
+			reason_set, reason_code);
+		fallthrough;
+	}
 	default:
 		ret = -EINVAL;
 	}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 0a84fae34629..3574f52f8a1c 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -535,6 +535,12 @@ void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);
 #define GHCB_MSR_CPUID_REG_POS		30
 #define GHCB_MSR_CPUID_REG_MASK		0x3
 
+#define GHCB_MSR_TERM_REQ		0x100
+#define GHCB_MSR_TERM_REASON_SET_POS	12
+#define GHCB_MSR_TERM_REASON_SET_MASK	0xf
+#define GHCB_MSR_TERM_REASON_POS	16
+#define GHCB_MSR_TERM_REASON_MASK	0xff
+
 extern unsigned int max_sev_asid;
 
 static inline bool svm_sev_enabled(void)
From patchwork Mon Sep 14 20:15:31 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774853
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 17/35] KVM: SVM: Create trace events for VMGEXIT MSR protocol processing
Date: Mon, 14 Sep 2020 15:15:31 -0500

From: Tom Lendacky

Add trace events for entry to and exit from VMGEXIT MSR protocol processing. The vCPU will be common for the trace events. The MSR protocol processing is guided by the GHCB GPA in the VMCB, so the GHCB GPA will represent the input and output values for the entry and exit events, respectively. Additionally, the exit event will contain the return code for the event.
Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/sev.c |  6 ++++++
 arch/x86/kvm/trace.h   | 44 ++++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c     |  2 ++
 3 files changed, 52 insertions(+)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 8300f3846580..92a4df26057a 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1248,6 +1248,9 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 
 	ghcb_info = control->ghcb_gpa & GHCB_MSR_INFO_MASK;
 
+	trace_kvm_vmgexit_msr_protocol_enter(svm->vcpu.vcpu_id,
+					     control->ghcb_gpa);
+
 	switch (ghcb_info) {
 	case GHCB_MSR_SEV_INFO_REQ:
 		set_ghcb_msr(svm, GHCB_MSR_SEV_INFO(GHCB_VERSION_MAX,
@@ -1309,6 +1312,9 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 		ret = -EINVAL;
 	}
 
+	trace_kvm_vmgexit_msr_protocol_exit(svm->vcpu.vcpu_id,
+					    control->ghcb_gpa, ret);
+
 	return ret;
 }
 
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index 06e5c15d0508..117dc4a89c0a 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -1647,6 +1647,50 @@ TRACE_EVENT(kvm_vmgexit_exit,
 		  __entry->info1, __entry->info2)
 );
 
+/*
+ * Tracepoint for the start of VMGEXIT MSR protocol processing
+ */
+TRACE_EVENT(kvm_vmgexit_msr_protocol_enter,
+	TP_PROTO(unsigned int vcpu_id, u64 ghcb_gpa),
+	TP_ARGS(vcpu_id, ghcb_gpa),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, vcpu_id)
+		__field(u64, ghcb_gpa)
+	),
+
+	TP_fast_assign(
+		__entry->vcpu_id = vcpu_id;
+		__entry->ghcb_gpa = ghcb_gpa;
+	),
+
+	TP_printk("vcpu %u, ghcb_gpa %016llx",
+		  __entry->vcpu_id, __entry->ghcb_gpa)
+);
+
+/*
+ * Tracepoint for the end of VMGEXIT MSR protocol processing
+ */
+TRACE_EVENT(kvm_vmgexit_msr_protocol_exit,
+	TP_PROTO(unsigned int vcpu_id, u64 ghcb_gpa, int result),
+	TP_ARGS(vcpu_id, ghcb_gpa, result),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, vcpu_id)
+		__field(u64, ghcb_gpa)
+		__field(int, result)
+	),
+
+	TP_fast_assign(
+		__entry->vcpu_id = vcpu_id;
+		__entry->ghcb_gpa = ghcb_gpa;
+		__entry->result = result;
+	),
+
+	TP_printk("vcpu %u, ghcb_gpa %016llx, result %d",
+		  __entry->vcpu_id, __entry->ghcb_gpa, __entry->result)
+);
+
 #endif /* _TRACE_KVM_H */
 
 #undef TRACE_INCLUDE_PATH
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ef85340e05ea..2a2a394126a2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10792,3 +10792,5 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_ga_log);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_apicv_update_request);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_enter);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_exit);
+EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_msr_protocol_enter);
+EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_msr_protocol_exit);
From patchwork Mon Sep 14 20:15:32 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774791
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 18/35] KVM: SVM: Support MMIO for an SEV-ES guest
Date: Mon, 14 Sep 2020 15:15:32 -0500

From: Tom Lendacky

For an SEV-ES guest, MMIO is performed to a shared (un-encrypted) page so that both the hypervisor and guest can read or write to it and each see the contents.

The GHCB specification provides software-defined VMGEXIT exit codes to indicate a request for an MMIO read or an MMIO write. Add support to recognize the MMIO requests and invoke SEV-ES specific routines that can complete the MMIO operation. These routines use common KVM support to complete the MMIO operation.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/sev.c | 116 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c |   3 +
 arch/x86/kvm/svm/svm.h |   6 ++
 arch/x86/kvm/x86.c     | 123 +++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.h     |   5 ++
 5 files changed, 253 insertions(+)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 92a4df26057a..740b44485f36 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1191,6 +1191,24 @@ static void pre_sev_es_run(struct vcpu_svm *svm)
 	if (!svm->ghcb)
 		return;
 
+	if (svm->ghcb_sa_free) {
+		/*
+		 * The scratch area lives outside the GHCB, so there is a
+		 * buffer that, depending on the operation performed, may
+		 * need to be synced, then freed.
+		 */
+		if (svm->ghcb_sa_sync) {
+			kvm_write_guest(svm->vcpu.kvm,
+					ghcb_get_sw_scratch(svm->ghcb),
+					svm->ghcb_sa, svm->ghcb_sa_len);
+			svm->ghcb_sa_sync = false;
+		}
+
+		kfree(svm->ghcb_sa);
+		svm->ghcb_sa = NULL;
+		svm->ghcb_sa_free = false;
+	}
+
 	trace_kvm_vmgexit_exit(svm->vcpu.vcpu_id, svm->ghcb);
 
 	kvm_vcpu_unmap(&svm->vcpu, &svm->ghcb_map, true);
@@ -1223,6 +1241,86 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu)
 	vmcb_mark_dirty(svm->vmcb, VMCB_ASID);
 }
 
+#define GHCB_SCRATCH_AREA_LIMIT	(16ULL * PAGE_SIZE)
+static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
+{
+	struct vmcb_control_area *control = &svm->vmcb->control;
+	struct ghcb *ghcb = svm->ghcb;
+	u64 ghcb_scratch_beg, ghcb_scratch_end;
+	u64 scratch_gpa_beg, scratch_gpa_end;
+	void *scratch_va;
+
+	scratch_gpa_beg = ghcb_get_sw_scratch(ghcb);
+	if (!scratch_gpa_beg) {
+		pr_err("vmgexit: scratch gpa not provided\n");
+		return false;
+	}
+
+	scratch_gpa_end = scratch_gpa_beg + len;
+	if (scratch_gpa_end < scratch_gpa_beg) {
+		pr_err("vmgexit: scratch length (%#llx) not valid for scratch address (%#llx)\n",
+		       len, scratch_gpa_beg);
+		return false;
+	}
+
+	if ((scratch_gpa_beg & PAGE_MASK) == control->ghcb_gpa) {
+		/* Scratch area begins within GHCB */
+		ghcb_scratch_beg = control->ghcb_gpa +
+				   offsetof(struct ghcb, shared_buffer);
+		ghcb_scratch_end = control->ghcb_gpa +
+				   offsetof(struct ghcb, reserved_1);
+
+		/*
+		 * If the scratch area begins within the GHCB, it must be
+		 * completely contained in the GHCB shared buffer area.
+		 */
+		if (scratch_gpa_beg < ghcb_scratch_beg ||
+		    scratch_gpa_end > ghcb_scratch_end) {
+			pr_err("vmgexit: scratch area is outside of GHCB shared buffer area (%#llx - %#llx)\n",
+			       scratch_gpa_beg, scratch_gpa_end);
+			return false;
+		}
+
+		scratch_va = (void *)svm->ghcb;
+		scratch_va += (scratch_gpa_beg - control->ghcb_gpa);
+	} else {
+		/*
+		 * The guest memory must be read into a kernel buffer, so
+		 * limit the size
+		 */
+		if (len > GHCB_SCRATCH_AREA_LIMIT) {
+			pr_err("vmgexit: scratch area exceeds KVM limits (%#llx requested, %#llx limit)\n",
+			       len, GHCB_SCRATCH_AREA_LIMIT);
+			return false;
+		}
+		scratch_va = kzalloc(len, GFP_KERNEL);
+		if (!scratch_va)
+			return false;
+
+		if (kvm_read_guest(svm->vcpu.kvm, scratch_gpa_beg, scratch_va, len)) {
+			/* Unable to copy scratch area from guest */
+			pr_err("vmgexit: kvm_read_guest for scratch area failed\n");
+
+			kfree(scratch_va);
+			return false;
+		}
+
+		/*
+		 * The scratch area is outside the GHCB. The operation will
+		 * dictate whether the buffer needs to be synced before running
+		 * the vCPU next time (i.e. a read was requested so the data
+		 * must be written back to the guest memory).
+		 */
+		svm->ghcb_sa_sync = sync;
+		svm->ghcb_sa_free = true;
+	}
+
+	svm->ghcb_sa = scratch_va;
+	svm->ghcb_sa_len = len;
+
+	return true;
+}
+
 static void set_ghcb_msr_bits(struct vcpu_svm *svm, u64 value, u64 mask,
 			      unsigned int pos)
 {
@@ -1356,6 +1454,24 @@ int sev_handle_vmgexit(struct vcpu_svm *svm)
 	ret = -EINVAL;
 	switch (ghcb_get_sw_exit_code(ghcb)) {
+	case SVM_VMGEXIT_MMIO_READ:
+		if (!setup_vmgexit_scratch(svm, true, control->exit_info_2))
+			break;
+
+		ret = kvm_sev_es_mmio_read(&svm->vcpu,
+					   control->exit_info_1,
+					   control->exit_info_2,
+					   svm->ghcb_sa);
+		break;
+	case SVM_VMGEXIT_MMIO_WRITE:
+		if (!setup_vmgexit_scratch(svm, false, control->exit_info_2))
+			break;
+
+		ret = kvm_sev_es_mmio_write(&svm->vcpu,
+					    control->exit_info_1,
+					    control->exit_info_2,
+					    svm->ghcb_sa);
+		break;
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
 		pr_err("vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
 		       control->exit_info_1,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 89ee9d533e9a..439b0d0e53eb 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1306,6 +1306,9 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 		}
 
 		__free_page(virt_to_page(svm->vmsa));
+
+		if (svm->ghcb_sa_free)
+			kfree(svm->ghcb_sa);
 	}
 
 	__free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT));
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 3574f52f8a1c..8de45462ff4a 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -165,6 +165,12 @@ struct vcpu_svm {
 	struct vmcb_save_area *vmsa;
 	struct ghcb *ghcb;
 	struct kvm_host_map ghcb_map;
+
+	/* SEV-ES scratch area support */
+	void *ghcb_sa;
+	u64 ghcb_sa_len;
+	bool ghcb_sa_sync;
+	bool ghcb_sa_free;
 };
 
 struct svm_cpu_data {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2a2a394126a2..a0070eeeb139 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10768,6 +10768,129 @@ void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_c
 }
 EXPORT_SYMBOL_GPL(kvm_fixup_and_inject_pf_error);
 
+static int complete_sev_es_emulated_mmio(struct kvm_vcpu *vcpu)
+{
+	struct kvm_run *run = vcpu->run;
+	struct kvm_mmio_fragment *frag;
+	unsigned int len;
+
+	BUG_ON(!vcpu->mmio_needed);
+
+	/* Complete previous fragment */
+	frag = &vcpu->mmio_fragments[vcpu->mmio_cur_fragment];
+	len = min(8u, frag->len);
+	if (!vcpu->mmio_is_write)
+		memcpy(frag->data, run->mmio.data, len);
+
+	if (frag->len <= 8) {
+		/* Switch to the next fragment. */
+		frag++;
+		vcpu->mmio_cur_fragment++;
+	} else {
+		/* Go forward to the next mmio piece. */
+		frag->data += len;
+		frag->gpa += len;
+		frag->len -= len;
+	}
+
+	if (vcpu->mmio_cur_fragment >= vcpu->mmio_nr_fragments) {
+		vcpu->mmio_needed = 0;
+
+		// VMG change, at this point, we're always done
+		// RIP has already been advanced
+		return 1;
+	}
+
+	// More MMIO is needed
+	run->mmio.phys_addr = frag->gpa;
+	run->mmio.len = min(8u, frag->len);
+	run->mmio.is_write = vcpu->mmio_is_write;
+	if (run->mmio.is_write)
+		memcpy(run->mmio.data, frag->data, min(8u, frag->len));
+	run->exit_reason = KVM_EXIT_MMIO;
+
+	vcpu->arch.complete_userspace_io = complete_sev_es_emulated_mmio;
+
+	return 0;
+}
+
+int kvm_sev_es_mmio_write(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned int bytes,
+			  void *data)
+{
+	int handled;
+	struct kvm_mmio_fragment *frag;
+
+	if (!data)
+		return -EINVAL;
+
+	handled = write_emultor.read_write_mmio(vcpu, gpa, bytes, data);
+	if (handled == bytes)
+		return 1;
+
+	bytes -= handled;
+	gpa += handled;
+	data += handled;
+
+	/*TODO: Check if need to increment number of frags */
+	frag = vcpu->mmio_fragments;
+	vcpu->mmio_nr_fragments = 1;
+	frag->len = bytes;
+	frag->gpa = gpa;
+	frag->data = data;
+
+	vcpu->mmio_needed = 1;
+	vcpu->mmio_cur_fragment = 0;
+
+	vcpu->run->mmio.phys_addr = gpa;
+	vcpu->run->mmio.len = min(8u, frag->len);
+	vcpu->run->mmio.is_write = 1;
+	memcpy(vcpu->run->mmio.data, frag->data, min(8u, frag->len));
+	vcpu->run->exit_reason = KVM_EXIT_MMIO;
+
+	vcpu->arch.complete_userspace_io = complete_sev_es_emulated_mmio;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_sev_es_mmio_write);
+
+int kvm_sev_es_mmio_read(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned int bytes,
+			 void *data)
+{
+	int handled;
+	struct kvm_mmio_fragment *frag;
+
+	if (!data)
+		return -EINVAL;
+
+	handled = read_emultor.read_write_mmio(vcpu, gpa, bytes, data);
+	if (handled == bytes)
+		return 1;
+
+	bytes -= handled;
+	gpa += handled;
+	data += handled;
+
+	/*TODO: Check if need to increment number of frags */
+	frag = vcpu->mmio_fragments;
+	vcpu->mmio_nr_fragments = 1;
+	frag->len = bytes;
+	frag->gpa = gpa;
+	frag->data = data;
+
+	vcpu->mmio_needed = 1;
+	vcpu->mmio_cur_fragment = 0;
+
+	vcpu->run->mmio.phys_addr = gpa;
+	vcpu->run->mmio.len = min(8u, frag->len);
+	vcpu->run->mmio.is_write = 0;
+	vcpu->run->exit_reason = KVM_EXIT_MMIO;
+
+	vcpu->arch.complete_userspace_io = complete_sev_es_emulated_mmio;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_sev_es_mmio_read);
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 995ab696dcf0..ce3b7d3d8631 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -398,4 +398,9 @@ bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu);
 	__reserved_bits;				\
 })
 
+int kvm_sev_es_mmio_write(struct kvm_vcpu *vcpu, gpa_t src, unsigned int bytes,
+			  void *dst);
+int kvm_sev_es_mmio_read(struct kvm_vcpu *vcpu, gpa_t src, unsigned int bytes,
+			 void *dst);
+
 #endif
From patchwork Mon Sep 14 20:15:33 2020
From: Tom Lendacky
Subject: [RFC PATCH 19/35] KVM: SVM: Support port IO operations for an SEV-ES guest
Date: Mon, 14 Sep 2020 15:15:33 -0500
Message-Id: <6f7e098b00172c2ae9a9ada9224fa4e8a8839cc2.1600114548.git.thomas.lendacky@amd.com>
For an SEV-ES guest, port IO is performed to a shared (un-encrypted) page
so that both the hypervisor and guest can read or write to it and each
see the contents.

For port IO operations, invoke SEV-ES specific routines that can complete
the operation using common KVM port IO support.

Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/svm/sev.c          |  9 ++++++
 arch/x86/kvm/svm/svm.c          | 11 +++++--
 arch/x86/kvm/svm/svm.h          |  1 +
 arch/x86/kvm/x86.c              | 51 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.h              |  3 ++
 6 files changed, 73 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3e2a3d2a8ba8..7320a9c68a5a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -613,6 +613,7 @@ struct kvm_vcpu_arch {
 
         struct kvm_pio_request pio;
         void *pio_data;
+        void *guest_ins_data;
 
         u8 event_exit_inst_len;
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 740b44485f36..da1736d228a6 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1483,3 +1483,12 @@ int sev_handle_vmgexit(struct vcpu_svm *svm)
 
         return ret;
 }
+
+int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in)
+{
+        if (!setup_vmgexit_scratch(svm, in, svm->vmcb->control.exit_info_2))
+                return -EINVAL;
+
+        return kvm_sev_es_string_io(&svm->vcpu, size, port,
+                                    svm->ghcb_sa, svm->ghcb_sa_len, in);
+}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 439b0d0e53eb..37c98e85aa62 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1984,11 +1984,16 @@ static int io_interception(struct vcpu_svm *svm)
         ++svm->vcpu.stat.io_exits;
         string = (io_info & SVM_IOIO_STR_MASK) != 0;
         in = (io_info & SVM_IOIO_TYPE_MASK) != 0;
-        if (string)
-                return kvm_emulate_instruction(vcpu, 0);
-
         port = io_info >> 16;
         size = (io_info & SVM_IOIO_SIZE_MASK) >> SVM_IOIO_SIZE_SHIFT;
+
+        if (string) {
+                if (sev_es_guest(vcpu->kvm))
+                        return sev_es_string_io(svm, size, port, in);
+                else
+                        return kvm_emulate_instruction(vcpu, 0);
+        }
+
         svm->next_rip = svm->vmcb->control.exit_info_2;
 
         return kvm_fast_pio(&svm->vcpu, size, port, in);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 8de45462ff4a..9f1c8ed88c79 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -564,6 +564,7 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu);
 void __init sev_hardware_setup(void);
 void sev_hardware_teardown(void);
 int sev_handle_vmgexit(struct vcpu_svm *svm);
+int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
 
 /* VMSA Accessor functions */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a0070eeeb139..674719d801d2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10372,6 +10372,10 @@ int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu)
 
 unsigned long kvm_get_linear_rip(struct kvm_vcpu *vcpu)
 {
+        /* Can't read RIP of an SEV-ES guest, just return 0 */
+        if (vcpu->arch.vmsa_encrypted)
+                return 0;
+
         if (is_64_bit_mode(vcpu))
                 return kvm_rip_read(vcpu);
         return (u32)(get_segment_base(vcpu, VCPU_SREG_CS) +
@@ -10768,6 +10772,53 @@ void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_c
 }
 EXPORT_SYMBOL_GPL(kvm_fixup_and_inject_pf_error);
+static int complete_sev_es_emulated_ins(struct kvm_vcpu *vcpu)
+{
+        memcpy(vcpu->arch.guest_ins_data, vcpu->arch.pio_data,
+               vcpu->arch.pio.count * vcpu->arch.pio.size);
+        vcpu->arch.pio.count = 0;
+
+        return 1;
+}
+
+static int kvm_sev_es_outs(struct kvm_vcpu *vcpu, unsigned int size,
+                           unsigned int port, void *data, unsigned int count)
+{
+        int ret;
+
+        ret = emulator_pio_out_emulated(vcpu->arch.emulate_ctxt, size, port,
+                                        data, count);
+        vcpu->arch.pio.count = 0;
+
+        return 0;
+}
+
+static int kvm_sev_es_ins(struct kvm_vcpu *vcpu, unsigned int size,
+                          unsigned int port, void *data, unsigned int count)
+{
+        int ret;
+
+        ret = emulator_pio_in_emulated(vcpu->arch.emulate_ctxt, size, port,
+                                       data, count);
+        if (ret) {
+                vcpu->arch.pio.count = 0;
+        } else {
+                vcpu->arch.guest_ins_data = data;
+                vcpu->arch.complete_userspace_io = complete_sev_es_emulated_ins;
+        }
+
+        return 0;
+}
+
+int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
+                         unsigned int port, void *data, unsigned int count,
+                         int in)
+{
+        return in ? kvm_sev_es_ins(vcpu, size, port, data, count)
+                  : kvm_sev_es_outs(vcpu, size, port, data, count);
+}
+EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);
+
 static int complete_sev_es_emulated_mmio(struct kvm_vcpu *vcpu)
 {
         struct kvm_run *run = vcpu->run;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index ce3b7d3d8631..ae68670f5289 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -402,5 +402,8 @@ int kvm_sev_es_mmio_write(struct kvm_vcpu *vcpu, gpa_t src, unsigned int bytes,
                  void *dst);
 int kvm_sev_es_mmio_read(struct kvm_vcpu *vcpu, gpa_t src, unsigned int bytes,
                  void *dst);
+int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
+                         unsigned int port, void *data, unsigned int count,
+                         int in);
 
 #endif
From patchwork Mon Sep 14 20:15:34 2020
From: Tom Lendacky
Subject: [RFC PATCH 20/35] KVM: SVM: Add SEV/SEV-ES support for intercepting INVD
Date: Mon, 14 Sep 2020 15:15:34 -0500
The INVD instruction intercept performs emulation.
Emulation can't be done on an SEV or SEV-ES guest because the guest memory
is encrypted.

Provide a specific intercept routine for the INVD intercept. Within this
intercept routine, skip the instruction for an SEV or SEV-ES guest since
it is emulated as a NOP anyway.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/svm.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 37c98e85aa62..ac64a5b128b2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2275,6 +2275,17 @@ static int iret_interception(struct vcpu_svm *svm)
         return 1;
 }
 
+static int invd_interception(struct vcpu_svm *svm)
+{
+        /*
+         * Can't do emulation on any type of SEV guest and INVD is emulated
+         * as a NOP, so just skip it.
+         */
+        return (sev_guest(svm->vcpu.kvm))
+                ? kvm_skip_emulated_instruction(&svm->vcpu)
+                : kvm_emulate_instruction(&svm->vcpu, 0);
+}
+
 static int invlpg_interception(struct vcpu_svm *svm)
 {
         if (!static_cpu_has(X86_FEATURE_DECODEASSISTS))
@@ -2912,7 +2923,7 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
         [SVM_EXIT_RDPMC]                        = rdpmc_interception,
         [SVM_EXIT_CPUID]                        = cpuid_interception,
         [SVM_EXIT_IRET]                         = iret_interception,
-        [SVM_EXIT_INVD]                         = emulate_on_interception,
+        [SVM_EXIT_INVD]                         = invd_interception,
         [SVM_EXIT_PAUSE]                        = pause_interception,
         [SVM_EXIT_HLT]                          = halt_interception,
         [SVM_EXIT_INVLPG]                       = invlpg_interception,
From patchwork Mon Sep 14 20:15:35 2020
From: Tom Lendacky
Subject: [RFC PATCH 21/35] KVM: SVM: Add support for EFER write traps for an SEV-ES guest
Date: Mon, 14 Sep 2020 15:15:35 -0500
Message-Id: <240ce8d7c9bd84adf66a4a0625150a1ae215345e.1600114548.git.thomas.lendacky@amd.com>
For SEV-ES guests, the interception of EFER write access is not
recommended. EFER interception occurs prior to EFER being modified and
the hypervisor is unable to modify EFER itself because the register is
located in the encrypted register state.

SEV-ES support introduces a new EFER write trap. This trap provides
intercept support of an EFER write after it has been modified. The new
EFER value is provided in the VMCB EXITINFO1 field, allowing the
hypervisor to track the setting of the guest EFER.

Add support to track the guest EFER value using the EFER write trap so
that the hypervisor understands the guest operating mode.
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/include/uapi/asm/svm.h |  2 ++
 arch/x86/kvm/svm/svm.c          | 12 ++++++++++++
 arch/x86/kvm/x86.c              | 12 ++++++++++++
 4 files changed, 27 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7320a9c68a5a..b535b690eb66 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1427,6 +1427,7 @@ void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
 int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int idt_index,
                     int reason, bool has_error_code, u32 error_code);
 
+int kvm_track_efer(struct kvm_vcpu *vcpu, u64 efer);
 int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3);
 int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index 0bc3942ffdd3..ce937a242995 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -77,6 +77,7 @@
 #define SVM_EXIT_MWAIT_COND    0x08c
 #define SVM_EXIT_XSETBV        0x08d
 #define SVM_EXIT_RDPRU         0x08e
+#define SVM_EXIT_EFER_WRITE_TRAP       0x08f
 #define SVM_EXIT_NPF           0x400
 #define SVM_EXIT_AVIC_INCOMPLETE_IPI           0x401
 #define SVM_EXIT_AVIC_UNACCELERATED_ACCESS     0x402
@@ -183,6 +184,7 @@
         { SVM_EXIT_MONITOR,     "monitor" }, \
         { SVM_EXIT_MWAIT,       "mwait" }, \
         { SVM_EXIT_XSETBV,      "xsetbv" }, \
+        { SVM_EXIT_EFER_WRITE_TRAP,            "write_efer_trap" }, \
         { SVM_EXIT_NPF,         "npf" }, \
         { SVM_EXIT_AVIC_INCOMPLETE_IPI,        "avic_incomplete_ipi" }, \
         { SVM_EXIT_AVIC_UNACCELERATED_ACCESS,  "avic_unaccelerated_access" }, \
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ac64a5b128b2..ac467225a51d 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2466,6 +2466,17 @@ static int cr8_write_interception(struct vcpu_svm *svm)
         return 0;
 }
 
+static int efer_trap(struct vcpu_svm *svm)
+{
+        int ret;
+
+        ret = kvm_track_efer(&svm->vcpu, svm->vmcb->control.exit_info_1);
+        if (ret)
+                return ret;
+
+        return kvm_complete_insn_gp(&svm->vcpu, 0);
+}
+
 static int svm_get_msr_feature(struct kvm_msr_entry *msr)
 {
         msr->data = 0;
@@ -2944,6 +2955,7 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
         [SVM_EXIT_MWAIT]                        = mwait_interception,
         [SVM_EXIT_XSETBV]                       = xsetbv_interception,
         [SVM_EXIT_RDPRU]                        = rdpru_interception,
+        [SVM_EXIT_EFER_WRITE_TRAP]              = efer_trap,
         [SVM_EXIT_NPF]                          = npf_interception,
         [SVM_EXIT_RSM]                          = rsm_interception,
         [SVM_EXIT_AVIC_INCOMPLETE_IPI]          = avic_incomplete_ipi_interception,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 674719d801d2..b65bd0c986d4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1480,6 +1480,18 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
         return 0;
 }
 
+int kvm_track_efer(struct kvm_vcpu *vcpu, u64 efer)
+{
+        struct msr_data msr_info;
+
+        msr_info.host_initiated = false;
+        msr_info.index = MSR_EFER;
+        msr_info.data = efer;
+
+        return set_efer(vcpu, &msr_info);
+}
+EXPORT_SYMBOL_GPL(kvm_track_efer);
+
 void kvm_enable_efer_bits(u64 mask)
 {
         efer_reserved_bits &= ~mask;
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 22/35] KVM: SVM: Add support for CR0 write traps for an SEV-ES guest
Date: Mon, 14 Sep 2020 15:15:36 -0500
Message-Id: <68f885b63b18e5c72eae92c9c681296083c0ccd8.1600114548.git.thomas.lendacky@amd.com>
From: Tom Lendacky

For SEV-ES guests, intercepting control register write access is not recommended. Control register interception occurs before the control register is modified, and the hypervisor cannot modify the control register itself because the register is located in the encrypted register state.

SEV-ES guests introduce new control register write traps. These traps intercept a control register write after the control register has been modified. The new control register value is provided in the VMCB EXITINFO1 field, allowing the hypervisor to track the setting of the guest control registers.

Add support to track the value of the guest CR0 register using the control register write trap so that the hypervisor understands the guest operating mode.
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/include/uapi/asm/svm.h | 17 +++++++++++++
 arch/x86/kvm/svm/svm.c          | 20 +++++++++++++++
 arch/x86/kvm/x86.c              | 43 ++++++++++++++++++++++++---------
 4 files changed, 69 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b535b690eb66..9cc9b65bea7e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1432,6 +1432,7 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3);
 int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 int kvm_set_cr8(struct kvm_vcpu *vcpu, unsigned long cr8);
+int kvm_track_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val);
 int kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val);
 unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu);

diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index ce937a242995..cc45d7996e9c 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -78,6 +78,22 @@
 #define SVM_EXIT_XSETBV 0x08d
 #define SVM_EXIT_RDPRU 0x08e
 #define SVM_EXIT_EFER_WRITE_TRAP 0x08f
+#define SVM_EXIT_CR0_WRITE_TRAP 0x090
+#define SVM_EXIT_CR1_WRITE_TRAP 0x091
+#define SVM_EXIT_CR2_WRITE_TRAP 0x092
+#define SVM_EXIT_CR3_WRITE_TRAP 0x093
+#define SVM_EXIT_CR4_WRITE_TRAP 0x094
+#define SVM_EXIT_CR5_WRITE_TRAP 0x095
+#define SVM_EXIT_CR6_WRITE_TRAP 0x096
+#define SVM_EXIT_CR7_WRITE_TRAP 0x097
+#define SVM_EXIT_CR8_WRITE_TRAP 0x098
+#define SVM_EXIT_CR9_WRITE_TRAP 0x099
+#define SVM_EXIT_CR10_WRITE_TRAP 0x09a
+#define SVM_EXIT_CR11_WRITE_TRAP 0x09b
+#define SVM_EXIT_CR12_WRITE_TRAP 0x09c
+#define SVM_EXIT_CR13_WRITE_TRAP 0x09d
+#define SVM_EXIT_CR14_WRITE_TRAP 0x09e
+#define SVM_EXIT_CR15_WRITE_TRAP 0x09f
 #define SVM_EXIT_NPF 0x400
 #define SVM_EXIT_AVIC_INCOMPLETE_IPI 0x401
 #define SVM_EXIT_AVIC_UNACCELERATED_ACCESS 0x402
@@ -185,6 +201,7 @@
 	{ SVM_EXIT_MWAIT, "mwait" }, \
 	{ SVM_EXIT_XSETBV, "xsetbv" }, \
 	{ SVM_EXIT_EFER_WRITE_TRAP, "write_efer_trap" }, \
+	{ SVM_EXIT_CR0_WRITE_TRAP, "write_cr0_trap" }, \
 	{ SVM_EXIT_NPF, "npf" }, \
 	{ SVM_EXIT_AVIC_INCOMPLETE_IPI, "avic_incomplete_ipi" }, \
 	{ SVM_EXIT_AVIC_UNACCELERATED_ACCESS, "avic_unaccelerated_access" }, \

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ac467225a51d..506656988559 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2413,6 +2413,25 @@ static int cr_interception(struct vcpu_svm *svm)
 	return kvm_complete_insn_gp(&svm->vcpu, err);
 }
 
+static int cr_trap(struct vcpu_svm *svm)
+{
+	unsigned int cr;
+
+	cr = svm->vmcb->control.exit_code - SVM_EXIT_CR0_WRITE_TRAP;
+
+	switch (cr) {
+	case 0:
+		kvm_track_cr0(&svm->vcpu, svm->vmcb->control.exit_info_1);
+		break;
+	default:
+		WARN(1, "unhandled CR%d write trap", cr);
+		kvm_queue_exception(&svm->vcpu, UD_VECTOR);
+		return 1;
+	}
+
+	return kvm_complete_insn_gp(&svm->vcpu, 0);
+}
+
 static int dr_interception(struct vcpu_svm *svm)
 {
 	int reg, dr;
@@ -2956,6 +2975,7 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
 	[SVM_EXIT_XSETBV] = xsetbv_interception,
 	[SVM_EXIT_RDPRU] = rdpru_interception,
 	[SVM_EXIT_EFER_WRITE_TRAP] = efer_trap,
+	[SVM_EXIT_CR0_WRITE_TRAP] = cr_trap,
 	[SVM_EXIT_NPF] = npf_interception,
 	[SVM_EXIT_RSM] = rsm_interception,
 	[SVM_EXIT_AVIC_INCOMPLETE_IPI] = avic_incomplete_ipi_interception,

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b65bd0c986d4..6f5988c305e1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -799,11 +799,29 @@ bool pdptrs_changed(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(pdptrs_changed);
 
+static void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0,
+			     unsigned long cr0)
+{
+	unsigned long update_bits = X86_CR0_PG | X86_CR0_WP;
+
+	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
+		kvm_clear_async_pf_completion_queue(vcpu);
+		kvm_async_pf_hash_reset(vcpu);
+	}
+
+	if ((cr0 ^ old_cr0) & update_bits)
+		kvm_mmu_reset_context(vcpu);
+
+	if (((cr0 ^ old_cr0) & X86_CR0_CD) &&
+	    kvm_arch_has_noncoherent_dma(vcpu->kvm) &&
+	    !kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
+		kvm_zap_gfn_range(vcpu->kvm, 0, ~0ULL);
+}
+
 int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
 	unsigned long old_cr0 = kvm_read_cr0(vcpu);
 	unsigned long pdptr_bits = X86_CR0_CD | X86_CR0_NW | X86_CR0_PG;
-	unsigned long update_bits = X86_CR0_PG | X86_CR0_WP;
 
 	cr0 |= X86_CR0_ET;
@@ -842,22 +860,23 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	kvm_x86_ops.set_cr0(vcpu, cr0);
 
-	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
-		kvm_clear_async_pf_completion_queue(vcpu);
-		kvm_async_pf_hash_reset(vcpu);
-	}
+	kvm_post_set_cr0(vcpu, old_cr0, cr0);
 
-	if ((cr0 ^ old_cr0) & update_bits)
-		kvm_mmu_reset_context(vcpu);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_set_cr0);
 
-	if (((cr0 ^ old_cr0) & X86_CR0_CD) &&
-	    kvm_arch_has_noncoherent_dma(vcpu->kvm) &&
-	    !kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
-		kvm_zap_gfn_range(vcpu->kvm, 0, ~0ULL);
+int kvm_track_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+{
+	unsigned long old_cr0 = kvm_read_cr0(vcpu);
+
+	vcpu->arch.cr0 = cr0;
+
+	kvm_post_set_cr0(vcpu, old_cr0, cr0);
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_cr0);
+EXPORT_SYMBOL_GPL(kvm_track_cr0);
 
 void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
 {

From patchwork Mon Sep 14 20:15:37 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774793
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 23/35] KVM: SVM: Add support for CR4 write traps for an SEV-ES guest
Date: Mon, 14 Sep 2020 15:15:37 -0500
Message-Id: <97f610c7fcf0410985a3ff4cd6d4013f83fe59e6.1600114548.git.thomas.lendacky@amd.com>
From: Tom Lendacky

For SEV-ES guests, intercepting control register write access is not recommended. Control register interception occurs before the control register is modified, and the hypervisor cannot modify the control register itself because the register is located in the encrypted register state.

SEV-ES guests introduce new control register write traps. These traps intercept a control register write after the control register has been modified. The new control register value is provided in the VMCB EXITINFO1 field, allowing the hypervisor to track the setting of the guest control registers.

Add support to track the value of the guest CR4 register using the control register write trap so that the hypervisor understands the guest operating mode.
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/include/uapi/asm/svm.h |  1 +
 arch/x86/kvm/svm/svm.c          |  4 ++++
 arch/x86/kvm/x86.c              | 20 ++++++++++++++++++++
 4 files changed, 26 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9cc9b65bea7e..e4fd2600ecf6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1433,6 +1433,7 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3);
 int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 int kvm_set_cr8(struct kvm_vcpu *vcpu, unsigned long cr8);
 int kvm_track_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
+int kvm_track_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val);
 int kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val);
 unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu);

diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index cc45d7996e9c..ea88789d71f2 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -202,6 +202,7 @@
 	{ SVM_EXIT_XSETBV, "xsetbv" }, \
 	{ SVM_EXIT_EFER_WRITE_TRAP, "write_efer_trap" }, \
 	{ SVM_EXIT_CR0_WRITE_TRAP, "write_cr0_trap" }, \
+	{ SVM_EXIT_CR4_WRITE_TRAP, "write_cr4_trap" }, \
 	{ SVM_EXIT_NPF, "npf" }, \
 	{ SVM_EXIT_AVIC_INCOMPLETE_IPI, "avic_incomplete_ipi" }, \
 	{ SVM_EXIT_AVIC_UNACCELERATED_ACCESS, "avic_unaccelerated_access" }, \

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 506656988559..ec5efa1d4344 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2423,6 +2423,9 @@ static int cr_trap(struct vcpu_svm *svm)
 	case 0:
 		kvm_track_cr0(&svm->vcpu, svm->vmcb->control.exit_info_1);
 		break;
+	case 4:
+		kvm_track_cr4(&svm->vcpu, svm->vmcb->control.exit_info_1);
+		break;
 	default:
 		WARN(1, "unhandled CR%d write trap", cr);
 		kvm_queue_exception(&svm->vcpu, UD_VECTOR);
@@ -2976,6 +2979,7 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
 	[SVM_EXIT_RDPRU] = rdpru_interception,
 	[SVM_EXIT_EFER_WRITE_TRAP] = efer_trap,
 	[SVM_EXIT_CR0_WRITE_TRAP] = cr_trap,
+	[SVM_EXIT_CR4_WRITE_TRAP] = cr_trap,
 	[SVM_EXIT_NPF] = npf_interception,
 	[SVM_EXIT_RSM] = rsm_interception,
 	[SVM_EXIT_AVIC_INCOMPLETE_IPI] = avic_incomplete_ipi_interception,

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6f5988c305e1..5e5f1e8fed3a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1033,6 +1033,26 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 }
 EXPORT_SYMBOL_GPL(kvm_set_cr4);
 
+int kvm_track_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+{
+	unsigned long old_cr4 = kvm_read_cr4(vcpu);
+	unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
+				   X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE;
+
+	if (kvm_x86_ops.set_cr4(vcpu, cr4))
+		return 1;
+
+	if (((cr4 ^ old_cr4) & pdptr_bits) ||
+	    (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
+		kvm_mmu_reset_context(vcpu);
+
+	if ((cr4 ^ old_cr4) & (X86_CR4_OSXSAVE | X86_CR4_PKE))
+		kvm_update_cpuid_runtime(vcpu);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_track_cr4);
+
 int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 {
 	bool skip_tlb_flush = false;

From patchwork Mon Sep 14 20:15:38 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774841
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 24/35] KVM: SVM: Add support for CR8 write traps for an SEV-ES guest
Date: Mon, 14 Sep 2020 15:15:38 -0500
From: Tom Lendacky

For SEV-ES guests, intercepting control register write access is not recommended. Control register interception occurs before the control register is modified, and the hypervisor cannot modify the control register itself because the register is located in the encrypted register state.

SEV-ES guests introduce new control register write traps. These traps intercept a control register write after the control register has been modified. The new control register value is provided in the VMCB EXITINFO1 field, allowing the hypervisor to track the setting of the guest control registers.

Add support to track the value of the guest CR8 register using the control register write trap so that the hypervisor understands the guest operating mode.
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/include/uapi/asm/svm.h | 1 +
 arch/x86/kvm/svm/svm.c          | 4 ++++
 arch/x86/kvm/x86.c              | 6 ++++++
 4 files changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e4fd2600ecf6..790659494aae 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1434,6 +1434,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 int kvm_set_cr8(struct kvm_vcpu *vcpu, unsigned long cr8);
 int kvm_track_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 int kvm_track_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
+int kvm_track_cr8(struct kvm_vcpu *vcpu, unsigned long cr8);
 int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val);
 int kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val);
 unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu);

diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index ea88789d71f2..60830088e8e3 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -203,6 +203,7 @@
 	{ SVM_EXIT_EFER_WRITE_TRAP, "write_efer_trap" }, \
 	{ SVM_EXIT_CR0_WRITE_TRAP, "write_cr0_trap" }, \
 	{ SVM_EXIT_CR4_WRITE_TRAP, "write_cr4_trap" }, \
+	{ SVM_EXIT_CR8_WRITE_TRAP, "write_cr8_trap" }, \
 	{ SVM_EXIT_NPF, "npf" }, \
 	{ SVM_EXIT_AVIC_INCOMPLETE_IPI, "avic_incomplete_ipi" }, \
 	{ SVM_EXIT_AVIC_UNACCELERATED_ACCESS, "avic_unaccelerated_access" }, \

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ec5efa1d4344..b35c2de1130c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2426,6 +2426,9 @@ static int cr_trap(struct vcpu_svm *svm)
 	case 4:
 		kvm_track_cr4(&svm->vcpu, svm->vmcb->control.exit_info_1);
 		break;
+	case 8:
+		kvm_track_cr8(&svm->vcpu, svm->vmcb->control.exit_info_1);
+		break;
 	default:
 		WARN(1, "unhandled CR%d write trap", cr);
 		kvm_queue_exception(&svm->vcpu, UD_VECTOR);
@@ -2980,6 +2983,7 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
 	[SVM_EXIT_EFER_WRITE_TRAP] = efer_trap,
 	[SVM_EXIT_CR0_WRITE_TRAP] = cr_trap,
 	[SVM_EXIT_CR4_WRITE_TRAP] = cr_trap,
+	[SVM_EXIT_CR8_WRITE_TRAP] = cr_trap,
 	[SVM_EXIT_NPF] = npf_interception,
 	[SVM_EXIT_RSM] = rsm_interception,
 	[SVM_EXIT_AVIC_INCOMPLETE_IPI] = avic_incomplete_ipi_interception,

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5e5f1e8fed3a..6e445a76b691 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1109,6 +1109,12 @@ unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_get_cr8);
 
+int kvm_track_cr8(struct kvm_vcpu *vcpu, unsigned long cr8)
+{
+	return kvm_set_cr8(vcpu, cr8);
+}
+EXPORT_SYMBOL_GPL(kvm_track_cr8);
+
 static void kvm_update_dr0123(struct kvm_vcpu *vcpu)
 {
 	int i;

From patchwork Mon Sep 14 20:15:39 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774789
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 25/35] KVM: x86: Update __get_sregs() / __set_sregs() to support SEV-ES
Date: Mon, 14 Sep 2020 15:15:39 -0500

Since many of the registers used by an SEV-ES guest are encrypted and cannot be read
or written, adjust __get_sregs() / __set_sregs() to get or set only the registers being tracked (EFER, CR0, CR4 and CR8) once the VMSA is encrypted.

For __get_sregs(), return the actual value that is in use by the guest, as determined by the write-trap support for those registers. For __set_sregs(), set the arch-specific value that KVM believes the guest to be using. Note that this does not set the guest's actual value, so it is likely only useful for cases such as live migration.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/x86.c | 56 +++++++++++++++++++++++++++-------------------
 1 file changed, 33 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6e445a76b691..76efe70cd635 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9090,6 +9090,9 @@ static void __get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 {
 	struct desc_ptr dt;
 
+	if (vcpu->arch.vmsa_encrypted)
+		goto tracking_regs;
+
 	kvm_get_segment(vcpu, &sregs->cs, VCPU_SREG_CS);
 	kvm_get_segment(vcpu, &sregs->ds, VCPU_SREG_DS);
 	kvm_get_segment(vcpu, &sregs->es, VCPU_SREG_ES);
@@ -9107,12 +9110,15 @@ static void __get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 	sregs->gdt.limit = dt.size;
 	sregs->gdt.base = dt.address;
 
-	sregs->cr0 = kvm_read_cr0(vcpu);
 	sregs->cr2 = vcpu->arch.cr2;
 	sregs->cr3 = kvm_read_cr3(vcpu);
+
+tracking_regs:
+	sregs->cr0 = kvm_read_cr0(vcpu);
 	sregs->cr4 = kvm_read_cr4(vcpu);
 	sregs->cr8 = kvm_get_cr8(vcpu);
 	sregs->efer = vcpu->arch.efer;
+
 	sregs->apic_base = kvm_get_apic_base(vcpu);
 
 	memset(sregs->interrupt_bitmap, 0, sizeof(sregs->interrupt_bitmap));
@@ -9248,18 +9254,6 @@ static int __set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 	if (kvm_set_apic_base(vcpu, &apic_base_msr))
 		goto out;
 
-	dt.size = sregs->idt.limit;
-	dt.address = sregs->idt.base;
-	kvm_x86_ops.set_idt(vcpu, &dt);
-	dt.size = sregs->gdt.limit;
-	dt.address = sregs->gdt.base;
-	kvm_x86_ops.set_gdt(vcpu, &dt);
-
-	vcpu->arch.cr2 = sregs->cr2;
-	mmu_reset_needed |= kvm_read_cr3(vcpu) != sregs->cr3;
-	vcpu->arch.cr3 = sregs->cr3;
-	kvm_register_mark_available(vcpu, VCPU_EXREG_CR3);
-
 	kvm_set_cr8(vcpu, sregs->cr8);
 
 	mmu_reset_needed |= vcpu->arch.efer != sregs->efer;
@@ -9276,6 +9270,14 @@ static int __set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 	if (cpuid_update_needed)
 		kvm_update_cpuid_runtime(vcpu);
 
+	if (vcpu->arch.vmsa_encrypted)
+		goto tracking_regs;
+
+	vcpu->arch.cr2 = sregs->cr2;
+	mmu_reset_needed |= kvm_read_cr3(vcpu) != sregs->cr3;
+	vcpu->arch.cr3 = sregs->cr3;
+	kvm_register_mark_available(vcpu, VCPU_EXREG_CR3);
+
 	idx = srcu_read_lock(&vcpu->kvm->srcu);
 	if (is_pae_paging(vcpu)) {
 		load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu));
@@ -9283,16 +9285,12 @@ static int __set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 	}
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 
-	if (mmu_reset_needed)
-		kvm_mmu_reset_context(vcpu);
-
-	max_bits = KVM_NR_INTERRUPTS;
-	pending_vec = find_first_bit(
-		(const unsigned long *)sregs->interrupt_bitmap, max_bits);
-	if (pending_vec < max_bits) {
-		kvm_queue_interrupt(vcpu, pending_vec, false);
-		pr_debug("Set back pending irq %d\n", pending_vec);
-	}
+	dt.size = sregs->idt.limit;
+	dt.address = sregs->idt.base;
+	kvm_x86_ops.set_idt(vcpu, &dt);
+	dt.size = sregs->gdt.limit;
+	dt.address = sregs->gdt.base;
+	kvm_x86_ops.set_gdt(vcpu, &dt);
 
 	kvm_set_segment(vcpu, &sregs->cs, VCPU_SREG_CS);
 	kvm_set_segment(vcpu, &sregs->ds, VCPU_SREG_DS);
@@ -9312,6 +9310,18 @@ static int __set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 	    !is_protmode(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
 
+tracking_regs:
+	if (mmu_reset_needed)
+		kvm_mmu_reset_context(vcpu);
+
+	max_bits = KVM_NR_INTERRUPTS;
+	pending_vec = find_first_bit(
+		(const unsigned long *)sregs->interrupt_bitmap, max_bits);
+	if (pending_vec < max_bits) {
+		kvm_queue_interrupt(vcpu, pending_vec, false);
+		pr_debug("Set back pending irq %d\n", pending_vec);
+	}
+
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 
 	ret = 0;

From patchwork Mon Sep 14 20:15:40 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774839
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 26/35] KVM: SVM: Guest FPU state save/restore not needed for SEV-ES guest
Date: Mon, 14 Sep 2020 15:15:40 -0500

The guest FPU is automatically restored on VMRUN and saved on VMEXIT by the hardware, so there is no reason to do this in KVM.
Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/svm.c |  8 ++++++--
 arch/x86/kvm/x86.c     | 18 ++++++++++++++----
 2 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b35c2de1130c..48699c41b62a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3682,7 +3682,9 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	svm_set_dr6(svm, DR6_FIXED_1 | DR6_RTM);
 
 	clgi();
-	kvm_load_guest_xsave_state(vcpu);
+
+	if (!sev_es_guest(svm->vcpu.kvm))
+		kvm_load_guest_xsave_state(vcpu);
 
 	if (lapic_in_kernel(vcpu) &&
 		vcpu->arch.apic->lapic_timer.timer_advance_ns)
@@ -3728,7 +3730,9 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_before_interrupt(&svm->vcpu);
 
-	kvm_load_host_xsave_state(vcpu);
+	if (!sev_es_guest(svm->vcpu.kvm))
+		kvm_load_host_xsave_state(vcpu);
+
 	stgi();
 
 	/* Any pending NMI will happen here */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 76efe70cd635..a53e24c1c5d1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8896,9 +8896,14 @@ static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
 
 	kvm_save_current_fpu(vcpu->arch.user_fpu);
 
-	/* PKRU is separately restored in kvm_x86_ops.run. */
-	__copy_kernel_to_fpregs(&vcpu->arch.guest_fpu->state,
-				~XFEATURE_MASK_PKRU);
+	/*
+	 * An encrypted save area means that the guest state can't be
+	 * set by the hypervisor, so skip trying to set it.
+	 */
+	if (!vcpu->arch.vmsa_encrypted)
+		/* PKRU is separately restored in kvm_x86_ops.run. */
+		__copy_kernel_to_fpregs(&vcpu->arch.guest_fpu->state,
+					~XFEATURE_MASK_PKRU);
 
 	fpregs_mark_activate();
 	fpregs_unlock();
@@ -8911,7 +8916,12 @@ static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
 {
 	fpregs_lock();
 
-	kvm_save_current_fpu(vcpu->arch.guest_fpu);
+	/*
+	 * An encrypted save area means that the guest state can't be
+	 * read/saved by the hypervisor, so skip trying to save it.
+	 */
+	if (!vcpu->arch.vmsa_encrypted)
+		kvm_save_current_fpu(vcpu->arch.guest_fpu);
 
 	copy_kernel_to_fpregs(&vcpu->arch.user_fpu->state);

From patchwork Mon Sep 14 20:15:41 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774851
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 27/35] KVM: SVM: Add support for booting APs for an SEV-ES guest
Date: Mon, 14 Sep 2020 15:15:41 -0500

Typically under KVM, an AP is booted using the INIT-SIPI-SIPI sequence, where the guest vCPU register state is updated and then VMRUN is issued on the vCPU to begin execution of the AP. For an SEV-ES guest, this won't work because the guest register state is encrypted. Following the GHCB specification, the hypervisor must not alter the guest register state, so KVM must track an AP/vCPU boot. Should the guest want to park the AP, it must use the AP Reset Hold exit event in place of, for example, a HLT loop.
First AP boot (first INIT-SIPI-SIPI sequence): Execute the AP (vCPU) as it was initialized and measured by the SEV-ES support. It is up to the guest to transfer control of the AP to the proper location.

Subsequent AP boot: KVM will expect to receive an AP Reset Hold exit event indicating that the vCPU is being parked and will require an INIT-SIPI-SIPI sequence to awaken it. When the AP Reset Hold exit event is received, KVM will place the vCPU into a simulated HLT mode. Upon receiving the INIT-SIPI-SIPI sequence, KVM will make the vCPU runnable. It is again up to the guest to then transfer control of the AP to the proper location.

The GHCB specification also requires the hypervisor to save the address of an AP Jump Table so that, for example, vCPUs that have been parked by UEFI can be started by the OS. Provide support for the AP Jump Table set/get exit code.

Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/svm/sev.c          | 48 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c          |  7 +++++
 arch/x86/kvm/svm/svm.h          |  3 +++
 arch/x86/kvm/x86.c              |  9 +++++++
 5 files changed, 69 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 790659494aae..003f257d2155 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1237,6 +1237,8 @@ struct kvm_x86_ops {
 			     unsigned long val);
 
 	bool (*allow_debug)(struct kvm *kvm);
+
+	void (*vcpu_deliver_sipi_vector)(struct kvm_vcpu *vcpu, u8 vector);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index da1736d228a6..cbb5f1b191bb 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -16,6 +16,8 @@
 #include
 #include
+#include
+
 #include "x86.h"
 #include "svm.h"
 #include "trace.h"
@@ -1472,6 +1474,35 @@ int sev_handle_vmgexit(struct vcpu_svm *svm)
 					    control->exit_info_2,
 					    svm->ghcb_sa);
 		break;
+	case SVM_VMGEXIT_AP_HLT_LOOP:
+		svm->ap_hlt_loop = true;
+		ret = kvm_emulate_halt(&svm->vcpu);
+		break;
+	case SVM_VMGEXIT_AP_JUMP_TABLE: {
+		struct kvm_sev_info *sev = &to_kvm_svm(svm->vcpu.kvm)->sev_info;
+
+		switch (control->exit_info_1) {
+		case 0:
+			/* Set AP jump table address */
+			sev->ap_jump_table = control->exit_info_2;
+			break;
+		case 1:
+			/* Get AP jump table address */
+			ghcb_set_sw_exit_info_2(ghcb, sev->ap_jump_table);
+			break;
+		default:
+			pr_err("svm: vmgexit: unsupported AP jump table request - exit_info_1=%#llx\n",
+			       control->exit_info_1);
+			ghcb_set_sw_exit_info_1(ghcb, 1);
+			ghcb_set_sw_exit_info_2(ghcb,
+						X86_TRAP_UD |
+						SVM_EVTINJ_TYPE_EXEPT |
+						SVM_EVTINJ_VALID);
+		}
+
+		ret = 1;
+		break;
+	}
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
 		pr_err("vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
 		       control->exit_info_1,
@@ -1492,3 +1523,20 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in)
 	return kvm_sev_es_string_io(&svm->vcpu, size, port,
 				    svm->ghcb_sa, svm->ghcb_sa_len, in);
 }
+
+void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	/* First SIPI: Use the values as initially set by the VMM */
+	if (!svm->ap_hlt_loop)
+		return;
+
+	/*
+	 * Subsequent SIPI: Return from an AP Reset Hold VMGEXIT, where
+	 * the guest will set the CS and RIP. Set SW_EXIT_INFO_2 to a
+	 * non-zero value.
+	 */
+	ghcb_set_sw_exit_info_2(svm->ghcb, 1);
+	svm->ap_hlt_loop = false;
+}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 48699c41b62a..ce1707dc9464 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4343,6 +4343,11 @@ static bool svm_allow_debug(struct kvm *kvm)
 	return !sev_es_guest(kvm);
 }
 
+static void svm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
+{
+	sev_vcpu_deliver_sipi_vector(vcpu, vector);
+}
+
 static void svm_vm_destroy(struct kvm *kvm)
 {
 	avic_vm_destroy(kvm);
@@ -4486,6 +4491,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.reg_write_override = svm_reg_write_override,
 
 	.allow_debug = svm_allow_debug,
+
+	.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
 };
 
 static struct kvm_x86_init_ops svm_init_ops __initdata = {
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 9f1c8ed88c79..a0b226c90feb 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -67,6 +67,7 @@ struct kvm_sev_info {
 	int fd;				/* SEV device fd */
 	unsigned long pages_locked;	/* Number of pages locked */
 	struct list_head regions_list;	/* List of registered regions */
+	u64 ap_jump_table;		/* SEV-ES AP Jump Table address */
 };
 
 struct kvm_svm {
@@ -165,6 +166,7 @@ struct vcpu_svm {
 	struct vmcb_save_area *vmsa;
 	struct ghcb *ghcb;
 	struct kvm_host_map ghcb_map;
+	bool ap_hlt_loop;
 
 	/* SEV-ES scratch area support */
 	void *ghcb_sa;
@@ -565,6 +567,7 @@ void __init sev_hardware_setup(void);
 void sev_hardware_teardown(void);
 int sev_handle_vmgexit(struct vcpu_svm *svm);
 int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
+void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
 
 /* VMSA Accessor functions */
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a53e24c1c5d1..23564d02d158 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9772,6 +9772,15 @@ void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
 {
 	struct kvm_segment cs;
 
+	/*
+ * For SEV-ES, the register state can't be altered by KVM. If the VMSA + * is encrypted, call the vcpu_deliver_sipi_vector() x86 op. + */ + if (vcpu->arch.vmsa_encrypted) { + kvm_x86_ops.vcpu_deliver_sipi_vector(vcpu, vector); + return; + } + kvm_get_segment(vcpu, &cs, VCPU_SREG_CS); cs.selector = vector << 8; cs.base = vector << 12;
From patchwork Mon Sep 14 20:15:42 2020 From: Tom Lendacky To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh Subject: [RFC PATCH 28/35] KVM: X86: Update kvm_skip_emulated_instruction() for an SEV-ES guest Date: Mon, 14 Sep 2020 15:15:42 -0500
From: Tom Lendacky The register state for an SEV-ES guest is encrypted so the value of the RIP cannot be updated. For an automatic exit, the RIP will be advanced as necessary. For a non-automatic exit, it is up to the #VC handler in the guest to advance the RIP. Add support to skip any RIP updates in kvm_skip_emulated_instruction() for an SEV-ES guest.
Signed-off-by: Tom Lendacky --- arch/x86/kvm/x86.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 23564d02d158..1dbdca607511 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -6874,13 +6874,17 @@ static int kvm_vcpu_do_singlestep(struct kvm_vcpu *vcpu) int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu) { - unsigned long rflags = kvm_x86_ops.get_rflags(vcpu); + unsigned long rflags; int r; r = kvm_x86_ops.skip_emulated_instruction(vcpu); if (unlikely(!r)) return 0; + if (vcpu->arch.vmsa_encrypted) + return 1; + + rflags = kvm_x86_ops.get_rflags(vcpu); /* * rflags is the old, "raw" value of the flags. The new value has * not been saved yet.
From patchwork Mon Sep 14 20:15:43 2020 From: Tom Lendacky To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh Subject: [RFC PATCH 29/35] KVM: SVM: Add NMI support for an SEV-ES guest Date: Mon, 14 Sep 2020 15:15:43 -0500
From: Tom Lendacky The GHCB specification defines how NMIs are to be handled for an SEV-ES guest.
To detect the completion of an NMI the hypervisor must not intercept the IRET instruction (because a #VC while running the NMI will issue an IRET) and, instead, must receive an NMI Complete exit event from the guest. Update the KVM support for detecting the completion of NMIs in the guest to follow the GHCB specification. When an SEV-ES guest is active, the IRET instruction will no longer be intercepted. Now, when the NMI Complete exit event is received, the iret_interception() function will be called to simulate the completion of the NMI. Signed-off-by: Tom Lendacky --- arch/x86/kvm/svm/sev.c | 3 +++ arch/x86/kvm/svm/svm.c | 20 +++++++++++++------- 2 files changed, 16 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index cbb5f1b191bb..9bf7411a4b5d 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -1474,6 +1474,9 @@ int sev_handle_vmgexit(struct vcpu_svm *svm) control->exit_info_2, svm->ghcb_sa); break; + case SVM_VMGEXIT_NMI_COMPLETE: + ret = svm_invoke_exit_handler(svm, SVM_EXIT_IRET); + break; case SVM_VMGEXIT_AP_HLT_LOOP: svm->ap_hlt_loop = true; ret = kvm_emulate_halt(&svm->vcpu); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index ce1707dc9464..fcd4f0d983e9 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -2268,9 +2268,11 @@ static int cpuid_interception(struct vcpu_svm *svm) static int iret_interception(struct vcpu_svm *svm) { ++svm->vcpu.stat.nmi_window_exits; - svm_clr_intercept(svm, INTERCEPT_IRET); svm->vcpu.arch.hflags |= HF_IRET_MASK; - svm->nmi_iret_rip = kvm_rip_read(&svm->vcpu); + if (!sev_es_guest(svm->vcpu.kvm)) { + svm_clr_intercept(svm, INTERCEPT_IRET); + svm->nmi_iret_rip = kvm_rip_read(&svm->vcpu); + } kvm_make_request(KVM_REQ_EVENT, &svm->vcpu); return 1; } @@ -3242,7 +3244,8 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu) svm->vmcb->control.event_inj = SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_NMI; vcpu->arch.hflags |= HF_NMI_MASK; - 
svm_set_intercept(svm, INTERCEPT_IRET); + if (!sev_es_guest(svm->vcpu.kvm)) + svm_set_intercept(svm, INTERCEPT_IRET); ++vcpu->stat.nmi_injections; } @@ -3326,10 +3329,12 @@ static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked) if (masked) { svm->vcpu.arch.hflags |= HF_NMI_MASK; - svm_set_intercept(svm, INTERCEPT_IRET); + if (!sev_es_guest(svm->vcpu.kvm)) + svm_set_intercept(svm, INTERCEPT_IRET); } else { svm->vcpu.arch.hflags &= ~HF_NMI_MASK; - svm_clr_intercept(svm, INTERCEPT_IRET); + if (!sev_es_guest(svm->vcpu.kvm)) + svm_clr_intercept(svm, INTERCEPT_IRET); } } @@ -3507,8 +3512,9 @@ static void svm_complete_interrupts(struct vcpu_svm *svm) * If we've made progress since setting HF_IRET_MASK, we've * executed an IRET and can allow NMI injection. */ - if ((svm->vcpu.arch.hflags & HF_IRET_MASK) - && kvm_rip_read(&svm->vcpu) != svm->nmi_iret_rip) { + if ((svm->vcpu.arch.hflags & HF_IRET_MASK) && + (sev_es_guest(svm->vcpu.kvm) || + kvm_rip_read(&svm->vcpu) != svm->nmi_iret_rip)) { svm->vcpu.arch.hflags &= ~(HF_NMI_MASK | HF_IRET_MASK); kvm_make_request(KVM_REQ_EVENT, &svm->vcpu); }
From patchwork Mon Sep 14 20:15:44 2020 From: Tom Lendacky To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh Subject: [RFC PATCH 30/35] KVM: SVM: Set the encryption mask for the SVM host save area Date: Mon, 14 Sep 2020 15:15:44 -0500
From: Tom Lendacky The SVM host save area is used to restore some host state on VMEXIT of an SEV-ES guest. After allocating the save area, clear it and add the encryption mask to the SVM host save area physical address that is programmed into the VM_HSAVE_PA MSR. Signed-off-by: Tom Lendacky --- arch/x86/kvm/svm/sev.c | 1 - arch/x86/kvm/svm/svm.c | 3 ++- arch/x86/kvm/svm/svm.h | 2 ++ 3 files changed, 4 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 9bf7411a4b5d..15be71b30e2a 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -30,7 +30,6 @@ unsigned int max_sev_asid; static unsigned int min_sev_asid; static unsigned long *sev_asid_bitmap; static unsigned long *sev_reclaim_asid_bitmap; -#define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT) struct enc_region { struct list_head list; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index fcd4f0d983e9..fcb59d0b3c52 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -478,7 +478,7 @@ static int svm_hardware_enable(void) wrmsrl(MSR_EFER, efer | EFER_SVME); - wrmsrl(MSR_VM_HSAVE_PA, page_to_pfn(sd->save_area) << PAGE_SHIFT); + wrmsrl(MSR_VM_HSAVE_PA, __sme_page_pa(sd->save_area)); if (static_cpu_has(X86_FEATURE_TSCRATEMSR)) { wrmsrl(MSR_AMD64_TSC_RATIO, TSC_RATIO_DEFAULT); @@ -546,6 +546,7 @@ static int svm_cpu_init(int cpu) sd->save_area = alloc_page(GFP_KERNEL); if (!sd->save_area) goto free_cpu_data; + clear_page(page_address(sd->save_area)); if (svm_sev_enabled()) { sd->sev_vmcbs = kmalloc_array(max_sev_asid + 1, diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index a0b226c90feb..e3b4b0368bd8 100644 ---
a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -21,6 +21,8 @@ #include +#define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT) + static const u32 host_save_user_msrs[] = { #ifdef CONFIG_X86_64 MSR_STAR, MSR_LSTAR, MSR_CSTAR, MSR_SYSCALL_MASK, MSR_KERNEL_GS_BASE, From patchwork Mon Sep 14 20:15:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Lendacky X-Patchwork-Id: 11774835 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7E306112E for ; Mon, 14 Sep 2020 20:26:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5E594215A4 for ; Mon, 14 Sep 2020 20:26:23 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amdcloud.onmicrosoft.com header.i=@amdcloud.onmicrosoft.com header.b="k3zzsAkV" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726250AbgINU0O (ORCPT ); Mon, 14 Sep 2020 16:26:14 -0400 Received: from mail-dm6nam11on2069.outbound.protection.outlook.com ([40.107.223.69]:56449 "EHLO NAM11-DM6-obe.outbound.protection.outlook.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726365AbgINUVo (ORCPT ); Mon, 14 Sep 2020 16:21:44 -0400 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=PUXgxgJHCdhSmyQF/KOypXs1zCbsgu2UDcpD71GPaAexWsfBDw2zC4fw9btksLLIoegQc66Ryn/OJpDe1DZzqURT5X8MM8OWL3xmMTvNmG7ijR4XvODOwBVU6OIG+8IozHEWs40Wb1wt2yGwK1zHTya9v7Qs9PhRpa7TjtnHfPQclvgWYj13/jvS9mRSDXVJ0AgLy1sFDBuMPbRsYih2dATrvkfwIRAtlQlqvKta2WebjD1lpC2IlEDni2TJSlmSUGnxmpMTXkiyVxMw4jkK3sCKVorytgW4bL7+lYJCQQkOGs6VowPnpNqve0+i1yRzndt6PQokZaPXPeeFlh3S3w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; 
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 31/35] KVM: SVM: Update ASID allocation to support SEV-ES guests
Date: Mon, 14 Sep 2020 15:15:45 -0500
X-Mailer: git-send-email 2.28.0

From: Tom Lendacky

SEV and SEV-ES guests each have dedicated ASID ranges. Update the ASID
allocation routine to return an ASID in the respective range.
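The range split described above can be pictured with a small standalone helper. This is illustrative only — the helper name and standalone form are hypothetical, not from the patch; the actual kernel code computes min_asid/max_asid inline in sev_asid_new():

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of the ASID range split: plain SEV guests allocate
 * from [min_sev_asid, max_sev_asid]; SEV-ES guests allocate from
 * [1, min_sev_asid - 1].  Inclusive bounds are returned via *first/*last.
 */
void sev_asid_range(bool es_active, int min_sev_asid, int max_sev_asid,
		    int *first, int *last)
{
	if (es_active) {
		*first = 1;
		*last = min_sev_asid - 1;
	} else {
		*first = min_sev_asid;
		*last = max_sev_asid;
	}
}
```

The two ranges are disjoint, which is what lets the hypervisor (and hardware) tell SEV and SEV-ES guests apart by ASID alone.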
Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/sev.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 15be71b30e2a..73d2a3f6c83c 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -61,19 +61,19 @@ static int sev_flush_asids(void)
 }
 
 /* Must be called with the sev_bitmap_lock held */
-static bool __sev_recycle_asids(void)
+static bool __sev_recycle_asids(int min_asid, int max_asid)
 {
 	int pos;
 
 	/* Check if there are any ASIDs to reclaim before performing a flush */
-	pos = find_next_bit(sev_reclaim_asid_bitmap,
-			    max_sev_asid, min_sev_asid - 1);
-	if (pos >= max_sev_asid)
+	pos = find_next_bit(sev_reclaim_asid_bitmap, max_sev_asid, min_asid);
+	if (pos >= max_asid)
 		return false;
 
 	if (sev_flush_asids())
 		return false;
 
+	/* The flush process will flush all reclaimable SEV and SEV-ES ASIDs */
 	bitmap_xor(sev_asid_bitmap, sev_asid_bitmap, sev_reclaim_asid_bitmap,
 		   max_sev_asid);
 	bitmap_zero(sev_reclaim_asid_bitmap, max_sev_asid);
@@ -81,20 +81,23 @@ static bool __sev_recycle_asids(void)
 	return true;
 }
 
-static int sev_asid_new(void)
+static int sev_asid_new(struct kvm_sev_info *sev)
 {
+	int pos, min_asid, max_asid;
 	bool retry = true;
-	int pos;
 
 	mutex_lock(&sev_bitmap_lock);
 
 	/*
-	 * SEV-enabled guest must use asid from min_sev_asid to max_sev_asid.
+	 * SEV-enabled guests must use asid from min_sev_asid to max_sev_asid.
+	 * SEV-ES-enabled guest can use from 1 to min_sev_asid - 1.
 	 */
+	min_asid = sev->es_active ? 0 : min_sev_asid - 1;
+	max_asid = sev->es_active ? min_sev_asid - 1 : max_sev_asid;
 again:
-	pos = find_next_zero_bit(sev_asid_bitmap, max_sev_asid, min_sev_asid - 1);
-	if (pos >= max_sev_asid) {
-		if (retry && __sev_recycle_asids()) {
+	pos = find_next_zero_bit(sev_asid_bitmap, max_sev_asid, min_asid);
+	if (pos >= max_asid) {
+		if (retry && __sev_recycle_asids(min_asid, max_asid)) {
 			retry = false;
 			goto again;
 		}
@@ -176,7 +179,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	if (unlikely(sev->active))
 		return ret;
 
-	asid = sev_asid_new();
+	asid = sev_asid_new(sev);
 	if (asid < 0)
 		return ret;

From patchwork Mon Sep 14 20:15:46 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774801
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 32/35] KVM: SVM: Provide support for SEV-ES vCPU creation/loading
Date: Mon, 14 Sep 2020 15:15:46 -0500
Message-Id: <1a1d0acfd879c11e567ac757656e6c5f03832472.1600114548.git.thomas.lendacky@amd.com>
X-Mailer: git-send-email 2.28.0
From: Tom Lendacky

An SEV-ES vCPU has additional VMCB initialization, vCPU creation, and vCPU load/put requirements. This includes:

General VMCB initialization changes:
- Set a VMCB control bit to enable SEV-ES support on the vCPU.
- Set the VMCB encrypted VM save area address.
- CRx registers are part of the encrypted register state and cannot be updated. Remove the CRx register read and write intercepts and replace them with CRx register write traps to track the CRx register values.
- Certain MSR values are part of the encrypted register state and cannot be updated. Remove certain MSR intercepts (EFER, CR_PAT, etc.).
- Remove the #GP intercept (no support for "enable_vmware_backdoor").
- Remove the XSETBV intercept since the hypervisor cannot modify XCR0.

General vCPU creation changes:
- Set the initial GHCB gpa value as per the GHCB specification.

General vCPU load changes:
- SEV-ES hardware will restore certain registers on VMEXIT, but not save them on VMRUN (see Table B-3 and Table B-4 of the AMD64 APM Volume 2). During vCPU loading, perform a VMSAVE to the per-CPU SVM save area and save the current value of XCR0 to the per-CPU SVM save area.
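The "initial GHCB gpa value" set at vCPU creation is the SEV information response the guest reads through the GHCB MSR protocol. The sketch below packs the fields per my reading of the GHCB specification — max protocol version in bits 63:48, min version in bits 47:32, encryption-bit position in bits 31:24, and the SEV-info response code 0x001 in the low 12 bits. Treat the exact field layout as an assumption to verify against the spec; the helper name is hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the GHCB MSR "SEV information" encoding.  The bit positions
 * are assumptions drawn from the GHCB specification, not from the patch.
 */
uint64_t ghcb_msr_sev_info(uint64_t max_ver, uint64_t min_ver,
			   uint64_t cbit_pos)
{
	return (max_ver << 48) | (min_ver << 32) | (cbit_pos << 24) | 0x001;
}
```

A guest that reads this MSR immediately after reset learns which GHCB protocol versions the hypervisor supports and where the memory-encryption bit sits in a physical address.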
Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/svm.h | 15 ++++++++++-
 arch/x86/kvm/svm/sev.c     | 54 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c     | 19 +++++++++++---
 arch/x86/kvm/svm/svm.h     |  3 +++
 4 files changed, 87 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 07b4ac1e7179..06bb3a83edce 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -53,6 +53,16 @@ enum {
 	INTERCEPT_MWAIT_COND,
 	INTERCEPT_XSETBV,
 	INTERCEPT_RDPRU,
+	TRAP_EFER_WRITE,
+	TRAP_CR0_WRITE,
+	TRAP_CR1_WRITE,
+	TRAP_CR2_WRITE,
+	TRAP_CR3_WRITE,
+	TRAP_CR4_WRITE,
+	TRAP_CR5_WRITE,
+	TRAP_CR6_WRITE,
+	TRAP_CR7_WRITE,
+	TRAP_CR8_WRITE,
 };
@@ -96,6 +106,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 	u8 reserved_6[8];	/* Offset 0xe8 */
 	u64 avic_logical_id;	/* Offset 0xf0 */
 	u64 avic_physical_id;	/* Offset 0xf8 */
+	u8 reserved_7[8];
+	u64 vmsa_pa;		/* Used for an SEV-ES guest */
 };
@@ -150,6 +162,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_NESTED_CTL_NP_ENABLE	BIT(0)
 #define SVM_NESTED_CTL_SEV_ENABLE	BIT(1)
+#define SVM_NESTED_CTL_SEV_ES_ENABLE	BIT(2)
 
 struct vmcb_seg {
 	u16 selector;
@@ -249,7 +262,7 @@ struct ghcb {
 static inline void __unused_size_checks(void)
 {
 	BUILD_BUG_ON(sizeof(struct vmcb_save_area) != 1032);
-	BUILD_BUG_ON(sizeof(struct vmcb_control_area) != 256);
+	BUILD_BUG_ON(sizeof(struct vmcb_control_area) != 272);
 	BUILD_BUG_ON(sizeof(struct ghcb) != 4096);
 }

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 73d2a3f6c83c..7ed88f2e8d93 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1545,3 +1545,57 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
 	ghcb_set_sw_exit_info_2(svm->ghcb, 1);
 	svm->ap_hlt_loop = false;
 }
+
+void sev_es_init_vmcb(struct vcpu_svm *svm)
+{
+	svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ES_ENABLE;
+	svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
+
+	/*
+	 * An SEV-ES guest requires a VMSA area that is separate from the
+	 * VMCB page. Do not include the encryption mask on the VMSA physical
+	 * address since hardware will access it using the guest key.
+	 */
+	svm->vmcb->control.vmsa_pa = __pa(svm->vmsa);
+
+	/* Can't intercept CR register access, HV can't modify CR registers */
+	clr_cr_intercept(svm, INTERCEPT_CR0_READ);
+	clr_cr_intercept(svm, INTERCEPT_CR4_READ);
+	clr_cr_intercept(svm, INTERCEPT_CR8_READ);
+	clr_cr_intercept(svm, INTERCEPT_CR0_WRITE);
+	clr_cr_intercept(svm, INTERCEPT_CR4_WRITE);
+	clr_cr_intercept(svm, INTERCEPT_CR8_WRITE);
+
+	svm_clr_intercept(svm, INTERCEPT_SELECTIVE_CR0);
+
+	/* Track EFER/CR register changes */
+	svm_set_intercept(svm, TRAP_EFER_WRITE);
+	svm_set_intercept(svm, TRAP_CR0_WRITE);
+	svm_set_intercept(svm, TRAP_CR4_WRITE);
+	svm_set_intercept(svm, TRAP_CR8_WRITE);
+
+	/* No support for enable_vmware_backdoor */
+	clr_exception_intercept(svm, GP_VECTOR);
+
+	/* Can't intercept XSETBV, HV can't modify XCR0 directly */
+	svm_clr_intercept(svm, INTERCEPT_XSETBV);
+
+	/* Clear intercepts on selected MSRs */
+	set_msr_interception(svm->msrpm, MSR_EFER, 1, 1);
+	set_msr_interception(svm->msrpm, MSR_IA32_CR_PAT, 1, 1);
+	set_msr_interception(svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 1, 1);
+	set_msr_interception(svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 1, 1);
+	set_msr_interception(svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1);
+	set_msr_interception(svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1);
+}
+
+void sev_es_create_vcpu(struct vcpu_svm *svm)
+{
+	/*
+	 * Set the GHCB MSR value as per the GHCB specification when creating
+	 * a vCPU for an SEV-ES guest.
+	 */
+	set_ghcb_msr(svm, GHCB_MSR_SEV_INFO(GHCB_VERSION_MAX,
+					    GHCB_VERSION_MIN,
+					    sev_enc_bit));
+}

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index fcb59d0b3c52..cb9b1d281adb 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -91,7 +91,7 @@ static DEFINE_PER_CPU(u64, current_tsc_ratio);
 static const struct svm_direct_access_msrs {
 	u32 index;	/* Index of the MSR */
-	bool always;	/* True if intercept is always on */
+	bool always;	/* True if intercept is initially cleared */
 } direct_access_msrs[] = {
 	{ .index = MSR_STAR,			.always = true  },
 	{ .index = MSR_IA32_SYSENTER_CS,	.always = true  },
@@ -109,6 +109,9 @@ static const struct svm_direct_access_msrs {
 	{ .index = MSR_IA32_LASTBRANCHTOIP,	.always = false },
 	{ .index = MSR_IA32_LASTINTFROMIP,	.always = false },
 	{ .index = MSR_IA32_LASTINTTOIP,	.always = false },
+	{ .index = MSR_EFER,			.always = false },
+	{ .index = MSR_IA32_CR_PAT,		.always = false },
+	{ .index = MSR_AMD64_SEV_ES_GHCB,	.always = true  },
 	{ .index = MSR_INVALID,			.always = false },
 };
@@ -598,8 +601,7 @@ static bool msr_write_intercepted(struct kvm_vcpu *vcpu, unsigned msr)
 	return !!test_bit(bit_write,  &tmp);
 }
 
-static void set_msr_interception(u32 *msrpm, unsigned msr,
-				 int read, int write)
+void set_msr_interception(u32 *msrpm, unsigned int msr, int read, int write)
 {
 	u8 bit_read, bit_write;
 	unsigned long tmp;
@@ -1147,6 +1149,11 @@ static void init_vmcb(struct vcpu_svm *svm)
 	if (sev_guest(svm->vcpu.kvm)) {
 		svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE;
 		clr_exception_intercept(svm, UD_VECTOR);
+
+		if (sev_es_guest(svm->vcpu.kvm)) {
+			/* Perform SEV-ES specific VMCB updates */
+			sev_es_init_vmcb(svm);
+		}
 	}
 
 	vmcb_mark_all_dirty(svm->vmcb);
@@ -1253,6 +1260,10 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	svm_init_osvw(vcpu);
 	vcpu->arch.microcode_version = 0x01000065;
 
+	if (sev_es_guest(svm->vcpu.kvm))
+		/* Perform SEV-ES specific VMCB creation updates */
+		sev_es_create_vcpu(svm);
+
 	return 0;
 
 free_page5:
@@ -1375,6 +1386,7 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 	loadsegment(gs, svm->host.gs);
 #endif
 #endif
+
 	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
 		wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 }
@@ -3039,6 +3051,7 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
 	pr_err("%-20s%016llx\n", "avic_backing_page:", control->avic_backing_page);
 	pr_err("%-20s%016llx\n", "avic_logical_id:", control->avic_logical_id);
 	pr_err("%-20s%016llx\n", "avic_physical_id:", control->avic_physical_id);
+	pr_err("%-20s%016llx\n", "vmsa_pa:", control->vmsa_pa);
 	pr_err("VMCB State Save Area:\n");
 	pr_err("%-5s s: %04x a: %04x l: %08x b: %016llx\n", "es:",

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index e3b4b0368bd8..465e14a7146f 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -412,6 +412,7 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu);
 bool svm_interrupt_blocked(struct kvm_vcpu *vcpu);
 void svm_set_gif(struct vcpu_svm *svm, bool value);
 int svm_invoke_exit_handler(struct vcpu_svm *svm, u64 exit_code);
+void set_msr_interception(u32 *msrpm, unsigned int msr, int read, int write);
 
 /* nested.c */
@@ -570,6 +571,8 @@ void sev_hardware_teardown(void);
 int sev_handle_vmgexit(struct vcpu_svm *svm);
 int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
+void sev_es_init_vmcb(struct vcpu_svm *svm);
+void sev_es_create_vcpu(struct vcpu_svm *svm);
 
 /* VMSA Accessor functions */

From patchwork Mon Sep 14 20:15:47 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774807
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini, Jim Mattson, Joerg Roedel, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Borislav Petkov, Ingo Molnar, Thomas Gleixner, Brijesh Singh
Subject: [RFC PATCH 33/35] KVM: SVM: Provide support for SEV-ES vCPU loading
Date: Mon, 14 Sep 2020 15:15:47 -0500
Message-Id: <3da36e824a62f2874b21dea496d50892611a8bdd.1600114548.git.thomas.lendacky@amd.com>
X-Mailer: git-send-email 2.28.0
From: Tom Lendacky

An SEV-ES vCPU has additional VMCB vCPU load/put requirements. SEV-ES hardware will restore certain registers on VMEXIT, but not save them on VMRUN (see Table B-3 and Table B-4 of the AMD64 APM Volume 2), so make the following changes:

General vCPU load changes:
- During vCPU loading, perform a VMSAVE to the per-CPU SVM save area and save the current value of XCR0 to the per-CPU SVM save area, as these registers will be restored on VMEXIT.

General vCPU put changes:
- Do not attempt to restore registers that SEV-ES hardware has already restored on VMEXIT.
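The put-side filtering can be pictured with a small userspace sketch. The struct and counting helper below are hypothetical, but they mirror the sev_es_restored flag the patch adds to each host_save_user_msrs entry:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical mirror of the per-MSR flag added by the patch. */
struct host_msr {
	unsigned int index;	/* MSR number */
	bool sev_es_restored;	/* hardware restores it on SEV-ES VMEXIT */
};

/*
 * Count the MSRs that still need software save/restore, i.e. the ones
 * SEV-ES hardware does not restore on VMEXIT (such as MSR_TSC_AUX).
 */
size_t msrs_needing_swsave(const struct host_msr *msrs, size_t n)
{
	size_t i, count = 0;

	for (i = 0; i < n; i++)
		if (!msrs[i].sev_es_restored)
			count++;
	return count;
}
```

In the patch itself, sev_es_vcpu_load()/sev_es_vcpu_put() apply exactly this filter: entries flagged sev_es_restored are skipped because VMEXIT already reinstated them.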
Signed-off-by: Tom Lendacky --- arch/x86/kvm/svm/sev.c | 48 ++++++++++++++++++++++++++++++++++++++++++ arch/x86/kvm/svm/svm.c | 36 +++++++++++++++++++------------ arch/x86/kvm/svm/svm.h | 22 +++++++++++++------ 3 files changed, 87 insertions(+), 19 deletions(-) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 7ed88f2e8d93..50018436863b 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -17,11 +17,14 @@ #include #include +#include #include "x86.h" #include "svm.h" #include "trace.h" +#define __ex(x) __kvm_handle_fault_on_reboot(x) + static u8 sev_enc_bit; static int sev_flush_asids(void); static DECLARE_RWSEM(sev_deactivate_lock); @@ -1599,3 +1602,48 @@ void sev_es_create_vcpu(struct vcpu_svm *svm) GHCB_VERSION_MIN, sev_enc_bit)); } + +void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu) +{ + struct svm_cpu_data *sd = per_cpu(svm_data, cpu); + struct vmcb_save_area *hostsa; + unsigned int i; + + /* + * As an SEV-ES guest, hardware will restore the host state on VMEXIT, + * of which one step is to perform a VMLOAD. Since hardware does not + * perform a VMSAVE on VMRUN, the host savearea must be updated. + */ + asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory"); + + /* + * Certain MSRs are restored on VMEXIT, only save ones that aren't + * saved via the vmsave above. + */ + for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++) { + if (host_save_user_msrs[i].sev_es_restored) + continue; + + rdmsrl(host_save_user_msrs[i].index, svm->host_user_msrs[i]); + } + + /* XCR0 is restored on VMEXIT, save the current host value */ + hostsa = (struct vmcb_save_area *)(page_address(sd->save_area) + 0x400); + hostsa->xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK); +} + +void sev_es_vcpu_put(struct vcpu_svm *svm) +{ + unsigned int i; + + /* + * Certain MSRs are restored on VMEXIT and were saved with vmsave in + * sev_es_vcpu_load() above. Only restore ones that weren't. 
+	 */
+	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++) {
+		if (host_save_user_msrs[i].sev_es_restored)
+			continue;
+
+		wrmsrl(host_save_user_msrs[i].index, svm->host_user_msrs[i]);
+	}
+}

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cb9b1d281adb..efefe8ba9759 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1340,15 +1340,20 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		vmcb_mark_all_dirty(svm->vmcb);
 	}

+	if (sev_es_guest(svm->vcpu.kvm)) {
+		sev_es_vcpu_load(svm, cpu);
+	} else {
 #ifdef CONFIG_X86_64
-	rdmsrl(MSR_GS_BASE, to_svm(vcpu)->host.gs_base);
+		rdmsrl(MSR_GS_BASE, to_svm(vcpu)->host.gs_base);
 #endif
-	savesegment(fs, svm->host.fs);
-	savesegment(gs, svm->host.gs);
-	svm->host.ldt = kvm_read_ldt();
+		savesegment(fs, svm->host.fs);
+		savesegment(gs, svm->host.gs);
+		svm->host.ldt = kvm_read_ldt();

-	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-		rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
+		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
+			rdmsrl(host_save_user_msrs[i].index,
+			       svm->host_user_msrs[i]);
+	}

 	if (static_cpu_has(X86_FEATURE_TSCRATEMSR)) {
 		u64 tsc_ratio = vcpu->arch.tsc_scaling_ratio;
@@ -1376,19 +1381,24 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 		avic_vcpu_put(vcpu);

 	++vcpu->stat.host_state_reload;
-	kvm_load_ldt(svm->host.ldt);
+	if (sev_es_guest(svm->vcpu.kvm)) {
+		sev_es_vcpu_put(svm);
+	} else {
+		kvm_load_ldt(svm->host.ldt);
 #ifdef CONFIG_X86_64
-	loadsegment(fs, svm->host.fs);
-	wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gsbase);
-	load_gs_index(svm->host.gs);
+		loadsegment(fs, svm->host.fs);
+		wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gsbase);
+		load_gs_index(svm->host.gs);
 #else
 #ifdef CONFIG_X86_32_LAZY_GS
-	loadsegment(gs, svm->host.gs);
+		loadsegment(gs, svm->host.gs);
 #endif
 #endif

-	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-		wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
+		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
+			wrmsrl(host_save_user_msrs[i].index,
+			       svm->host_user_msrs[i]);
+	}
 }

 static unsigned long svm_get_rflags(struct kvm_vcpu *vcpu)

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 465e14a7146f..0812d70085d7 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -23,15 +23,23 @@

 #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT)

-static const u32 host_save_user_msrs[] = {
+static const struct svm_host_save_msrs {
+	u32 index;		/* Index of the MSR */
+	bool sev_es_restored;	/* True if MSR is restored on SEV-ES VMEXIT */
+} host_save_user_msrs[] = {
 #ifdef CONFIG_X86_64
-	MSR_STAR, MSR_LSTAR, MSR_CSTAR, MSR_SYSCALL_MASK, MSR_KERNEL_GS_BASE,
-	MSR_FS_BASE,
+	{ .index = MSR_STAR,			.sev_es_restored = true },
+	{ .index = MSR_LSTAR,			.sev_es_restored = true },
+	{ .index = MSR_CSTAR,			.sev_es_restored = true },
+	{ .index = MSR_SYSCALL_MASK,		.sev_es_restored = true },
+	{ .index = MSR_KERNEL_GS_BASE,		.sev_es_restored = true },
+	{ .index = MSR_FS_BASE,			.sev_es_restored = true },
 #endif
-	MSR_IA32_SYSENTER_CS, MSR_IA32_SYSENTER_ESP, MSR_IA32_SYSENTER_EIP,
-	MSR_TSC_AUX,
+	{ .index = MSR_IA32_SYSENTER_CS,	.sev_es_restored = true },
+	{ .index = MSR_IA32_SYSENTER_ESP,	.sev_es_restored = true },
+	{ .index = MSR_IA32_SYSENTER_EIP,	.sev_es_restored = true },
+	{ .index = MSR_TSC_AUX,			.sev_es_restored = false },
 };
-
 #define NR_HOST_SAVE_USER_MSRS ARRAY_SIZE(host_save_user_msrs)

 #define MSRPM_OFFSETS 16
@@ -573,6 +581,8 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
 void sev_es_init_vmcb(struct vcpu_svm *svm);
 void sev_es_create_vcpu(struct vcpu_svm *svm);
+void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu);
+void sev_es_vcpu_put(struct vcpu_svm *svm);

 /* VMSA Accessor functions */

From patchwork Mon Sep 14 20:15:48 2020
X-Patchwork-Submitter:
Tom Lendacky
X-Patchwork-Id: 11774813
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini , Jim Mattson , Joerg Roedel , Sean Christopherson ,
    Vitaly Kuznetsov , Wanpeng Li , Borislav Petkov , Ingo Molnar ,
    Thomas Gleixner , Brijesh Singh
Subject: [RFC PATCH 34/35] KVM: SVM: Provide an updated VMRUN invocation for
    SEV-ES guests
Date: Mon, 14 Sep 2020 15:15:48 -0500
X-Mailing-List: kvm@vger.kernel.org

From: Tom Lendacky

The guest vCPU register state of an SEV-ES guest will be restored on VMRUN
and saved on VMEXIT. Therefore, there is no need to restore the guest
registers directly and through VMLOAD before VMRUN, and no need to save the
guest registers directly and through VMSAVE on VMEXIT.
Update the svm_vcpu_run() function to skip register state saving and
restoring, and provide an alternative function for running an SEV-ES guest
in vmenter.S.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/svm.c     | 36 +++++++++++++++++----------
 arch/x86/kvm/svm/svm.h     |  5 ++++
 arch/x86/kvm/svm/vmenter.S | 50 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 78 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index efefe8ba9759..5e5f67dd293a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3640,16 +3640,20 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	guest_enter_irqoff();
 	lockdep_hardirqs_on(CALLER_ADDR0);

-	__svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs);
+	if (sev_es_guest(svm->vcpu.kvm)) {
+		__svm_sev_es_vcpu_run(svm->vmcb_pa);
+	} else {
+		__svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs);

 #ifdef CONFIG_X86_64
-	native_wrmsrl(MSR_GS_BASE, svm->host.gs_base);
+		native_wrmsrl(MSR_GS_BASE, svm->host.gs_base);
 #else
-	loadsegment(fs, svm->host.fs);
+		loadsegment(fs, svm->host.fs);
 #ifndef CONFIG_X86_32_LAZY_GS
-	loadsegment(gs, svm->host.gs);
+		loadsegment(gs, svm->host.gs);
 #endif
 #endif
+	}

 	/*
 	 * VMEXIT disables interrupts (host state), but tracing and lockdep
@@ -3676,9 +3680,11 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	fastpath_t exit_fastpath;
 	struct vcpu_svm *svm = to_svm(vcpu);

-	svm_rax_write(svm, vcpu->arch.regs[VCPU_REGS_RAX]);
-	svm_rsp_write(svm, vcpu->arch.regs[VCPU_REGS_RSP]);
-	svm_rip_write(svm, vcpu->arch.regs[VCPU_REGS_RIP]);
+	if (!sev_es_guest(svm->vcpu.kvm)) {
+		svm_rax_write(svm, vcpu->arch.regs[VCPU_REGS_RAX]);
+		svm_rsp_write(svm, vcpu->arch.regs[VCPU_REGS_RSP]);
+		svm_rip_write(svm, vcpu->arch.regs[VCPU_REGS_RIP]);
+	}

 	/*
 	 * Disable singlestep if we're injecting an interrupt/exception.
@@ -3700,7 +3706,8 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)

 	sync_lapic_to_cr8(vcpu);

-	svm_cr2_write(svm, vcpu->arch.cr2);
+	if (!sev_es_guest(svm->vcpu.kvm))
+		svm_cr2_write(svm, vcpu->arch.cr2);

 	/*
 	 * Run with all-zero DR6 unless needed, so that we can get the exact cause
@@ -3748,14 +3755,17 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
 		svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);

-	reload_tss(vcpu);
+	if (!sev_es_guest(svm->vcpu.kvm))
+		reload_tss(vcpu);

 	x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);

-	vcpu->arch.cr2 = svm_cr2_read(svm);
-	vcpu->arch.regs[VCPU_REGS_RAX] = svm_rax_read(svm);
-	vcpu->arch.regs[VCPU_REGS_RSP] = svm_rsp_read(svm);
-	vcpu->arch.regs[VCPU_REGS_RIP] = svm_rip_read(svm);
+	if (!sev_es_guest(svm->vcpu.kvm)) {
+		vcpu->arch.cr2 = svm_cr2_read(svm);
+		vcpu->arch.regs[VCPU_REGS_RAX] = svm_rax_read(svm);
+		vcpu->arch.regs[VCPU_REGS_RSP] = svm_rsp_read(svm);
+		vcpu->arch.regs[VCPU_REGS_RIP] = svm_rip_read(svm);
+	}

 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_before_interrupt(&svm->vcpu);

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 0812d70085d7..1405ea3549b8 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -584,6 +584,11 @@ void sev_es_create_vcpu(struct vcpu_svm *svm);
 void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu);
 void sev_es_vcpu_put(struct vcpu_svm *svm);

+/* vmenter.S */
+
+void __svm_sev_es_vcpu_run(unsigned long vmcb_pa);
+void __svm_vcpu_run(unsigned long vmcb_pa, unsigned long *regs);
+
 /* VMSA Accessor functions */

 static inline struct vmcb_save_area *get_vmsa(struct vcpu_svm *svm)

diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
index 1ec1ac40e328..6feb8c08f45a 100644
--- a/arch/x86/kvm/svm/vmenter.S
+++ b/arch/x86/kvm/svm/vmenter.S
@@ -168,3 +168,53 @@ SYM_FUNC_START(__svm_vcpu_run)
 	pop %_ASM_BP
 	ret
 SYM_FUNC_END(__svm_vcpu_run)
+
+/**
+ * __svm_sev_es_vcpu_run - Run a SEV-ES vCPU via a transition to SVM guest mode
+ * @vmcb_pa: unsigned long
+ */
+SYM_FUNC_START(__svm_sev_es_vcpu_run)
+	push %_ASM_BP
+#ifdef CONFIG_X86_64
+	push %r15
+	push %r14
+	push %r13
+	push %r12
+#else
+	push %edi
+	push %esi
+#endif
+	push %_ASM_BX
+
+	/* Enter guest mode */
+	mov %_ASM_ARG1, %_ASM_AX
+	sti
+
+1:	vmrun %_ASM_AX
+	jmp 3f
+2:	cmpb $0, kvm_rebooting
+	jne 3f
+	ud2
+	_ASM_EXTABLE(1b, 2b)
+
+3:	cli
+
+#ifdef CONFIG_RETPOLINE
+	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
+	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
+#endif
+
+	pop %_ASM_BX
+
+#ifdef CONFIG_X86_64
+	pop %r12
+	pop %r13
+	pop %r14
+	pop %r15
+#else
+	pop %esi
+	pop %edi
+#endif
+	pop %_ASM_BP
+	ret
+SYM_FUNC_END(__svm_sev_es_vcpu_run)

From patchwork Mon Sep 14 20:15:49 2020
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 11774833
From: Tom Lendacky
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Paolo Bonzini , Jim Mattson , Joerg Roedel , Sean Christopherson ,
    Vitaly Kuznetsov , Wanpeng Li , Borislav Petkov , Ingo Molnar ,
    Thomas Gleixner , Brijesh Singh
Subject: [RFC PATCH 35/35] KVM: SVM: Provide support to launch and run an
    SEV-ES guest
Date: Mon, 14 Sep 2020 15:15:49 -0500
X-Mailing-List: kvm@vger.kernel.org

An SEV-ES guest requires some additional steps to be launched as compared to an
SEV guest:

- Implement additional VMCB initialization requirements for SEV-ES.
- Update MSR_VM_HSAVE_PA to include the encryption bit if SME is active.
- Add additional MSRs to the list of direct access MSRs so that the
  intercepts can be disabled.
- Measure all vCPUs using the LAUNCH_UPDATE_VMSA SEV command after all calls
  to LAUNCH_UPDATE_DATA have been performed, but before the call to
  LAUNCH_MEASURE has been performed.
- Use VMSAVE to save host information that is not saved on VMRUN but is
  restored on VMEXIT.
- Modify the VMRUN path to eliminate guest register state restoring and
  saving.

At this point the guest can be run. However, the run sequence is different
for an SEV-ES guest compared to a normal or even an SEV guest. Because the
guest register state is encrypted, it is all saved as part of VMRUN/VMEXIT
and does not require restoring before or saving after a VMRUN instruction.
As a result, all that is required to perform a VMRUN is to save the RBP and
RAX registers, issue the VMRUN and then restore RAX and RBP.

Additionally, certain state is automatically saved and restored with an
SEV-ES VMRUN. As a result, certain register state is not required to be
restored upon VMEXIT (e.g. FS, GS, etc.), so only do that if the guest is
not an SEV-ES guest.
Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/sev.c | 60 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 50018436863b..eaa669c16345 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -201,6 +201,16 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }

+static int sev_es_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	if (!sev_es)
+		return -ENOTTY;
+
+	to_kvm_svm(kvm)->sev_info.es_active = true;
+
+	return sev_guest_init(kvm, argp);
+}
+
 static int sev_bind_asid(struct kvm *kvm, unsigned int handle, int *error)
 {
 	struct sev_data_activate *data;
@@ -501,6 +511,50 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }

+static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_data_launch_update_vmsa *vmsa;
+	int i, ret;
+
+	if (!sev_es_guest(kvm))
+		return -ENOTTY;
+
+	vmsa = kzalloc(sizeof(*vmsa), GFP_KERNEL);
+	if (!vmsa)
+		return -ENOMEM;
+
+	for (i = 0; i < kvm->created_vcpus; i++) {
+		struct vcpu_svm *svm = to_svm(kvm->vcpus[i]);
+		struct vmcb_save_area *save = get_vmsa(svm);
+
+		/* Set XCR0 before encrypting */
+		save->xcr0 = svm->vcpu.arch.xcr0;
+
+		/*
+		 * The LAUNCH_UPDATE_VMSA command will perform in-place
+		 * encryption of the VMSA memory content (i.e. it will write
+		 * the same memory region with the guest's key), so invalidate
+		 * it first.
+		 */
+		clflush_cache_range(svm->vmsa, PAGE_SIZE);
+
+		vmsa->handle = sev->handle;
+		vmsa->address = __sme_pa(svm->vmsa);
+		vmsa->len = PAGE_SIZE;
+		ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, vmsa,
+				    &argp->error);
+		if (ret)
+			goto e_free;
+
+		svm->vcpu.arch.vmsa_encrypted = true;
+	}
+
+e_free:
+	kfree(vmsa);
+	return ret;
+}
+
 static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp)
 {
 	void __user *measure = (void __user *)(uintptr_t)argp->data;
@@ -948,12 +1002,18 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_INIT:
 		r = sev_guest_init(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_ES_INIT:
+		r = sev_es_guest_init(kvm, &sev_cmd);
+		break;
 	case KVM_SEV_LAUNCH_START:
 		r = sev_launch_start(kvm, &sev_cmd);
 		break;
 	case KVM_SEV_LAUNCH_UPDATE_DATA:
 		r = sev_launch_update_data(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_LAUNCH_UPDATE_VMSA:
+		r = sev_launch_update_vmsa(kvm, &sev_cmd);
+		break;
 	case KVM_SEV_LAUNCH_MEASURE:
 		r = sev_launch_measure(kvm, &sev_cmd);
 		break;