From patchwork Tue Jun 28 11:38:46 2022
X-Patchwork-Submitter: Manali Shukla
X-Patchwork-Id: 12898121
From: Manali Shukla
Subject: [kvm-unit-tests PATCH v5 1/8] x86: nSVM: Move common functionality of the main() to helper run_svm_tests
Date: Tue, 28 Jun 2022 11:38:46 +0000
Message-ID: <20220628113853.392569-2-manali.shukla@amd.com>
In-Reply-To: <20220628113853.392569-1-manali.shukla@amd.com>
X-Mailing-List: kvm@vger.kernel.org

Move the common functionality of main() into a new helper, run_svm_tests(),
so that the nNPT tests can be moved to their own file and the remaining
test cases can run without the nNPT tests fiddling with the page tables
midway.

The quick-and-dirty approach is to turn the current main() into a small
helper, minus its call to __setup_vm(), and to call the new helper
run_svm_tests() from main().

No functional change intended.

Suggested-by: Sean Christopherson
Signed-off-by: Manali Shukla
---
 x86/svm.c | 14 +++++++++-----
 x86/svm.h |  1 +
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index 93794fd..36ba05e 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -397,17 +397,13 @@ test_wanted(const char *name, char *filters[], int filter_count)
         }
 }
 
-int main(int ac, char **av)
+int run_svm_tests(int ac, char **av)
 {
-        /* Omit PT_USER_MASK to allow tested host.CR4.SMEP=1. */
-        pteval_t opt_mask = 0;
         int i = 0;
 
         ac--;
         av++;
 
-        __setup_vm(&opt_mask);
-
         if (!this_cpu_has(X86_FEATURE_SVM)) {
                 printf("SVM not available\n");
                 return report_summary();
@@ -444,3 +440,11 @@ int main(int ac, char **av)
 
         return report_summary();
 }
+
+int main(int ac, char **av)
+{
+        pteval_t opt_mask = 0;
+
+        __setup_vm(&opt_mask);
+        return run_svm_tests(ac, av);
+}
diff --git a/x86/svm.h b/x86/svm.h
index e93822b..123e64f 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -403,6 +403,7 @@ struct regs {
 
 typedef void (*test_guest_func)(struct svm_test *);
 
+int run_svm_tests(int ac, char **av);
 u64 *npt_get_pte(u64 address);
 u64 *npt_get_pde(u64 address);
 u64 *npt_get_pdpe(void);
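The opt_mask handling above is what the rest of the series builds on:
__setup_vm() ORs the given option bits into every PTE it installs, so
passing 0 (omitting PT_USER_MASK) yields supervisor-only mappings. A
minimal sketch of why that matters for tests that set host CR4.SMEP=1;
the helper name below is illustrative, not part of the series:

#include "vm.h"
#include "processor.h"

/*
 * Sketch only: with CR4.SMEP=1, supervisor-mode instruction fetches from
 * a user-accessible page (PTE.US=1) fault, so host code may enable SMEP
 * only if its own mappings lack the USER bit.
 */
static void setup_vm_without_user_bit(void)
{
        pteval_t opt_mask = 0;  /* no PT_USER_MASK => supervisor-only PTEs */

        __setup_vm(&opt_mask);

        /* Safe: no PTE is user-accessible, so SMEP cannot trip on host code. */
        write_cr4(read_cr4() | X86_CR4_SMEP);
}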
From patchwork Tue Jun 28 11:38:47 2022
X-Patchwork-Submitter: Manali Shukla
X-Patchwork-Id: 12898122
From: Manali Shukla
Subject: [kvm-unit-tests PATCH v5 2/8] x86: nSVM: Move all nNPT test cases from svm_tests.c to a separate file
Date: Tue, 28 Jun 2022 11:38:47 +0000
Message-ID: <20220628113853.392569-3-manali.shukla@amd.com>
In-Reply-To: <20220628113853.392569-1-manali.shukla@amd.com>
X-Mailing-List: kvm@vger.kernel.org

Move the nNPT testcases to their own test and file, svm_npt.c, so that
the nNPT tests can run without the USER bit set in the host PTEs (in
order to toggle CR4.SMEP) without preventing the other nSVM testcases
from running code in usermode.
Suggested-by: Sean Christopherson
Signed-off-by: Manali Shukla
---
 x86/Makefile.common |   2 +
 x86/Makefile.x86_64 |   2 +
 x86/svm.c           |   8 -
 x86/svm_npt.c       | 390 ++++++++++++++++++++++++++++++++++++++++++++
 x86/svm_tests.c     | 371 +---------------------------------------
 x86/unittests.cfg   |   6 +
 6 files changed, 409 insertions(+), 370 deletions(-)
 create mode 100644 x86/svm_npt.c

diff --git a/x86/Makefile.common b/x86/Makefile.common
index a600c72..b7010e2 100644
--- a/x86/Makefile.common
+++ b/x86/Makefile.common
@@ -108,6 +108,8 @@ $(TEST_DIR)/access_test.$(bin): $(TEST_DIR)/access.o
 
 $(TEST_DIR)/vmx.$(bin): $(TEST_DIR)/access.o
 
+$(TEST_DIR)/svm_npt.$(bin): $(TEST_DIR)/svm.o
+
 $(TEST_DIR)/kvmclock_test.$(bin): $(TEST_DIR)/kvmclock.o
 
 $(TEST_DIR)/hyperv_synic.$(bin): $(TEST_DIR)/hyperv.o
diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index e19284a..8f9463c 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -44,6 +44,7 @@ endif
 ifneq ($(CONFIG_EFI),y)
 tests += $(TEST_DIR)/access_test.$(exe)
 tests += $(TEST_DIR)/svm.$(exe)
+tests += $(TEST_DIR)/svm_npt.$(exe)
 tests += $(TEST_DIR)/vmx.$(exe)
 endif
 
@@ -57,3 +58,4 @@ $(TEST_DIR)/hyperv_clock.$(bin): $(TEST_DIR)/hyperv_clock.o
 
 $(TEST_DIR)/vmx.$(bin): $(TEST_DIR)/vmx_tests.o
 $(TEST_DIR)/svm.$(bin): $(TEST_DIR)/svm_tests.o
+$(TEST_DIR)/svm_npt.$(bin): $(TEST_DIR)/svm_npt.o
diff --git a/x86/svm.c b/x86/svm.c
index 36ba05e..b586807 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -440,11 +440,3 @@ int run_svm_tests(int ac, char **av)
 
         return report_summary();
 }
-
-int main(int ac, char **av)
-{
-        pteval_t opt_mask = 0;
-
-        __setup_vm(&opt_mask);
-        return run_svm_tests(ac, av);
-}
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
new file mode 100644
index 0000000..53e8a90
--- /dev/null
+++ b/x86/svm_npt.c
@@ -0,0 +1,390 @@
+#include "svm.h"
+#include "vm.h"
+#include "alloc_page.h"
+#include "vmalloc.h"
+
+static void *scratch_page;
+
+static void null_test(struct svm_test *test)
+{
+}
+
+static void npt_np_prepare(struct svm_test *test)
+{
+        u64 *pte;
+
+        scratch_page = alloc_page();
+        pte = npt_get_pte((u64)scratch_page);
+
+        *pte &= ~1ULL;
+}
+
+static void npt_np_test(struct svm_test *test)
+{
+        (void)*(volatile u64 *)scratch_page;
+}
+
+static bool npt_np_check(struct svm_test *test)
+{
+        u64 *pte = npt_get_pte((u64)scratch_page);
+
+        *pte |= 1ULL;
+
+        return (vmcb->control.exit_code == SVM_EXIT_NPF)
+            && (vmcb->control.exit_info_1 == 0x100000004ULL);
+}
+
+static void npt_nx_prepare(struct svm_test *test)
+{
+        u64 *pte;
+
+        test->scratch = rdmsr(MSR_EFER);
+        wrmsr(MSR_EFER, test->scratch | EFER_NX);
+
+        /* Clear the guest's EFER.NX, it should not affect NPT behavior. */
+        vmcb->save.efer &= ~EFER_NX;
+
+        pte = npt_get_pte((u64)null_test);
+
+        *pte |= PT64_NX_MASK;
+}
+
+static bool npt_nx_check(struct svm_test *test)
+{
+        u64 *pte = npt_get_pte((u64)null_test);
+
+        wrmsr(MSR_EFER, test->scratch);
+
+        *pte &= ~PT64_NX_MASK;
+
+        return (vmcb->control.exit_code == SVM_EXIT_NPF)
+            && (vmcb->control.exit_info_1 == 0x100000015ULL);
+}
+
+static void npt_us_prepare(struct svm_test *test)
+{
+        u64 *pte;
+
+        scratch_page = alloc_page();
+        pte = npt_get_pte((u64)scratch_page);
+
+        *pte &= ~(1ULL << 2);
+}
+
+static void npt_us_test(struct svm_test *test)
+{
+        (void)*(volatile u64 *)scratch_page;
+}
+
+static bool npt_us_check(struct svm_test *test)
+{
+        u64 *pte = npt_get_pte((u64)scratch_page);
+
+        *pte |= (1ULL << 2);
+
+        return (vmcb->control.exit_code == SVM_EXIT_NPF)
+            && (vmcb->control.exit_info_1 == 0x100000005ULL);
+}
+
+static void npt_rw_prepare(struct svm_test *test)
+{
+        u64 *pte;
+
+        pte = npt_get_pte(0x80000);
+
+        *pte &= ~(1ULL << 1);
+}
+
+static void npt_rw_test(struct svm_test *test)
+{
+        u64 *data = (void *)(0x80000);
+
+        *data = 0;
+}
+
+static bool npt_rw_check(struct svm_test *test)
+{
+        u64 *pte = npt_get_pte(0x80000);
+
+        *pte |= (1ULL << 1);
+
+        return (vmcb->control.exit_code == SVM_EXIT_NPF)
+            && (vmcb->control.exit_info_1 == 0x100000007ULL);
+}
+
+static void npt_rw_pfwalk_prepare(struct svm_test *test)
+{
+        u64 *pte;
+
+        pte = npt_get_pte(read_cr3());
+
+        *pte &= ~(1ULL << 1);
+}
+
+static bool npt_rw_pfwalk_check(struct svm_test *test)
+{
+        u64 *pte = npt_get_pte(read_cr3());
+
+        *pte |= (1ULL << 1);
+
+        return (vmcb->control.exit_code == SVM_EXIT_NPF)
+            && (vmcb->control.exit_info_1 == 0x200000007ULL)
+            && (vmcb->control.exit_info_2 == read_cr3());
+}
+
+static void npt_l1mmio_prepare(struct svm_test *test)
+{
+}
+
+u32 nested_apic_version1;
+u32 nested_apic_version2;
+
+static void npt_l1mmio_test(struct svm_test *test)
+{
+        volatile u32 *data = (volatile void *)(0xfee00030UL);
+
+        nested_apic_version1 = *data;
+        nested_apic_version2 = *data;
+}
+
+static bool npt_l1mmio_check(struct svm_test *test)
+{
+        volatile u32 *data = (volatile void *)(0xfee00030);
+        u32 lvr = *data;
+
+        return nested_apic_version1 == lvr && nested_apic_version2 == lvr;
+}
+
+static void npt_rw_l1mmio_prepare(struct svm_test *test)
+{
+        u64 *pte;
+
+        pte = npt_get_pte(0xfee00080);
+
+        *pte &= ~(1ULL << 1);
+}
+
+static void npt_rw_l1mmio_test(struct svm_test *test)
+{
+        volatile u32 *data = (volatile void *)(0xfee00080);
+
+        *data = *data;
+}
+
+static bool npt_rw_l1mmio_check(struct svm_test *test)
+{
+        u64 *pte = npt_get_pte(0xfee00080);
+
+        *pte |= (1ULL << 1);
+
+        return (vmcb->control.exit_code == SVM_EXIT_NPF)
+            && (vmcb->control.exit_info_1 == 0x100000007ULL);
+}
+
+static void basic_guest_main(struct svm_test *test)
+{
+}
+
+static void __svm_npt_rsvd_bits_test(u64 *pxe, u64 rsvd_bits, u64 efer,
+                                     ulong cr4, u64 guest_efer, ulong guest_cr4)
+{
+        u64 pxe_orig = *pxe;
+        int exit_reason;
+        u64 pfec;
+
+        wrmsr(MSR_EFER, efer);
+        write_cr4(cr4);
+
+        vmcb->save.efer = guest_efer;
+        vmcb->save.cr4 = guest_cr4;
+
+        *pxe |= rsvd_bits;
+
+        exit_reason = svm_vmrun();
+
+        report(exit_reason == SVM_EXIT_NPF,
+               "Wanted #NPF on rsvd bits = 0x%lx, got exit = 0x%x", rsvd_bits,
+               exit_reason);
+
+        if (pxe == npt_get_pdpe() || pxe == npt_get_pml4e()) {
+                /*
+                 * The guest's page tables will blow up on a bad PDPE/PML4E,
+                 * before starting the final walk of the guest page.
+                 */
+                pfec = 0x20000000full;
+        } else {
+                /* RSVD #NPF on final walk of guest page. */
+                pfec = 0x10000000dULL;
+
+                /* PFEC.FETCH=1 if NX=1 *or* SMEP=1. */
+                if ((cr4 & X86_CR4_SMEP) || (efer & EFER_NX))
+                        pfec |= 0x10;
+        }
+
+        report(vmcb->control.exit_info_1 == pfec,
+               "Wanted PFEC = 0x%lx, got PFEC = %lx, PxE = 0x%lx. "
+               "host.NX = %u, host.SMEP = %u, guest.NX = %u, guest.SMEP = %u",
+               pfec, vmcb->control.exit_info_1, *pxe,
+               !!(efer & EFER_NX), !!(cr4 & X86_CR4_SMEP),
+               !!(guest_efer & EFER_NX), !!(guest_cr4 & X86_CR4_SMEP));
+
+        *pxe = pxe_orig;
+}
+
+static void _svm_npt_rsvd_bits_test(u64 *pxe, u64 pxe_rsvd_bits, u64 efer,
+                                    ulong cr4, u64 guest_efer, ulong guest_cr4)
+{
+        u64 rsvd_bits;
+        int i;
+
+        /*
+         * RDTSC or RDRAND can sometimes fail to generate valid reserved bits.
+         */
+        if (!pxe_rsvd_bits) {
+                report_skip("svm_npt_rsvd_bits_test: Reserved bits are not valid");
+                return;
+        }
+
+        /*
+         * Test all combinations of guest/host EFER.NX and CR4.SMEP.  If host
+         * EFER.NX=0, use NX as the reserved bit, otherwise use the passed in
+         * @pxe_rsvd_bits.
+         */
+        for (i = 0; i < 16; i++) {
+                if (i & 1) {
+                        rsvd_bits = pxe_rsvd_bits;
+                        efer |= EFER_NX;
+                } else {
+                        rsvd_bits = PT64_NX_MASK;
+                        efer &= ~EFER_NX;
+                }
+                if (i & 2)
+                        cr4 |= X86_CR4_SMEP;
+                else
+                        cr4 &= ~X86_CR4_SMEP;
+                if (i & 4)
+                        guest_efer |= EFER_NX;
+                else
+                        guest_efer &= ~EFER_NX;
+                if (i & 8)
+                        guest_cr4 |= X86_CR4_SMEP;
+                else
+                        guest_cr4 &= ~X86_CR4_SMEP;
+
+                __svm_npt_rsvd_bits_test(pxe, rsvd_bits, efer, cr4,
+                                         guest_efer, guest_cr4);
+        }
+}
+
+static u64 get_random_bits(u64 hi, u64 low)
+{
+        unsigned retry = 5;
+        u64 rsvd_bits = 0;
+
+        if (this_cpu_has(X86_FEATURE_RDRAND)) {
+                do {
+                        rsvd_bits = (rdrand() << low) & GENMASK_ULL(hi, low);
+                        retry--;
+                } while (!rsvd_bits && retry);
+        }
+
+        if (!rsvd_bits) {
+                retry = 5;
+                do {
+                        rsvd_bits = (rdtsc() << low) & GENMASK_ULL(hi, low);
+                        retry--;
+                } while (!rsvd_bits && retry);
+        }
+
+        return rsvd_bits;
+}
+
+static void svm_npt_rsvd_bits_test(void)
+{
+        u64 saved_efer, host_efer, sg_efer, guest_efer;
+        ulong saved_cr4, host_cr4, sg_cr4, guest_cr4;
+
+        if (!npt_supported()) {
+                report_skip("NPT not supported");
+                return;
+        }
+
+        saved_efer = host_efer = rdmsr(MSR_EFER);
+        saved_cr4 = host_cr4 = read_cr4();
+        sg_efer = guest_efer = vmcb->save.efer;
+        sg_cr4 = guest_cr4 = vmcb->save.cr4;
+
+        test_set_guest(basic_guest_main);
+
+        /*
+         * 4k PTEs don't have reserved bits if MAXPHYADDR >= 52, just skip the
+         * sub-test.  The NX test is still valid, but the extra bit of coverage
+         * isn't worth the extra complexity.
+         */
+        if (cpuid_maxphyaddr() >= 52)
+                goto skip_pte_test;
+
+        _svm_npt_rsvd_bits_test(npt_get_pte((u64)basic_guest_main),
+                                get_random_bits(51, cpuid_maxphyaddr()),
+                                host_efer, host_cr4, guest_efer, guest_cr4);
+
+skip_pte_test:
+        _svm_npt_rsvd_bits_test(npt_get_pde((u64)basic_guest_main),
+                                get_random_bits(20, 13) | PT_PAGE_SIZE_MASK,
+                                host_efer, host_cr4, guest_efer, guest_cr4);
+
+        _svm_npt_rsvd_bits_test(npt_get_pdpe(),
+                                PT_PAGE_SIZE_MASK |
+                                (this_cpu_has(X86_FEATURE_GBPAGES) ?
+                                 get_random_bits(29, 13) : 0), host_efer,
+                                host_cr4, guest_efer, guest_cr4);
+
+        _svm_npt_rsvd_bits_test(npt_get_pml4e(), BIT_ULL(8),
+                                host_efer, host_cr4, guest_efer, guest_cr4);
+
+        wrmsr(MSR_EFER, saved_efer);
+        write_cr4(saved_cr4);
+        vmcb->save.efer = sg_efer;
+        vmcb->save.cr4 = sg_cr4;
+}
+
+int main(int ac, char **av)
+{
+        pteval_t opt_mask = 0;
+
+        __setup_vm(&opt_mask);
+        return run_svm_tests(ac, av);
+}
+
+#define TEST(name) { #name, .v2 = name }
+
+struct svm_test svm_tests[] = {
+        { "npt_nx", npt_supported, npt_nx_prepare,
+          default_prepare_gif_clear, null_test,
+          default_finished, npt_nx_check },
+        { "npt_np", npt_supported, npt_np_prepare,
+          default_prepare_gif_clear, npt_np_test,
+          default_finished, npt_np_check },
+        { "npt_us", npt_supported, npt_us_prepare,
+          default_prepare_gif_clear, npt_us_test,
+          default_finished, npt_us_check },
+        { "npt_rw", npt_supported, npt_rw_prepare,
+          default_prepare_gif_clear, npt_rw_test,
+          default_finished, npt_rw_check },
+        { "npt_rw_pfwalk", npt_supported, npt_rw_pfwalk_prepare,
+          default_prepare_gif_clear, null_test,
+          default_finished, npt_rw_pfwalk_check },
+        { "npt_l1mmio", npt_supported, npt_l1mmio_prepare,
+          default_prepare_gif_clear, npt_l1mmio_test,
+          default_finished, npt_l1mmio_check },
+        { "npt_rw_l1mmio", npt_supported, npt_rw_l1mmio_prepare,
+          default_prepare_gif_clear, npt_rw_l1mmio_test,
+          default_finished, npt_rw_l1mmio_check },
+        TEST(svm_npt_rsvd_bits_test),
+        { NULL, NULL, NULL, NULL, NULL, NULL, NULL }
+};
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 1bd4d3b..37ca792 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -10,11 +10,10 @@
 #include "isr.h"
 #include "apic.h"
 #include "delay.h"
+#include "vmalloc.h"
 
 #define SVM_EXIT_MAX_DR_INTERCEPT 0x3f
 
-static void *scratch_page;
-
 #define LATENCY_RUNS 1000000
 
 extern u16 cpu_online_count;
@@ -698,181 +697,6 @@ static bool sel_cr0_bug_check(struct svm_test *test)
         return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
 }
 
-static void npt_nx_prepare(struct svm_test *test)
-{
-        u64 *pte;
-
-        test->scratch = rdmsr(MSR_EFER);
-        wrmsr(MSR_EFER, test->scratch | EFER_NX);
-
-        /* Clear the guest's EFER.NX, it should not affect NPT behavior. */
-        vmcb->save.efer &= ~EFER_NX;
-
-        pte = npt_get_pte((u64)null_test);
-
-        *pte |= PT64_NX_MASK;
-}
-
-static bool npt_nx_check(struct svm_test *test)
-{
-        u64 *pte = npt_get_pte((u64)null_test);
-
-        wrmsr(MSR_EFER, test->scratch);
-
-        *pte &= ~PT64_NX_MASK;
-
-        return (vmcb->control.exit_code == SVM_EXIT_NPF)
-            && (vmcb->control.exit_info_1 == 0x100000015ULL);
-}
-
-static void npt_np_prepare(struct svm_test *test)
-{
-        u64 *pte;
-
-        scratch_page = alloc_page();
-        pte = npt_get_pte((u64)scratch_page);
-
-        *pte &= ~1ULL;
-}
-
-static void npt_np_test(struct svm_test *test)
-{
-        (void) *(volatile u64 *)scratch_page;
-}
-
-static bool npt_np_check(struct svm_test *test)
-{
-        u64 *pte = npt_get_pte((u64)scratch_page);
-
-        *pte |= 1ULL;
-
-        return (vmcb->control.exit_code == SVM_EXIT_NPF)
-            && (vmcb->control.exit_info_1 == 0x100000004ULL);
-}
-
-static void npt_us_prepare(struct svm_test *test)
-{
-        u64 *pte;
-
-        scratch_page = alloc_page();
-        pte = npt_get_pte((u64)scratch_page);
-
-        *pte &= ~(1ULL << 2);
-}
-
-static void npt_us_test(struct svm_test *test)
-{
-        (void) *(volatile u64 *)scratch_page;
-}
-
-static bool npt_us_check(struct svm_test *test)
-{
-        u64 *pte = npt_get_pte((u64)scratch_page);
-
-        *pte |= (1ULL << 2);
-
-        return (vmcb->control.exit_code == SVM_EXIT_NPF)
-            && (vmcb->control.exit_info_1 == 0x100000005ULL);
-}
-
-static void npt_rw_prepare(struct svm_test *test)
-{
-        u64 *pte;
-
-        pte = npt_get_pte(0x80000);
-
-        *pte &= ~(1ULL << 1);
-}
-
-static void npt_rw_test(struct svm_test *test)
-{
-        u64 *data = (void*)(0x80000);
-
-        *data = 0;
-}
-
-static bool npt_rw_check(struct svm_test *test)
-{
-        u64 *pte = npt_get_pte(0x80000);
-
-        *pte |= (1ULL << 1);
-
-        return (vmcb->control.exit_code == SVM_EXIT_NPF)
-            && (vmcb->control.exit_info_1 == 0x100000007ULL);
-}
-
-static void npt_rw_pfwalk_prepare(struct svm_test *test)
-{
-        u64 *pte;
-
-        pte = npt_get_pte(read_cr3());
-
-        *pte &= ~(1ULL << 1);
-}
-
-static bool npt_rw_pfwalk_check(struct svm_test *test)
-{
-        u64 *pte = npt_get_pte(read_cr3());
-
-        *pte |= (1ULL << 1);
-
-        return (vmcb->control.exit_code == SVM_EXIT_NPF)
-            && (vmcb->control.exit_info_1 == 0x200000007ULL)
-            && (vmcb->control.exit_info_2 == read_cr3());
-}
-
-static void npt_l1mmio_prepare(struct svm_test *test)
-{
-}
-
-u32 nested_apic_version1;
-u32 nested_apic_version2;
-
-static void npt_l1mmio_test(struct svm_test *test)
-{
-        volatile u32 *data = (volatile void*)(0xfee00030UL);
-
-        nested_apic_version1 = *data;
-        nested_apic_version2 = *data;
-}
-
-static bool npt_l1mmio_check(struct svm_test *test)
-{
-        volatile u32 *data = (volatile void*)(0xfee00030);
-        u32 lvr = *data;
-
-        return nested_apic_version1 == lvr && nested_apic_version2 == lvr;
-}
-
-static void npt_rw_l1mmio_prepare(struct svm_test *test)
-{
-        u64 *pte;
-
-        pte = npt_get_pte(0xfee00080);
-
-        *pte &= ~(1ULL << 1);
-}
-
-static void npt_rw_l1mmio_test(struct svm_test *test)
-{
-        volatile u32 *data = (volatile void*)(0xfee00080);
-
-        *data = *data;
-}
-
-static bool npt_rw_l1mmio_check(struct svm_test *test)
-{
-        u64 *pte = npt_get_pte(0xfee00080);
-
-        *pte |= (1ULL << 1);
-
-        return (vmcb->control.exit_code == SVM_EXIT_NPF)
-            && (vmcb->control.exit_info_1 == 0x100000007ULL);
-}
-
 #define TSC_ADJUST_VALUE (1ll << 32)
 #define TSC_OFFSET_VALUE (~0ull << 48)
 static bool ok;
@@ -2672,169 +2496,6 @@ static void svm_test_singlestep(void)
                vmcb->save.rip == (u64)&guest_end,
                "Test EFLAGS.TF on VMRUN: guest execution completion");
 }
 
-static void __svm_npt_rsvd_bits_test(u64 *pxe, u64 rsvd_bits, u64 efer,
-                                     ulong cr4, u64 guest_efer, ulong guest_cr4)
-{
-        u64 pxe_orig = *pxe;
-        int exit_reason;
-        u64 pfec;
-
-        wrmsr(MSR_EFER, efer);
-        write_cr4(cr4);
-
-        vmcb->save.efer = guest_efer;
-        vmcb->save.cr4 = guest_cr4;
-
-        *pxe |= rsvd_bits;
-
-        exit_reason = svm_vmrun();
-
-        report(exit_reason == SVM_EXIT_NPF,
-               "Wanted #NPF on rsvd bits = 0x%lx, got exit = 0x%x", rsvd_bits, exit_reason);
-
-        if (pxe == npt_get_pdpe() || pxe == npt_get_pml4e()) {
-                /*
-                 * The guest's page tables will blow up on a bad PDPE/PML4E,
-                 * before starting the final walk of the guest page.
-                 */
-                pfec = 0x20000000full;
-        } else {
-                /* RSVD #NPF on final walk of guest page. */
-                pfec = 0x10000000dULL;
-
-                /* PFEC.FETCH=1 if NX=1 *or* SMEP=1. */
-                if ((cr4 & X86_CR4_SMEP) || (efer & EFER_NX))
-                        pfec |= 0x10;
-
-        }
-
-        report(vmcb->control.exit_info_1 == pfec,
-               "Wanted PFEC = 0x%lx, got PFEC = %lx, PxE = 0x%lx. "
-               "host.NX = %u, host.SMEP = %u, guest.NX = %u, guest.SMEP = %u",
-               pfec, vmcb->control.exit_info_1, *pxe,
-               !!(efer & EFER_NX), !!(cr4 & X86_CR4_SMEP),
-               !!(guest_efer & EFER_NX), !!(guest_cr4 & X86_CR4_SMEP));
-
-        *pxe = pxe_orig;
-}
-
-static void _svm_npt_rsvd_bits_test(u64 *pxe, u64 pxe_rsvd_bits, u64 efer,
-                                    ulong cr4, u64 guest_efer, ulong guest_cr4)
-{
-        u64 rsvd_bits;
-        int i;
-
-        /*
-         * RDTSC or RDRAND can sometimes fail to generate a valid reserved bits
-         */
-        if (!pxe_rsvd_bits) {
-                report_skip("svm_npt_rsvd_bits_test: Reserved bits are not valid");
-                return;
-        }
-
-        /*
-         * Test all combinations of guest/host EFER.NX and CR4.SMEP.  If host
-         * EFER.NX=0, use NX as the reserved bit, otherwise use the passed in
-         * @pxe_rsvd_bits.
-         */
-        for (i = 0; i < 16; i++) {
-                if (i & 1) {
-                        rsvd_bits = pxe_rsvd_bits;
-                        efer |= EFER_NX;
-                } else {
-                        rsvd_bits = PT64_NX_MASK;
-                        efer &= ~EFER_NX;
-                }
-                if (i & 2)
-                        cr4 |= X86_CR4_SMEP;
-                else
-                        cr4 &= ~X86_CR4_SMEP;
-                if (i & 4)
-                        guest_efer |= EFER_NX;
-                else
-                        guest_efer &= ~EFER_NX;
-                if (i & 8)
-                        guest_cr4 |= X86_CR4_SMEP;
-                else
-                        guest_cr4 &= ~X86_CR4_SMEP;
-
-                __svm_npt_rsvd_bits_test(pxe, rsvd_bits, efer, cr4,
-                                         guest_efer, guest_cr4);
-        }
-}
-
-static u64 get_random_bits(u64 hi, u64 low)
-{
-        unsigned retry = 5;
-        u64 rsvd_bits = 0;
-
-        if (this_cpu_has(X86_FEATURE_RDRAND)) {
-                do {
-                        rsvd_bits = (rdrand() << low) & GENMASK_ULL(hi, low);
-                        retry--;
-                } while (!rsvd_bits && retry);
-        }
-
-        if (!rsvd_bits) {
-                retry = 5;
-                do {
-                        rsvd_bits = (rdtsc() << low) & GENMASK_ULL(hi, low);
-                        retry--;
-                } while (!rsvd_bits && retry);
-        }
-
-        return rsvd_bits;
-}
-
-
-static void svm_npt_rsvd_bits_test(void)
-{
-        u64 saved_efer, host_efer, sg_efer, guest_efer;
-        ulong saved_cr4, host_cr4, sg_cr4, guest_cr4;
-
-        if (!npt_supported()) {
-                report_skip("NPT not supported");
-                return;
-        }
-
-        saved_efer = host_efer = rdmsr(MSR_EFER);
-        saved_cr4 = host_cr4 = read_cr4();
-        sg_efer = guest_efer = vmcb->save.efer;
-        sg_cr4 = guest_cr4 = vmcb->save.cr4;
-
-        test_set_guest(basic_guest_main);
-
-        /*
-         * 4k PTEs don't have reserved bits if MAXPHYADDR >= 52, just skip the
-         * sub-test.  The NX test is still valid, but the extra bit of coverage
-         * isn't worth the extra complexity.
-         */
-        if (cpuid_maxphyaddr() >= 52)
-                goto skip_pte_test;
-
-        _svm_npt_rsvd_bits_test(npt_get_pte((u64)basic_guest_main),
-                                get_random_bits(51, cpuid_maxphyaddr()),
-                                host_efer, host_cr4, guest_efer, guest_cr4);
-
-skip_pte_test:
-        _svm_npt_rsvd_bits_test(npt_get_pde((u64)basic_guest_main),
-                                get_random_bits(20, 13) | PT_PAGE_SIZE_MASK,
-                                host_efer, host_cr4, guest_efer, guest_cr4);
-
-        _svm_npt_rsvd_bits_test(npt_get_pdpe(),
-                                PT_PAGE_SIZE_MASK |
-                                (this_cpu_has(X86_FEATURE_GBPAGES) ? get_random_bits(29, 13) : 0),
-                                host_efer, host_cr4, guest_efer, guest_cr4);
-
-        _svm_npt_rsvd_bits_test(npt_get_pml4e(), BIT_ULL(8),
-                                host_efer, host_cr4, guest_efer, guest_cr4);
-
-        wrmsr(MSR_EFER, saved_efer);
-        write_cr4(saved_cr4);
-        vmcb->save.efer = sg_efer;
-        vmcb->save.cr4 = sg_cr4;
-}
-
 static bool volatile svm_errata_reproduced = false;
 static unsigned long volatile physical = 0;
 
@@ -3634,6 +3295,14 @@ static void svm_intr_intercept_mix_smi(void)
         svm_intr_intercept_mix_run_guest(NULL, SVM_EXIT_SMI);
 }
 
+int main(int ac, char **av)
+{
+        pteval_t opt_mask = 0;
+
+        __setup_vm(&opt_mask);
+        return run_svm_tests(ac, av);
+}
+
 struct svm_test svm_tests[] = {
         { "null", default_supported, default_prepare,
           default_prepare_gif_clear, null_test,
@@ -3677,27 +3346,6 @@ struct svm_test svm_tests[] = {
         { "sel_cr0_bug", default_supported, sel_cr0_bug_prepare,
           default_prepare_gif_clear, sel_cr0_bug_test,
           sel_cr0_bug_finished, sel_cr0_bug_check },
-        { "npt_nx", npt_supported, npt_nx_prepare,
-          default_prepare_gif_clear, null_test,
-          default_finished, npt_nx_check },
-        { "npt_np", npt_supported, npt_np_prepare,
-          default_prepare_gif_clear, npt_np_test,
-          default_finished, npt_np_check },
-        { "npt_us", npt_supported, npt_us_prepare,
-          default_prepare_gif_clear, npt_us_test,
-          default_finished, npt_us_check },
-        { "npt_rw", npt_supported, npt_rw_prepare,
-          default_prepare_gif_clear, npt_rw_test,
-          default_finished, npt_rw_check },
-        { "npt_rw_pfwalk", npt_supported, npt_rw_pfwalk_prepare,
-          default_prepare_gif_clear, null_test,
-          default_finished, npt_rw_pfwalk_check },
-        { "npt_l1mmio", npt_supported, npt_l1mmio_prepare,
-          default_prepare_gif_clear, npt_l1mmio_test,
-          default_finished, npt_l1mmio_check },
-        { "npt_rw_l1mmio", npt_supported, npt_rw_l1mmio_prepare,
-          default_prepare_gif_clear, npt_rw_l1mmio_test,
-          default_finished, npt_rw_l1mmio_check },
         { "tsc_adjust", tsc_adjust_supported, tsc_adjust_prepare,
           default_prepare_gif_clear, tsc_adjust_test,
           default_finished, tsc_adjust_check },
@@ -3749,7 +3397,6 @@ struct svm_test svm_tests[] = {
           vgif_check },
         TEST(svm_cr4_osxsave_test),
         TEST(svm_guest_state_test),
-        TEST(svm_npt_rsvd_bits_test),
         TEST(svm_vmrun_errata_test),
         TEST(svm_vmload_vmsave),
         TEST(svm_test_singlestep),
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index d6dc19f..01d775e 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -260,6 +260,12 @@ extra_params = -cpu max,+svm -overcommit cpu-pm=on -m 4g -append pause_filter_te
 arch = x86_64
 groups = svm
 
+[svm_npt]
+file = svm_npt.flat
+smp = 2
+extra_params = -cpu max,+svm -m 4g
+arch = x86_64
+
 [taskswitch]
 file = taskswitch.flat
 arch = i386
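As a reading aid for the exit_info_1 constants checked by the npt_* tests
above: they are raw #NPF error codes. Assuming the AMD APM's #NPF
error-code layout, the values decode as follows; the NPF_* names are
introduced here purely for illustration and are not defined by this series:

/* Bit names assumed per the AMD APM; not defined anywhere in this series. */
#define NPF_P   (1ULL << 0)     /* the NPT entry was present */
#define NPF_RW  (1ULL << 1)     /* the access was a write */
#define NPF_US  (1ULL << 2)     /* user access; NPT walks are always "user" */
#define NPF_ID  (1ULL << 4)     /* the access was an instruction fetch */
#define NPF_GPA (1ULL << 32)    /* fault on the final guest-physical access */
#define NPF_GPT (1ULL << 33)    /* fault while walking the guest's page tables */

/*
 * npt_np:        0x100000004ULL == NPF_GPA | NPF_US
 * npt_us:        0x100000005ULL == NPF_GPA | NPF_US | NPF_P
 * npt_rw:        0x100000007ULL == NPF_GPA | NPF_US | NPF_RW | NPF_P
 * npt_nx:        0x100000015ULL == NPF_GPA | NPF_ID | NPF_US | NPF_P
 * npt_rw_pfwalk: 0x200000007ULL == NPF_GPT | NPF_US | NPF_RW | NPF_P
 */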
From patchwork Tue Jun 28 11:38:48 2022
X-Patchwork-Submitter: Manali Shukla
X-Patchwork-Id: 12898123
From: Manali Shukla
Subject: [kvm-unit-tests PATCH v5 3/8] x86: nSVM: Allow nSVM tests to run with PT_USER_MASK enabled
Date: Tue, 28 Jun 2022 11:38:48 +0000
Message-ID: <20220628113853.392569-4-manali.shukla@amd.com>
In-Reply-To: <20220628113853.392569-1-manali.shukla@amd.com>
X-Mailing-List: kvm@vger.kernel.org

Now that the nNPT testcases, which need to run without USER page tables,
live in their own test, use the default setup_vm() to create page tables
with the USER bit set, so that usermode testcases can be added in the
future.
Suggested-by: Sean Christopherson
Signed-off-by: Manali Shukla
---
 x86/svm_tests.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 37ca792..1692912 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -10,7 +10,6 @@
 #include "isr.h"
 #include "apic.h"
 #include "delay.h"
-#include "vmalloc.h"
 
 #define SVM_EXIT_MAX_DR_INTERCEPT 0x3f
 
@@ -3297,9 +3296,7 @@ static void svm_intr_intercept_mix_smi(void)
 
 int main(int ac, char **av)
 {
-        pteval_t opt_mask = 0;
-
-        __setup_vm(&opt_mask);
+        setup_vm();
         return run_svm_tests(ac, av);
 }
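With the USER bit now present on the nSVM test's page tables, a usermode
testcase becomes possible. A hypothetical skeleton of such a test, assuming
the run_in_user() helper from lib/x86/usermode.h (its exact signature is an
assumption here, as is every other name below; nothing in this sketch is
added by the series):

#include "usermode.h"

static uint64_t user_payload(uint64_t a1, uint64_t a2, uint64_t a3, uint64_t a4)
{
        return 42;      /* executes at CPL3 now that PTEs carry PT_USER_MASK */
}

static void svm_usermode_smoke_test(void)
{
        bool raised_vector;
        uint64_t ret = run_in_user((usermode_func)user_payload, GP_VECTOR,
                                   0, 0, 0, 0, &raised_vector);

        report(!raised_vector && ret == 42, "ran payload at CPL3");
}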
From patchwork Tue Jun 28 11:38:49 2022
X-Patchwork-Submitter: Manali Shukla
X-Patchwork-Id: 12898124
From: Manali Shukla
Subject: [kvm-unit-tests PATCH v5 4/8] x86: Improve setup_mmu_range() to implement npt
Date: Tue, 28 Jun 2022 11:38:49 +0000
Message-ID: <20220628113853.392569-5-manali.shukla@amd.com>
In-Reply-To: <20220628113853.392569-1-manali.shukla@amd.com>
X-Mailing-List: kvm@vger.kernel.org

Modify setup_mmu_range() so that a nested page table can be built
dynamically, setting the PT_USER_MASK bit on all NPT pages, because any
nested page table access performed by the MMU is treated as a user access.

Suggested-by: Sean Christopherson
Signed-off-by: Manali Shukla
---
 lib/x86/vm.c | 25 +++++++++++++++++++++----
 lib/x86/vm.h |  8 ++++++++
 2 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/lib/x86/vm.c b/lib/x86/vm.c
index 25a4f5f..46c36e5 100644
--- a/lib/x86/vm.c
+++ b/lib/x86/vm.c
@@ -140,16 +140,33 @@ bool any_present_pages(pgd_t *cr3, void *virt, size_t len)
         return false;
 }
 
-static void setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len)
+void __setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len,
+                       unsigned long long mmu_flags)
 {
+        u64 orig_opt_mask = pte_opt_mask;
         u64 max = (u64)len + (u64)start;
         u64 phys = start;
 
-        while (phys + LARGE_PAGE_SIZE <= max) {
-                install_large_page(cr3, phys, (void *)(ulong)phys);
-                phys += LARGE_PAGE_SIZE;
+        /*
+         * Allocate 4k pages only for the nested page table; PT_USER_MASK
+         * needs to be enabled only for nested pages.
+         */
+        if (mmu_flags & IS_NESTED_MMU)
+                pte_opt_mask |= PT_USER_MASK;
+
+        if (mmu_flags & USE_HUGEPAGES) {
+                while (phys + LARGE_PAGE_SIZE <= max) {
+                        install_large_page(cr3, phys, (void *)(ulong)phys);
+                        phys += LARGE_PAGE_SIZE;
+                }
         }
         install_pages(cr3, phys, max - phys, (void *)(ulong)phys);
+
+        pte_opt_mask = orig_opt_mask;
+}
+
+static inline void setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len)
+{
+        __setup_mmu_range(cr3, start, len, USE_HUGEPAGES);
 }
 
 static void set_additional_vcpu_vmregs(struct vm_vcpu_info *info)
diff --git a/lib/x86/vm.h b/lib/x86/vm.h
index 4c6dff9..2df19e3 100644
--- a/lib/x86/vm.h
+++ b/lib/x86/vm.h
@@ -4,6 +4,10 @@
 #include "processor.h"
 #include "asm/page.h"
 #include "asm/io.h"
+#include "asm/bitops.h"
+
+#define IS_NESTED_MMU   BIT(0)
+#define USE_HUGEPAGES   BIT(1)
 
 void setup_5level_page_table(void);
 
@@ -37,6 +41,10 @@ pteval_t *install_pte(pgd_t *cr3,
 pteval_t *install_large_page(pgd_t *cr3, phys_addr_t phys, void *virt);
 void install_pages(pgd_t *cr3, phys_addr_t phys, size_t len, void *virt);
 bool any_present_pages(pgd_t *cr3, void *virt, size_t len);
+void set_pte_opt_mask(void);
+void reset_pte_opt_mask(void);
+void __setup_mmu_range(pgd_t *cr3, phys_addr_t start, size_t len,
+                       unsigned long long mmu_flags);
 
 static inline void *current_page_table(void)
 {
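A short usage sketch of the new helper: a nested page table wants 4k-only,
USER-bit mappings, while the host MMU path keeps its huge-page behavior via
the setup_mmu_range() wrapper. The NPT root variable and the 4 GiB size
below are illustrative choices, not values taken from this series:

#include "vm.h"
#include "alloc_page.h"

static pgd_t *build_npt_sketch(void)
{
        pgd_t *npt_cr3 = alloc_page();

        /* 4k pages only; PT_USER_MASK is OR'ed into every installed entry. */
        __setup_mmu_range(npt_cr3, 0, 1ull << 32, IS_NESTED_MMU);

        return npt_cr3;
}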
165.204.84.17) smtp.rcpttodomain=redhat.com smtp.mailfrom=amd.com; dmarc=pass (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=RBFmLH8ic6qA7hZHS5iw9Sp98jkPiIaIZ5b4SZzSh7U=; b=pkdZ6TcQb2ESxe8eOqCfH4iWZqVe1O7ZKhAyh2wG63g/Uzvux+RDE1vLoR5DGxYs32SaarxNZ7VVjDaXw3lDR/DTHcMQds2M5iJw/A9WQXwLZE8vaVcSkbrcHTXYsE0opkg2gV0/+vi7Dp1yIuJ7nEvZ41V7IX5WZObJesLfSIQ= Received: from BN0PR10CA0029.namprd10.prod.outlook.com (2603:10b6:408:143::7) by MN2PR12MB4502.namprd12.prod.outlook.com (2603:10b6:208:263::20) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Tue, 28 Jun 2022 11:41:11 +0000 Received: from BN8NAM11FT041.eop-nam11.prod.protection.outlook.com (2603:10b6:408:143:cafe::a3) by BN0PR10CA0029.outlook.office365.com (2603:10b6:408:143::7) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16 via Frontend Transport; Tue, 28 Jun 2022 11:41:10 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17) smtp.mailfrom=amd.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=amd.com; Received-SPF: Pass (protection.outlook.com: domain of amd.com designates 165.204.84.17 as permitted sender) receiver=protection.outlook.com; client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C Received: from SATLEXMB04.amd.com (165.204.84.17) by BN8NAM11FT041.mail.protection.outlook.com (10.13.177.18) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 11:41:10 +0000 Received: from bhadra.amd.com (10.180.168.240) by SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Tue, 28 Jun 2022 06:41:09 -0500 From: Manali Shukla To: , CC: Subject: [kvm-unit-tests PATCH v5 5/8] x86: nSVM: Build up the nested page table dynamically Date: Tue, 28 Jun 2022 11:38:50 +0000 Message-ID: <20220628113853.392569-6-manali.shukla@amd.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220628113853.392569-1-manali.shukla@amd.com> References: <20220628113853.392569-1-manali.shukla@amd.com> MIME-Version: 1.0 X-Originating-IP: [10.180.168.240] X-ClientProxiedBy: SATLEXMB04.amd.com (10.181.40.145) To SATLEXMB04.amd.com (10.181.40.145) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: 132a43dd-adef-4ce5-d9f2-08da58fb1925 X-MS-TrafficTypeDiagnostic: MN2PR12MB4502:EE_ X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: 
IMh8EYmnU5dh+MIWIRokgoWMdhgawYm26LYeUbkrWX8irEmOt8Xyh2XIIhOrnsSX4mDnLp3hnO3IVdjrEnbeDlW6l8sTpPdcPGwche/alUVWfaJOLKLjEPefFfdGisLD0uLoAhMPGhQz7Hx9KIqSR/vmd0ocdgYZ6ddynCpAblT8Zrrsq6mgAM4zHel0DkWZQrwQcWnIKtJ3xhOcqeJmqN6rjzfCO51ElpgM1zm0va6ZX3jY+4bapGz13rrcAVEWstasmYgr1XgpSk/Hw8lILarEsPQ7yoqUHvXlY0Uns/1gEv3RC2NKUy/+7WKEwPFaMuQ58DCjYSFEYFXvkpagVURXhD9P82gR0aCRFf+zE+BDxSlQrFPDXxhBsIizK5fMuvo1PJZyTXF2G2C7L5OtkZyw0ie/JPrHBfJmMmNYO95cFBmh5SdrM6qr+60pjyazj81yWhfEeqRFkjBxdv6TOSWglbGA6Kue11KWiHOUnEZPEZV+FRRCT5D3lv3D3Fuj6TWs38vyjCrvj7i+h68t+PpApGjFoxlG4g+uuTYRABBlZgRIg/gBjWXGJSG0f6MGASseTQvB1YSukEe5vcP1eeILQYTX76vp1Cxz+e9jHYBNAqKqu2ij5jrVZaEKFsGXYwrVX7NHDP8h/VOuHEMwo2BMKyzsPnfKwu6vKWQ2lh8HUuHdHyCdvYZt0VuQOFhvlGWT+SDs9gmuS2KRKOy5ncvYBikmL6XUtGo3PFLMBA2Eg68Kgq9TeyU6xk6jcgAMv27y+FuqC/bcoRG1lksHOW73RdmsKgkd/U+NNwXHoInC3V0SI1HIoPmViR1YsVpZgqGPt4go6GthvcWin7jLxw== X-Forefront-Antispam-Report: CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230016)(4636009)(39860400002)(136003)(396003)(376002)(346002)(36840700001)(46966006)(40470700004)(40480700001)(40460700003)(1076003)(110136005)(6666004)(356005)(81166007)(82740400003)(36860700001)(316002)(36756003)(44832011)(83380400001)(70586007)(26005)(82310400005)(70206006)(2906002)(47076005)(186003)(336012)(8676002)(86362001)(2616005)(5660300002)(426003)(16526019)(7696005)(4326008)(478600001)(41300700001)(8936002)(36900700001);DIR:OUT;SFP:1101; X-OriginatorOrg: amd.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 11:41:10.7668 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 132a43dd-adef-4ce5-d9f2-08da58fb1925 X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT041.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4502 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Build up nested page table dynamically based on the RAM size of VM instead of building it statically with 2048 PTEs and one PML4 entry, so that nested page table can be easily extensible to provide seperate range of addressses to test various test cases, if needed. 
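In outline, the dynamic build reduces to the sketch below. It is
condensed from the diff that follows and assumes the kvm-unit-tests
helpers alloc_page() and fwcfg_get_u64(), plus the
__setup_mmu_range()/IS_NESTED_MMU pair introduced earlier in this
series:

	/* Sketch: identity-map guest RAM in the NPT with 4k pages. */
	u64 *pml4e = alloc_page();
	u64 end_of_memory = fwcfg_get_u64(FW_CFG_RAM_SIZE);

	/* Always cover at least the low 4 GiB, so tests touching
	 * addresses below 4G still have translations. */
	if (end_of_memory < (1ul << 32))
		end_of_memory = (1ul << 32);

	/* IS_NESTED_MMU makes __setup_mmu_range() set PT_USER_MASK on
	 * every entry, since the MMU treats all nested-page-table
	 * accesses as user accesses. */
	__setup_mmu_range(pml4e, 0, end_of_memory, IS_NESTED_MMU);

A test can then locate and corrupt individual NPT entries through the
generic page-table walkers, e.g. (guest_page is a hypothetical
identity-mapped address used only for illustration):

	u64 *pte = npt_get_pte((u64)guest_page);	/* 4k-level entry */
	*pte &= ~1ull;	/* clear Present to provoke a #NPF on next access */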
Signed-off-by: Manali Shukla
---
 x86/svm.c     | 73 ++++++++++++++++-----------------------------------
 x86/svm.h     |  4 ++-
 x86/svm_npt.c |  5 ++--
 3 files changed, 28 insertions(+), 54 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index b586807..08b0b15 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -8,6 +8,7 @@
 #include "desc.h"
 #include "msr.h"
 #include "vm.h"
+#include "fwcfg.h"
 #include "smp.h"
 #include "types.h"
 #include "alloc_page.h"
@@ -16,38 +17,27 @@
 #include "vmalloc.h"
 
 /* for the nested page table*/
-u64 *pte[2048];
-u64 *pde[4];
-u64 *pdpe;
 u64 *pml4e;
 
 struct vmcb *vmcb;
 
 u64 *npt_get_pte(u64 address)
 {
-	int i1, i2;
-
-	address >>= 12;
-	i1 = (address >> 9) & 0x7ff;
-	i2 = address & 0x1ff;
-
-	return &pte[i1][i2];
+	return get_pte(npt_get_pml4e(), (void*)address);
 }
 
 u64 *npt_get_pde(u64 address)
 {
-	int i1, i2;
-
-	address >>= 21;
-	i1 = (address >> 9) & 0x3;
-	i2 = address & 0x1ff;
-
-	return &pde[i1][i2];
+	struct pte_search search;
+	search = find_pte_level(npt_get_pml4e(), (void*)address, 2);
+	return search.pte;
 }
 
-u64 *npt_get_pdpe(void)
+u64 *npt_get_pdpe(u64 address)
 {
-	return pdpe;
+	struct pte_search search;
+	search = find_pte_level(npt_get_pml4e(), (void*)address, 3);
+	return search.pte;
 }
 
 u64 *npt_get_pml4e(void)
@@ -300,11 +290,21 @@ static void set_additional_vcpu_msr(void *msr_efer)
 	wrmsr(MSR_EFER, (ulong)msr_efer | EFER_SVME);
 }
 
+void setup_npt(void) {
+	u64 end_of_memory;
+	pml4e = alloc_page();
+
+	end_of_memory = fwcfg_get_u64(FW_CFG_RAM_SIZE);
+	if (end_of_memory < (1ul << 32))
+		end_of_memory = (1ul << 32);
+
+	__setup_mmu_range(pml4e, 0, end_of_memory, IS_NESTED_MMU);
+}
+
 static void setup_svm(void)
 {
 	void *hsave = alloc_page();
-	u64 *page, address;
-	int i,j;
+	int i;
 
 	wrmsr(MSR_VM_HSAVE_PA, virt_to_phys(hsave));
 	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
@@ -327,36 +327,7 @@ static void setup_svm(void)
 	 * pages to get enough granularity for the NPT unit-tests.
 	 */
-	address = 0;
-
-	/* PTE level */
-	for (i = 0; i < 2048; ++i) {
-		page = alloc_page();
-
-		for (j = 0; j < 512; ++j, address += 4096)
-			page[j] = address | 0x067ULL;
-
-		pte[i] = page;
-	}
-
-	/* PDE level */
-	for (i = 0; i < 4; ++i) {
-		page = alloc_page();
-
-		for (j = 0; j < 512; ++j)
-			page[j] = (u64)pte[(i * 512) + j] | 0x027ULL;
-
-		pde[i] = page;
-	}
-
-	/* PDPe level */
-	pdpe = alloc_page();
-	for (i = 0; i < 4; ++i)
-		pdpe[i] = ((u64)(pde[i])) | 0x27;
-
-	/* PML4e level */
-	pml4e = alloc_page();
-	pml4e[0] = ((u64)pdpe) | 0x27;
+	setup_npt();
 }
 
 int matched;
diff --git a/x86/svm.h b/x86/svm.h
index 123e64f..85eff3f 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -406,7 +406,7 @@ typedef void (*test_guest_func)(struct svm_test *);
 int run_svm_tests(int ac, char **av);
 u64 *npt_get_pte(u64 address);
 u64 *npt_get_pde(u64 address);
-u64 *npt_get_pdpe(void);
+u64 *npt_get_pdpe(u64 address);
 u64 *npt_get_pml4e(void);
 bool smp_supported(void);
 bool default_supported(void);
@@ -429,6 +429,8 @@ int __svm_vmrun(u64 rip);
 void __svm_bare_vmrun(void);
 int svm_vmrun(void);
 void test_set_guest(test_guest_func func);
+void setup_npt(void);
+u64* get_npt_pte(u64 *pml4, u64 guest_addr, int level);
 
 extern struct vmcb *vmcb;
 extern struct svm_test svm_tests[];
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
index 53e8a90..ab4dcf4 100644
--- a/x86/svm_npt.c
+++ b/x86/svm_npt.c
@@ -209,7 +209,8 @@ static void __svm_npt_rsvd_bits_test(u64 * pxe, u64 rsvd_bits, u64 efer,
 	       "Wanted #NPF on rsvd bits = 0x%lx, got exit = 0x%x",
 	       rsvd_bits, exit_reason);
 
-	if (pxe == npt_get_pdpe() || pxe == npt_get_pml4e()) {
+	if (pxe == npt_get_pdpe((u64) basic_guest_main)
+	    || pxe == npt_get_pml4e()) {
 		/*
 		 * The guest's page tables will blow up on a bad PDPE/PML4E,
 		 * before starting the final walk of the guest page.
@@ -338,7 +339,7 @@ skip_pte_test:
 				get_random_bits(20, 13) | PT_PAGE_SIZE_MASK,
 				host_efer, host_cr4, guest_efer, guest_cr4);
 
-	_svm_npt_rsvd_bits_test(npt_get_pdpe(),
+	_svm_npt_rsvd_bits_test(npt_get_pdpe((u64) basic_guest_main),
 				PT_PAGE_SIZE_MASK |
 				(this_cpu_has(X86_FEATURE_GBPAGES) ?
 				 get_random_bits(29, 13) : 0),
 				host_efer,

From patchwork Tue Jun 28 11:38:51 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Manali Shukla
X-Patchwork-Id: 12898126
From: Manali Shukla
Subject: [kvm-unit-tests PATCH v5 6/8] x86: nSVM: Correct indentation for svm.c
Date: Tue, 28 Jun 2022 11:38:51 +0000
Message-ID: <20220628113853.392569-7-manali.shukla@amd.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220628113853.392569-1-manali.shukla@amd.com>
References: <20220628113853.392569-1-manali.shukla@amd.com>
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

Fixed indentation errors in svm.c. No functional changes intended.

Signed-off-by: Manali Shukla
---
 x86/svm.c | 144 +++++++++++++++++++++++++++---------------------------
 1 file changed, 72 insertions(+), 72 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index 08b0b15..e0ef4ec 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -52,7 +52,7 @@ bool smp_supported(void)
 bool default_supported(void)
 {
-    return true;
+	return true;
 }
 
 bool vgif_supported(void)
@@ -62,22 +62,22 @@ bool vgif_supported(void)
 
 bool lbrv_supported(void)
 {
-    return this_cpu_has(X86_FEATURE_LBRV);
+	return this_cpu_has(X86_FEATURE_LBRV);
 }
 
 bool tsc_scale_supported(void)
 {
-    return this_cpu_has(X86_FEATURE_TSCRATEMSR);
+	return this_cpu_has(X86_FEATURE_TSCRATEMSR);
 }
 
 bool pause_filter_supported(void)
 {
-    return this_cpu_has(X86_FEATURE_PAUSEFILTER);
+	return this_cpu_has(X86_FEATURE_PAUSEFILTER);
 }
 
 bool pause_threshold_supported(void)
 {
-    return this_cpu_has(X86_FEATURE_PFTHRESHOLD);
+	return this_cpu_has(X86_FEATURE_PFTHRESHOLD);
 }
 
@@ -121,7 +121,7 @@ void inc_test_stage(struct svm_test *test)
 }
 
 static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
-                         u64 base, u32 limit, u32 attr)
+			 u64 base, u32 limit, u32 attr)
 {
 	seg->selector = selector;
 	seg->attrib = attr;
@@ -159,9 +159,9 @@ void vmcb_ident(struct vmcb *vmcb)
 	struct vmcb_save_area *save = &vmcb->save;
 	struct vmcb_control_area *ctrl = &vmcb->control;
 	u32 data_seg_attr = 3 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
-	    | SVM_SELECTOR_DB_MASK | SVM_SELECTOR_G_MASK;
+		| SVM_SELECTOR_DB_MASK | SVM_SELECTOR_G_MASK;
 	u32 code_seg_attr = 9 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
-	    | SVM_SELECTOR_L_MASK | SVM_SELECTOR_G_MASK;
+		| SVM_SELECTOR_L_MASK | SVM_SELECTOR_G_MASK;
 	struct descriptor_table_ptr desc_table_ptr;
 
 	memset(vmcb, 0, sizeof(*vmcb));
@@ -186,8 +186,8 @@ void vmcb_ident(struct vmcb *vmcb)
 	save->g_pat = rdmsr(MSR_IA32_CR_PAT);
 	save->dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
 	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
-	    (1ULL << INTERCEPT_VMMCALL) |
-	    (1ULL << INTERCEPT_SHUTDOWN);
+			  (1ULL << INTERCEPT_VMMCALL) |
+			  (1ULL << INTERCEPT_SHUTDOWN);
 	ctrl->iopm_base_pa = virt_to_phys(io_bitmap);
 	ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap);
@@ -220,12 +220,12 @@ int __svm_vmrun(u64 rip)
 	regs.rdi = (ulong)v2_test;
 
 	asm volatile (
-	    ASM_PRE_VMRUN_CMD
-	    "vmrun %%rax\n\t" \
-	    ASM_POST_VMRUN_CMD
-	    :
-	    : "a" (virt_to_phys(vmcb))
-	    : "memory", "r15");
+		ASM_PRE_VMRUN_CMD
+		"vmrun %%rax\n\t" \
+		ASM_POST_VMRUN_CMD
+		:
+		: "a" (virt_to_phys(vmcb))
+		: "memory", "r15");
 
 	return (vmcb->control.exit_code);
 }
@@ -253,33 +253,33 @@ static noinline void test_run(struct svm_test *test)
 	struct svm_test *the_test = test;
 	u64 the_vmcb = vmcb_phys;
 	asm volatile (
-	    "clgi;\n\t" // semi-colon needed for LLVM compatibility
-	    "sti \n\t"
-	    "call *%c[PREPARE_GIF_CLEAR](%[test]) \n \t"
-	    "mov %[vmcb_phys], %%rax \n\t"
-	    ASM_PRE_VMRUN_CMD
-	    ".global vmrun_rip\n\t" \
-	    "vmrun_rip: vmrun %%rax\n\t" \
-	    ASM_POST_VMRUN_CMD
-	    "cli \n\t"
-	    "stgi"
-	    : // inputs clobbered by the guest:
-	    "=D" (the_test), // first argument register
-	    "=b" (the_vmcb) // callee save register!
- : [test] "0" (the_test), - [vmcb_phys] "1"(the_vmcb), - [PREPARE_GIF_CLEAR] "i" (offsetof(struct svm_test, prepare_gif_clear)) - : "rax", "rcx", "rdx", "rsi", - "r8", "r9", "r10", "r11" , "r12", "r13", "r14", "r15", - "memory"); + "clgi;\n\t" // semi-colon needed for LLVM compatibility + "sti \n\t" + "call *%c[PREPARE_GIF_CLEAR](%[test]) \n \t" + "mov %[vmcb_phys], %%rax \n\t" + ASM_PRE_VMRUN_CMD + ".global vmrun_rip\n\t" \ + "vmrun_rip: vmrun %%rax\n\t" \ + ASM_POST_VMRUN_CMD + "cli \n\t" + "stgi" + : // inputs clobbered by the guest: + "=D" (the_test), // first argument register + "=b" (the_vmcb) // callee save register! + : [test] "0" (the_test), + [vmcb_phys] "1"(the_vmcb), + [PREPARE_GIF_CLEAR] "i" (offsetof(struct svm_test, prepare_gif_clear)) + : "rax", "rcx", "rdx", "rsi", + "r8", "r9", "r10", "r11" , "r12", "r13", "r14", "r15", + "memory"); ++test->exits; } while (!test->finished(test)); irq_enable(); report(test->succeeded(test), "%s", test->name); - if (test->on_vcpu) - test->on_vcpu_done = true; + if (test->on_vcpu) + test->on_vcpu_done = true; } static void set_additional_vcpu_msr(void *msr_efer) @@ -322,10 +322,10 @@ static void setup_svm(void) printf("NPT detected - running all tests with NPT enabled\n"); /* - * Nested paging supported - Build a nested page table - * Build the page-table bottom-up and map everything with 4k - * pages to get enough granularity for the NPT unit-tests. - */ + * Nested paging supported - Build a nested page table + * Build the page-table bottom-up and map everything with 4k + * pages to get enough granularity for the NPT unit-tests. + */ setup_npt(); } @@ -335,37 +335,37 @@ int matched; static bool test_wanted(const char *name, char *filters[], int filter_count) { - int i; - bool positive = false; - bool match = false; - char clean_name[strlen(name) + 1]; - char *c; - const char *n; - - /* Replace spaces with underscores. */ - n = name; - c = &clean_name[0]; - do *c++ = (*n == ' ') ? '_' : *n; - while (*n++); - - for (i = 0; i < filter_count; i++) { - const char *filter = filters[i]; - - if (filter[0] == '-') { - if (simple_glob(clean_name, filter + 1)) - return false; - } else { - positive = true; - match |= simple_glob(clean_name, filter); - } - } - - if (!positive || match) { - matched++; - return true; - } else { - return false; - } + int i; + bool positive = false; + bool match = false; + char clean_name[strlen(name) + 1]; + char *c; + const char *n; + + /* Replace spaces with underscores. */ + n = name; + c = &clean_name[0]; + do *c++ = (*n == ' ') ? 
'_' : *n; + while (*n++); + + for (i = 0; i < filter_count; i++) { + const char *filter = filters[i]; + + if (filter[0] == '-') { + if (simple_glob(clean_name, filter + 1)) + return false; + } else { + positive = true; + match |= simple_glob(clean_name, filter); + } + } + + if (!positive || match) { + matched++; + return true; + } else { + return false; + } } int run_svm_tests(int ac, char **av) From patchwork Tue Jun 28 11:38:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manali Shukla X-Patchwork-Id: 12898127 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 40681C43334 for ; Tue, 28 Jun 2022 11:42:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344282AbiF1LmD (ORCPT ); Tue, 28 Jun 2022 07:42:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37326 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240736AbiF1LmC (ORCPT ); Tue, 28 Jun 2022 07:42:02 -0400 Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2046.outbound.protection.outlook.com [40.107.220.46]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E58A12F3BA for ; Tue, 28 Jun 2022 04:41:58 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=mQLS5rAEYzOy1BA+rD/EzFmvaY+/XJUwKmbgTm31omfMVZyxDW8kK1kFSjUc5OlMKpaRtU+2xgX+wphLzwSkVwjfpfwEaXk4V2FQIu31YLRnuF2eYnh8uzXPYuh7rFVUhpfzX7KI5Ws+jf0+0U8IAFhMTGP3+OsZFxWt5nQ+QL0GCsmXh2KuVsBM+zQRs6dAUfLkxu3UjjbIQj+H0A/HztLm3Aflct9OJVZGpHfNYUkgeKEOO/ys2F8A3X/2cqN/QpNtpqy+RmPdha4mFOX6/AtTk0j6N+zX463Tlh+ZKP76OV5YAMo0Luv2dhxre9GoQAt2cqHYyoICMcrmabl/+A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=QnUWoDAB/kyS3l3pYUzDuWJPaQRj2jTQ37y7IB+TjEE=; b=mkFvOeEYgKEy+4Ss5bx3Ds5iA/UwNRT5QsIjtlmC6cwfRpcVxo0Y0/fdMEhaBweuoAue08npj42QpqxlQfKkS4qOFuwzQlu7eRvcfzexnVe6cD5mc9OmWXzLhVaoK06O88LSAr30tzXrhADcR/vMSzM1r7iuBv4Q4+ut/Pzq+/Y0Vs9/GjwxsDRYm40ZyqmHdeQuSOmALYxfXZ9/DttTxG/CCD/bqr5qKYK2B0DHzMUagyb3L/npQH67v+ahujH50/NCIcvdbdl90Po+N8IpC1a54SRnuVGgoLa5Ei86sZdd9zy3fCcyIdLs0V519tv/i6A30vUWVY7pa+2u5aanyg== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 165.204.84.17) smtp.rcpttodomain=redhat.com smtp.mailfrom=amd.com; dmarc=pass (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=QnUWoDAB/kyS3l3pYUzDuWJPaQRj2jTQ37y7IB+TjEE=; b=NaL9cTNxnEwM7zK+O8AeHoJSFC0VIfua0akrywqwxBXGgOSUB78OVXo6JWgbMZncKAkG+UMLtEYSGd9viB9N+QxAtBq28cflrrXxC8fAZIh1neWgTjKyIHmjEUHYaIJcNiE6W1JSa0rAh+uy8CMfIqQIhHW6WvKy+lp5mtYL0R8= Received: from BN6PR16CA0046.namprd16.prod.outlook.com (2603:10b6:405:14::32) by MW4PR12MB5603.namprd12.prod.outlook.com (2603:10b6:303:16a::17) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.18; Tue, 28 Jun 2022 11:41:56 +0000 Received: from 
BN8NAM11FT064.eop-nam11.prod.protection.outlook.com (2603:10b6:405:14:cafe::8b) by BN6PR16CA0046.outlook.office365.com (2603:10b6:405:14::32) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.21 via Frontend Transport; Tue, 28 Jun 2022 11:41:55 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17) smtp.mailfrom=amd.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=amd.com; Received-SPF: Pass (protection.outlook.com: domain of amd.com designates 165.204.84.17 as permitted sender) receiver=protection.outlook.com; client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C Received: from SATLEXMB04.amd.com (165.204.84.17) by BN8NAM11FT064.mail.protection.outlook.com (10.13.176.160) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 11:41:55 +0000 Received: from bhadra.amd.com (10.180.168.240) by SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Tue, 28 Jun 2022 06:41:52 -0500 From: Manali Shukla To: , CC: Subject: [kvm-unit-tests PATCH v5 7/8] x86: nSVM: Correct indentation for svm_tests.c part-1 Date: Tue, 28 Jun 2022 11:38:52 +0000 Message-ID: <20220628113853.392569-8-manali.shukla@amd.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220628113853.392569-1-manali.shukla@amd.com> References: <20220628113853.392569-1-manali.shukla@amd.com> MIME-Version: 1.0 X-Originating-IP: [10.180.168.240] X-ClientProxiedBy: SATLEXMB04.amd.com (10.181.40.145) To SATLEXMB04.amd.com (10.181.40.145) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: 141cfadc-8a56-4c86-d15b-08da58fb33f7 X-MS-TrafficTypeDiagnostic: MW4PR12MB5603:EE_ X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: 39M45anaf7gorplOsLzfoJO1faNIl166HVjtsFuvsVQpNwggkWgXrfRT6Zi/Lua95oTFdtinhZo6i8zxSe/1B4GeriRVY/BMHcjzE0qYoVk4WDEJ4Dvy0apuS41DRDFMl5Z5A00IebNo0aFXJpVxzW9Pwv/Ff5THwU638ES3WKWGKDU1dW2tCynN1TE9jelh9V/lDf5dPih7IYj2zUkvAPe9IwFZU8ZschNOoDIHcn1/em74WrSu9h+Nf0nhKkb521MI7LhL8FjK+ms2MAEJVrUy3MsB4oOKzAfPiwriIJ6U7mvlJIJUGNEqav7ko/MURibCNTJCnHhpffJbeiDAhCuY3ONhxXnY2VNCj7BDXMmmAEneQh4sTJnc60VHZbmYyex2xx3IfrtH3xiCD90IU3G+ULPI/Q+wPdmnYB495HNa9ZaB8oCtdFcNx0aGo5haGWR8ndxaAqqUnq6p+wUfa4/8z0zKeFRX9wVMMOiMqIgde4JXfWdwqoXYTthGBdz1ngA9XmVkHJYNbEwvMI/kca2lG1okkyiV0fXBWVcfS2ayMUU3CxwRg93U3N4CSVSiGzaYhnghoHRHOh3qSAhdUfyF/mAkvbH9/SrJwmqECRwZ9a6UTT6As4+4hzpqh+z47OC6m+xAM7euEff/SyeGzNZNDwK/1Ydd8jKxXpOdbWvHWVr80GNVNQ9o93utw5ppdPHor+2Wnyxn6121xlc2jvTAwEWocGcvU33LqD6WnBn4fqWnnkHMVIu0hFgBXHFheKgpA9sdsmxjBLmzwxUODIHXPQTnk0mF0ygWoEeGVJPNMTY6DduCoKcJ915BNV0wjVKQgy5R8VGcBHcCaZSi6aBpJFGLREwGtls8O6cAQ3A= X-Forefront-Antispam-Report: CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230016)(4636009)(396003)(39860400002)(136003)(376002)(346002)(40470700004)(46966006)(36840700001)(36860700001)(41300700001)(7696005)(2616005)(2906002)(1076003)(82310400005)(16526019)(186003)(426003)(336012)(47076005)(40480700001)(8676002)(83380400001)(26005)(30864003)(86362001)(4326008)(81166007)(356005)(966005)(82740400003)(36756003)(8936002)(70586007)(70206006)(110136005)(316002)(40460700003)(5660300002)(478600001)(44832011)(36900700001)(579004)(559001);DIR:OUT;SFP:1101; X-OriginatorOrg: amd.com 
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 11:41:55.7636 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 141cfadc-8a56-4c86-d15b-08da58fb33f7 X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT064.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB5603 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Fixed indentation errors in svm_tests.c. No functional changes intended. Signed-off-by: Manali Shukla --- x86/svm_tests.c | 2174 +++++++++++++++++++++++------------------------ 1 file changed, 1087 insertions(+), 1087 deletions(-) diff --git a/x86/svm_tests.c b/x86/svm_tests.c index 1692912..f9e3f36 100644 --- a/x86/svm_tests.c +++ b/x86/svm_tests.c @@ -43,492 +43,492 @@ static void null_test(struct svm_test *test) static bool null_check(struct svm_test *test) { - return vmcb->control.exit_code == SVM_EXIT_VMMCALL; + return vmcb->control.exit_code == SVM_EXIT_VMMCALL; } static void prepare_no_vmrun_int(struct svm_test *test) { - vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN); + vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN); } static bool check_no_vmrun_int(struct svm_test *test) { - return vmcb->control.exit_code == SVM_EXIT_ERR; + return vmcb->control.exit_code == SVM_EXIT_ERR; } static void test_vmrun(struct svm_test *test) { - asm volatile ("vmrun %0" : : "a"(virt_to_phys(vmcb))); + asm volatile ("vmrun %0" : : "a"(virt_to_phys(vmcb))); } static bool check_vmrun(struct svm_test *test) { - return vmcb->control.exit_code == SVM_EXIT_VMRUN; + return vmcb->control.exit_code == SVM_EXIT_VMRUN; } static void prepare_rsm_intercept(struct svm_test *test) { - default_prepare(test); - vmcb->control.intercept |= 1 << INTERCEPT_RSM; - vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR); + default_prepare(test); + vmcb->control.intercept |= 1 << INTERCEPT_RSM; + vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR); } static void test_rsm_intercept(struct svm_test *test) { - asm volatile ("rsm" : : : "memory"); + asm volatile ("rsm" : : : "memory"); } static bool check_rsm_intercept(struct svm_test *test) { - return get_test_stage(test) == 2; + return get_test_stage(test) == 2; } static bool finished_rsm_intercept(struct svm_test *test) { - switch (get_test_stage(test)) { - case 0: - if (vmcb->control.exit_code != SVM_EXIT_RSM) { - report_fail("VMEXIT not due to rsm. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - vmcb->control.intercept &= ~(1 << INTERCEPT_RSM); - inc_test_stage(test); - break; + switch (get_test_stage(test)) { + case 0: + if (vmcb->control.exit_code != SVM_EXIT_RSM) { + report_fail("VMEXIT not due to rsm. Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + vmcb->control.intercept &= ~(1 << INTERCEPT_RSM); + inc_test_stage(test); + break; - case 1: - if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + UD_VECTOR) { - report_fail("VMEXIT not due to #UD. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - vmcb->save.rip += 2; - inc_test_stage(test); - break; + case 1: + if (vmcb->control.exit_code != SVM_EXIT_EXCP_BASE + UD_VECTOR) { + report_fail("VMEXIT not due to #UD. 
Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + vmcb->save.rip += 2; + inc_test_stage(test); + break; - default: - return true; - } - return get_test_stage(test) == 2; + default: + return true; + } + return get_test_stage(test) == 2; } static void prepare_cr3_intercept(struct svm_test *test) { - default_prepare(test); - vmcb->control.intercept_cr_read |= 1 << 3; + default_prepare(test); + vmcb->control.intercept_cr_read |= 1 << 3; } static void test_cr3_intercept(struct svm_test *test) { - asm volatile ("mov %%cr3, %0" : "=r"(test->scratch) : : "memory"); + asm volatile ("mov %%cr3, %0" : "=r"(test->scratch) : : "memory"); } static bool check_cr3_intercept(struct svm_test *test) { - return vmcb->control.exit_code == SVM_EXIT_READ_CR3; + return vmcb->control.exit_code == SVM_EXIT_READ_CR3; } static bool check_cr3_nointercept(struct svm_test *test) { - return null_check(test) && test->scratch == read_cr3(); + return null_check(test) && test->scratch == read_cr3(); } static void corrupt_cr3_intercept_bypass(void *_test) { - struct svm_test *test = _test; - extern volatile u32 mmio_insn; + struct svm_test *test = _test; + extern volatile u32 mmio_insn; - while (!__sync_bool_compare_and_swap(&test->scratch, 1, 2)) - pause(); - pause(); - pause(); - pause(); - mmio_insn = 0x90d8200f; // mov %cr3, %rax; nop + while (!__sync_bool_compare_and_swap(&test->scratch, 1, 2)) + pause(); + pause(); + pause(); + pause(); + mmio_insn = 0x90d8200f; // mov %cr3, %rax; nop } static void prepare_cr3_intercept_bypass(struct svm_test *test) { - default_prepare(test); - vmcb->control.intercept_cr_read |= 1 << 3; - on_cpu_async(1, corrupt_cr3_intercept_bypass, test); + default_prepare(test); + vmcb->control.intercept_cr_read |= 1 << 3; + on_cpu_async(1, corrupt_cr3_intercept_bypass, test); } static void test_cr3_intercept_bypass(struct svm_test *test) { - ulong a = 0xa0000; + ulong a = 0xa0000; - test->scratch = 1; - while (test->scratch != 2) - barrier(); + test->scratch = 1; + while (test->scratch != 2) + barrier(); - asm volatile ("mmio_insn: mov %0, (%0); nop" - : "+a"(a) : : "memory"); - test->scratch = a; + asm volatile ("mmio_insn: mov %0, (%0); nop" + : "+a"(a) : : "memory"); + test->scratch = a; } static void prepare_dr_intercept(struct svm_test *test) { - default_prepare(test); - vmcb->control.intercept_dr_read = 0xff; - vmcb->control.intercept_dr_write = 0xff; + default_prepare(test); + vmcb->control.intercept_dr_read = 0xff; + vmcb->control.intercept_dr_write = 0xff; } static void test_dr_intercept(struct svm_test *test) { - unsigned int i, failcnt = 0; + unsigned int i, failcnt = 0; - /* Loop testing debug register reads */ - for (i = 0; i < 8; i++) { + /* Loop testing debug register reads */ + for (i = 0; i < 8; i++) { - switch (i) { - case 0: - asm volatile ("mov %%dr0, %0" : "=r"(test->scratch) : : "memory"); - break; - case 1: - asm volatile ("mov %%dr1, %0" : "=r"(test->scratch) : : "memory"); - break; - case 2: - asm volatile ("mov %%dr2, %0" : "=r"(test->scratch) : : "memory"); - break; - case 3: - asm volatile ("mov %%dr3, %0" : "=r"(test->scratch) : : "memory"); - break; - case 4: - asm volatile ("mov %%dr4, %0" : "=r"(test->scratch) : : "memory"); - break; - case 5: - asm volatile ("mov %%dr5, %0" : "=r"(test->scratch) : : "memory"); - break; - case 6: - asm volatile ("mov %%dr6, %0" : "=r"(test->scratch) : : "memory"); - break; - case 7: - asm volatile ("mov %%dr7, %0" : "=r"(test->scratch) : : "memory"); - break; - } + switch (i) { + case 0: + asm volatile ("mov %%dr0, 
%0" : "=r"(test->scratch) : : "memory"); + break; + case 1: + asm volatile ("mov %%dr1, %0" : "=r"(test->scratch) : : "memory"); + break; + case 2: + asm volatile ("mov %%dr2, %0" : "=r"(test->scratch) : : "memory"); + break; + case 3: + asm volatile ("mov %%dr3, %0" : "=r"(test->scratch) : : "memory"); + break; + case 4: + asm volatile ("mov %%dr4, %0" : "=r"(test->scratch) : : "memory"); + break; + case 5: + asm volatile ("mov %%dr5, %0" : "=r"(test->scratch) : : "memory"); + break; + case 6: + asm volatile ("mov %%dr6, %0" : "=r"(test->scratch) : : "memory"); + break; + case 7: + asm volatile ("mov %%dr7, %0" : "=r"(test->scratch) : : "memory"); + break; + } - if (test->scratch != i) { - report_fail("dr%u read intercept", i); - failcnt++; - } - } + if (test->scratch != i) { + report_fail("dr%u read intercept", i); + failcnt++; + } + } - /* Loop testing debug register writes */ - for (i = 0; i < 8; i++) { + /* Loop testing debug register writes */ + for (i = 0; i < 8; i++) { - switch (i) { - case 0: - asm volatile ("mov %0, %%dr0" : : "r"(test->scratch) : "memory"); - break; - case 1: - asm volatile ("mov %0, %%dr1" : : "r"(test->scratch) : "memory"); - break; - case 2: - asm volatile ("mov %0, %%dr2" : : "r"(test->scratch) : "memory"); - break; - case 3: - asm volatile ("mov %0, %%dr3" : : "r"(test->scratch) : "memory"); - break; - case 4: - asm volatile ("mov %0, %%dr4" : : "r"(test->scratch) : "memory"); - break; - case 5: - asm volatile ("mov %0, %%dr5" : : "r"(test->scratch) : "memory"); - break; - case 6: - asm volatile ("mov %0, %%dr6" : : "r"(test->scratch) : "memory"); - break; - case 7: - asm volatile ("mov %0, %%dr7" : : "r"(test->scratch) : "memory"); - break; - } + switch (i) { + case 0: + asm volatile ("mov %0, %%dr0" : : "r"(test->scratch) : "memory"); + break; + case 1: + asm volatile ("mov %0, %%dr1" : : "r"(test->scratch) : "memory"); + break; + case 2: + asm volatile ("mov %0, %%dr2" : : "r"(test->scratch) : "memory"); + break; + case 3: + asm volatile ("mov %0, %%dr3" : : "r"(test->scratch) : "memory"); + break; + case 4: + asm volatile ("mov %0, %%dr4" : : "r"(test->scratch) : "memory"); + break; + case 5: + asm volatile ("mov %0, %%dr5" : : "r"(test->scratch) : "memory"); + break; + case 6: + asm volatile ("mov %0, %%dr6" : : "r"(test->scratch) : "memory"); + break; + case 7: + asm volatile ("mov %0, %%dr7" : : "r"(test->scratch) : "memory"); + break; + } - if (test->scratch != i) { - report_fail("dr%u write intercept", i); - failcnt++; - } - } + if (test->scratch != i) { + report_fail("dr%u write intercept", i); + failcnt++; + } + } - test->scratch = failcnt; + test->scratch = failcnt; } static bool dr_intercept_finished(struct svm_test *test) { - ulong n = (vmcb->control.exit_code - SVM_EXIT_READ_DR0); + ulong n = (vmcb->control.exit_code - SVM_EXIT_READ_DR0); - /* Only expect DR intercepts */ - if (n > (SVM_EXIT_MAX_DR_INTERCEPT - SVM_EXIT_READ_DR0)) - return true; + /* Only expect DR intercepts */ + if (n > (SVM_EXIT_MAX_DR_INTERCEPT - SVM_EXIT_READ_DR0)) + return true; - /* - * Compute debug register number. - * Per Appendix C "SVM Intercept Exit Codes" of AMD64 Architecture - * Programmer's Manual Volume 2 - System Programming: - * http://support.amd.com/TechDocs/24593.pdf - * there are 16 VMEXIT codes each for DR read and write. - */ - test->scratch = (n % 16); + /* + * Compute debug register number. 
+ * Per Appendix C "SVM Intercept Exit Codes" of AMD64 Architecture + * Programmer's Manual Volume 2 - System Programming: + * http://support.amd.com/TechDocs/24593.pdf + * there are 16 VMEXIT codes each for DR read and write. + */ + test->scratch = (n % 16); - /* Jump over MOV instruction */ - vmcb->save.rip += 3; + /* Jump over MOV instruction */ + vmcb->save.rip += 3; - return false; + return false; } static bool check_dr_intercept(struct svm_test *test) { - return !test->scratch; + return !test->scratch; } static bool next_rip_supported(void) { - return this_cpu_has(X86_FEATURE_NRIPS); + return this_cpu_has(X86_FEATURE_NRIPS); } static void prepare_next_rip(struct svm_test *test) { - vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC); + vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC); } static void test_next_rip(struct svm_test *test) { - asm volatile ("rdtsc\n\t" - ".globl exp_next_rip\n\t" - "exp_next_rip:\n\t" ::: "eax", "edx"); + asm volatile ("rdtsc\n\t" + ".globl exp_next_rip\n\t" + "exp_next_rip:\n\t" ::: "eax", "edx"); } static bool check_next_rip(struct svm_test *test) { - extern char exp_next_rip; - unsigned long address = (unsigned long)&exp_next_rip; + extern char exp_next_rip; + unsigned long address = (unsigned long)&exp_next_rip; - return address == vmcb->control.next_rip; + return address == vmcb->control.next_rip; } extern u8 *msr_bitmap; static void prepare_msr_intercept(struct svm_test *test) { - default_prepare(test); - vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT); - vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR); - memset(msr_bitmap, 0xff, MSR_BITMAP_SIZE); + default_prepare(test); + vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT); + vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR); + memset(msr_bitmap, 0xff, MSR_BITMAP_SIZE); } static void test_msr_intercept(struct svm_test *test) { - unsigned long msr_value = 0xef8056791234abcd; /* Arbitrary value */ - unsigned long msr_index; - - for (msr_index = 0; msr_index <= 0xc0011fff; msr_index++) { - if (msr_index == 0xC0010131 /* MSR_SEV_STATUS */) { - /* - * Per section 15.34.10 "SEV_STATUS MSR" of AMD64 Architecture - * Programmer's Manual volume 2 - System Programming: - * http://support.amd.com/TechDocs/24593.pdf - * SEV_STATUS MSR (C001_0131) is a non-interceptable MSR. - */ - continue; - } + unsigned long msr_value = 0xef8056791234abcd; /* Arbitrary value */ + unsigned long msr_index; + + for (msr_index = 0; msr_index <= 0xc0011fff; msr_index++) { + if (msr_index == 0xC0010131 /* MSR_SEV_STATUS */) { + /* + * Per section 15.34.10 "SEV_STATUS MSR" of AMD64 Architecture + * Programmer's Manual volume 2 - System Programming: + * http://support.amd.com/TechDocs/24593.pdf + * SEV_STATUS MSR (C001_0131) is a non-interceptable MSR. 
+ */ + continue; + } - /* Skips gaps between supported MSR ranges */ - if (msr_index == 0x2000) - msr_index = 0xc0000000; - else if (msr_index == 0xc0002000) - msr_index = 0xc0010000; + /* Skips gaps between supported MSR ranges */ + if (msr_index == 0x2000) + msr_index = 0xc0000000; + else if (msr_index == 0xc0002000) + msr_index = 0xc0010000; - test->scratch = -1; + test->scratch = -1; - rdmsr(msr_index); + rdmsr(msr_index); - /* Check that a read intercept occurred for MSR at msr_index */ - if (test->scratch != msr_index) - report_fail("MSR 0x%lx read intercept", msr_index); + /* Check that a read intercept occurred for MSR at msr_index */ + if (test->scratch != msr_index) + report_fail("MSR 0x%lx read intercept", msr_index); - /* - * Poor man approach to generate a value that - * seems arbitrary each time around the loop. - */ - msr_value += (msr_value << 1); + /* + * Poor man approach to generate a value that + * seems arbitrary each time around the loop. + */ + msr_value += (msr_value << 1); - wrmsr(msr_index, msr_value); + wrmsr(msr_index, msr_value); - /* Check that a write intercept occurred for MSR with msr_value */ - if (test->scratch != msr_value) - report_fail("MSR 0x%lx write intercept", msr_index); - } + /* Check that a write intercept occurred for MSR with msr_value */ + if (test->scratch != msr_value) + report_fail("MSR 0x%lx write intercept", msr_index); + } - test->scratch = -2; + test->scratch = -2; } static bool msr_intercept_finished(struct svm_test *test) { - u32 exit_code = vmcb->control.exit_code; - u64 exit_info_1; - u8 *opcode; + u32 exit_code = vmcb->control.exit_code; + u64 exit_info_1; + u8 *opcode; - if (exit_code == SVM_EXIT_MSR) { - exit_info_1 = vmcb->control.exit_info_1; - } else { - /* - * If #GP exception occurs instead, check that it was - * for RDMSR/WRMSR and set exit_info_1 accordingly. - */ + if (exit_code == SVM_EXIT_MSR) { + exit_info_1 = vmcb->control.exit_info_1; + } else { + /* + * If #GP exception occurs instead, check that it was + * for RDMSR/WRMSR and set exit_info_1 accordingly. + */ - if (exit_code != (SVM_EXIT_EXCP_BASE + GP_VECTOR)) - return true; + if (exit_code != (SVM_EXIT_EXCP_BASE + GP_VECTOR)) + return true; - opcode = (u8 *)vmcb->save.rip; - if (opcode[0] != 0x0f) - return true; + opcode = (u8 *)vmcb->save.rip; + if (opcode[0] != 0x0f) + return true; - switch (opcode[1]) { - case 0x30: /* WRMSR */ - exit_info_1 = 1; - break; - case 0x32: /* RDMSR */ - exit_info_1 = 0; - break; - default: - return true; - } + switch (opcode[1]) { + case 0x30: /* WRMSR */ + exit_info_1 = 1; + break; + case 0x32: /* RDMSR */ + exit_info_1 = 0; + break; + default: + return true; + } - /* - * Warn that #GP exception occurred instead. - * RCX holds the MSR index. - */ - printf("%s 0x%lx #GP exception\n", - exit_info_1 ? "WRMSR" : "RDMSR", get_regs().rcx); - } + /* + * Warn that #GP exception occured instead. + * RCX holds the MSR index. + */ + printf("%s 0x%lx #GP exception\n", + exit_info_1 ? "WRMSR" : "RDMSR", get_regs().rcx); + } - /* Jump over RDMSR/WRMSR instruction */ - vmcb->save.rip += 2; - - /* - * Test whether the intercept was for RDMSR/WRMSR. - * For RDMSR, test->scratch is set to the MSR index; - * RCX holds the MSR index. - * For WRMSR, test->scratch is set to the MSR value; - * RDX holds the upper 32 bits of the MSR value, - * while RAX hold its lower 32 bits. 
- */ - if (exit_info_1) - test->scratch = - ((get_regs().rdx << 32) | (vmcb->save.rax & 0xffffffff)); - else - test->scratch = get_regs().rcx; + /* Jump over RDMSR/WRMSR instruction */ + vmcb->save.rip += 2; + + /* + * Test whether the intercept was for RDMSR/WRMSR. + * For RDMSR, test->scratch is set to the MSR index; + * RCX holds the MSR index. + * For WRMSR, test->scratch is set to the MSR value; + * RDX holds the upper 32 bits of the MSR value, + * while RAX hold its lower 32 bits. + */ + if (exit_info_1) + test->scratch = + ((get_regs().rdx << 32) | (vmcb->save.rax & 0xffffffff)); + else + test->scratch = get_regs().rcx; - return false; + return false; } static bool check_msr_intercept(struct svm_test *test) { - memset(msr_bitmap, 0, MSR_BITMAP_SIZE); - return (test->scratch == -2); + memset(msr_bitmap, 0, MSR_BITMAP_SIZE); + return (test->scratch == -2); } static void prepare_mode_switch(struct svm_test *test) { - vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR) - | (1ULL << UD_VECTOR) - | (1ULL << DF_VECTOR) - | (1ULL << PF_VECTOR); - test->scratch = 0; + vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR) + | (1ULL << UD_VECTOR) + | (1ULL << DF_VECTOR) + | (1ULL << PF_VECTOR); + test->scratch = 0; } static void test_mode_switch(struct svm_test *test) { - asm volatile(" cli\n" - " ljmp *1f\n" /* jump to 32-bit code segment */ - "1:\n" - " .long 2f\n" - " .long " xstr(KERNEL_CS32) "\n" - ".code32\n" - "2:\n" - " movl %%cr0, %%eax\n" - " btcl $31, %%eax\n" /* clear PG */ - " movl %%eax, %%cr0\n" - " movl $0xc0000080, %%ecx\n" /* EFER */ - " rdmsr\n" - " btcl $8, %%eax\n" /* clear LME */ - " wrmsr\n" - " movl %%cr4, %%eax\n" - " btcl $5, %%eax\n" /* clear PAE */ - " movl %%eax, %%cr4\n" - " movw %[ds16], %%ax\n" - " movw %%ax, %%ds\n" - " ljmpl %[cs16], $3f\n" /* jump to 16 bit protected-mode */ - ".code16\n" - "3:\n" - " movl %%cr0, %%eax\n" - " btcl $0, %%eax\n" /* clear PE */ - " movl %%eax, %%cr0\n" - " ljmpl $0, $4f\n" /* jump to real-mode */ - "4:\n" - " vmmcall\n" - " movl %%cr0, %%eax\n" - " btsl $0, %%eax\n" /* set PE */ - " movl %%eax, %%cr0\n" - " ljmpl %[cs32], $5f\n" /* back to protected mode */ - ".code32\n" - "5:\n" - " movl %%cr4, %%eax\n" - " btsl $5, %%eax\n" /* set PAE */ - " movl %%eax, %%cr4\n" - " movl $0xc0000080, %%ecx\n" /* EFER */ - " rdmsr\n" - " btsl $8, %%eax\n" /* set LME */ - " wrmsr\n" - " movl %%cr0, %%eax\n" - " btsl $31, %%eax\n" /* set PG */ - " movl %%eax, %%cr0\n" - " ljmpl %[cs64], $6f\n" /* back to long mode */ - ".code64\n\t" - "6:\n" - " vmmcall\n" - :: [cs16] "i"(KERNEL_CS16), [ds16] "i"(KERNEL_DS16), - [cs32] "i"(KERNEL_CS32), [cs64] "i"(KERNEL_CS64) - : "rax", "rbx", "rcx", "rdx", "memory"); + asm volatile(" cli\n" + " ljmp *1f\n" /* jump to 32-bit code segment */ + "1:\n" + " .long 2f\n" + " .long " xstr(KERNEL_CS32) "\n" + ".code32\n" + "2:\n" + " movl %%cr0, %%eax\n" + " btcl $31, %%eax\n" /* clear PG */ + " movl %%eax, %%cr0\n" + " movl $0xc0000080, %%ecx\n" /* EFER */ + " rdmsr\n" + " btcl $8, %%eax\n" /* clear LME */ + " wrmsr\n" + " movl %%cr4, %%eax\n" + " btcl $5, %%eax\n" /* clear PAE */ + " movl %%eax, %%cr4\n" + " movw %[ds16], %%ax\n" + " movw %%ax, %%ds\n" + " ljmpl %[cs16], $3f\n" /* jump to 16 bit protected-mode */ + ".code16\n" + "3:\n" + " movl %%cr0, %%eax\n" + " btcl $0, %%eax\n" /* clear PE */ + " movl %%eax, %%cr0\n" + " ljmpl $0, $4f\n" /* jump to real-mode */ + "4:\n" + " vmmcall\n" + " movl %%cr0, %%eax\n" + " btsl $0, %%eax\n" /* set PE */ + " movl %%eax, %%cr0\n" + " ljmpl %[cs32], $5f\n" /* back to 
protected mode */ + ".code32\n" + "5:\n" + " movl %%cr4, %%eax\n" + " btsl $5, %%eax\n" /* set PAE */ + " movl %%eax, %%cr4\n" + " movl $0xc0000080, %%ecx\n" /* EFER */ + " rdmsr\n" + " btsl $8, %%eax\n" /* set LME */ + " wrmsr\n" + " movl %%cr0, %%eax\n" + " btsl $31, %%eax\n" /* set PG */ + " movl %%eax, %%cr0\n" + " ljmpl %[cs64], $6f\n" /* back to long mode */ + ".code64\n\t" + "6:\n" + " vmmcall\n" + :: [cs16] "i"(KERNEL_CS16), [ds16] "i"(KERNEL_DS16), + [cs32] "i"(KERNEL_CS32), [cs64] "i"(KERNEL_CS64) + : "rax", "rbx", "rcx", "rdx", "memory"); } static bool mode_switch_finished(struct svm_test *test) { - u64 cr0, cr4, efer; + u64 cr0, cr4, efer; - cr0 = vmcb->save.cr0; - cr4 = vmcb->save.cr4; - efer = vmcb->save.efer; + cr0 = vmcb->save.cr0; + cr4 = vmcb->save.cr4; + efer = vmcb->save.efer; - /* Only expect VMMCALL intercepts */ - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) - return true; + /* Only expect VMMCALL intercepts */ + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) + return true; - /* Jump over VMMCALL instruction */ - vmcb->save.rip += 3; + /* Jump over VMMCALL instruction */ + vmcb->save.rip += 3; - /* Do sanity checks */ - switch (test->scratch) { - case 0: - /* Test should be in real mode now - check for this */ - if ((cr0 & 0x80000001) || /* CR0.PG, CR0.PE */ - (cr4 & 0x00000020) || /* CR4.PAE */ - (efer & 0x00000500)) /* EFER.LMA, EFER.LME */ - return true; - break; - case 2: - /* Test should be back in long-mode now - check for this */ - if (((cr0 & 0x80000001) != 0x80000001) || /* CR0.PG, CR0.PE */ - ((cr4 & 0x00000020) != 0x00000020) || /* CR4.PAE */ - ((efer & 0x00000500) != 0x00000500)) /* EFER.LMA, EFER.LME */ - return true; - break; - } + /* Do sanity checks */ + switch (test->scratch) { + case 0: + /* Test should be in real mode now - check for this */ + if ((cr0 & 0x80000001) || /* CR0.PG, CR0.PE */ + (cr4 & 0x00000020) || /* CR4.PAE */ + (efer & 0x00000500)) /* EFER.LMA, EFER.LME */ + return true; + break; + case 2: + /* Test should be back in long-mode now - check for this */ + if (((cr0 & 0x80000001) != 0x80000001) || /* CR0.PG, CR0.PE */ + ((cr4 & 0x00000020) != 0x00000020) || /* CR4.PAE */ + ((efer & 0x00000500) != 0x00000500)) /* EFER.LMA, EFER.LME */ + return true; + break; + } - /* one step forward */ - test->scratch += 1; + /* one step forward */ + test->scratch += 1; - return test->scratch == 2; + return test->scratch == 2; } static bool check_mode_switch(struct svm_test *test) @@ -540,132 +540,132 @@ extern u8 *io_bitmap; static void prepare_ioio(struct svm_test *test) { - vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT); - test->scratch = 0; - memset(io_bitmap, 0, 8192); - io_bitmap[8192] = 0xFF; + vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT); + test->scratch = 0; + memset(io_bitmap, 0, 8192); + io_bitmap[8192] = 0xFF; } static void test_ioio(struct svm_test *test) { - // stage 0, test IO pass - inb(0x5000); - outb(0x0, 0x5000); - if (get_test_stage(test) != 0) - goto fail; - - // test IO width, in/out - io_bitmap[0] = 0xFF; - inc_test_stage(test); - inb(0x0); - if (get_test_stage(test) != 2) - goto fail; - - outw(0x0, 0x0); - if (get_test_stage(test) != 3) - goto fail; - - inl(0x0); - if (get_test_stage(test) != 4) - goto fail; - - // test low/high IO port - io_bitmap[0x5000 / 8] = (1 << (0x5000 % 8)); - inb(0x5000); - if (get_test_stage(test) != 5) - goto fail; - - io_bitmap[0x9000 / 8] = (1 << (0x9000 % 8)); - inw(0x9000); - if (get_test_stage(test) != 6) - goto fail; - - // test partial pass - io_bitmap[0x5000 / 8] = 
(1 << (0x5000 % 8)); - inl(0x4FFF); - if (get_test_stage(test) != 7) - goto fail; - - // test across pages - inc_test_stage(test); - inl(0x7FFF); - if (get_test_stage(test) != 8) - goto fail; - - inc_test_stage(test); - io_bitmap[0x8000 / 8] = 1 << (0x8000 % 8); - inl(0x7FFF); - if (get_test_stage(test) != 10) - goto fail; - - io_bitmap[0] = 0; - inl(0xFFFF); - if (get_test_stage(test) != 11) - goto fail; - - io_bitmap[0] = 0xFF; - io_bitmap[8192] = 0; - inl(0xFFFF); - inc_test_stage(test); - if (get_test_stage(test) != 12) - goto fail; + // stage 0, test IO pass + inb(0x5000); + outb(0x0, 0x5000); + if (get_test_stage(test) != 0) + goto fail; - return; + // test IO width, in/out + io_bitmap[0] = 0xFF; + inc_test_stage(test); + inb(0x0); + if (get_test_stage(test) != 2) + goto fail; + + outw(0x0, 0x0); + if (get_test_stage(test) != 3) + goto fail; + + inl(0x0); + if (get_test_stage(test) != 4) + goto fail; + + // test low/high IO port + io_bitmap[0x5000 / 8] = (1 << (0x5000 % 8)); + inb(0x5000); + if (get_test_stage(test) != 5) + goto fail; + + io_bitmap[0x9000 / 8] = (1 << (0x9000 % 8)); + inw(0x9000); + if (get_test_stage(test) != 6) + goto fail; + + // test partial pass + io_bitmap[0x5000 / 8] = (1 << (0x5000 % 8)); + inl(0x4FFF); + if (get_test_stage(test) != 7) + goto fail; + + // test across pages + inc_test_stage(test); + inl(0x7FFF); + if (get_test_stage(test) != 8) + goto fail; + + inc_test_stage(test); + io_bitmap[0x8000 / 8] = 1 << (0x8000 % 8); + inl(0x7FFF); + if (get_test_stage(test) != 10) + goto fail; + + io_bitmap[0] = 0; + inl(0xFFFF); + if (get_test_stage(test) != 11) + goto fail; + + io_bitmap[0] = 0xFF; + io_bitmap[8192] = 0; + inl(0xFFFF); + inc_test_stage(test); + if (get_test_stage(test) != 12) + goto fail; + + return; fail: - report_fail("stage %d", get_test_stage(test)); - test->scratch = -1; + report_fail("stage %d", get_test_stage(test)); + test->scratch = -1; } static bool ioio_finished(struct svm_test *test) { - unsigned port, size; + unsigned port, size; - /* Only expect IOIO intercepts */ - if (vmcb->control.exit_code == SVM_EXIT_VMMCALL) - return true; + /* Only expect IOIO intercepts */ + if (vmcb->control.exit_code == SVM_EXIT_VMMCALL) + return true; - if (vmcb->control.exit_code != SVM_EXIT_IOIO) - return true; + if (vmcb->control.exit_code != SVM_EXIT_IOIO) + return true; - /* one step forward */ - test->scratch += 1; + /* one step forward */ + test->scratch += 1; - port = vmcb->control.exit_info_1 >> 16; - size = (vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7; + port = vmcb->control.exit_info_1 >> 16; + size = (vmcb->control.exit_info_1 >> SVM_IOIO_SIZE_SHIFT) & 7; - while (size--) { - io_bitmap[port / 8] &= ~(1 << (port & 7)); - port++; - } + while (size--) { + io_bitmap[port / 8] &= ~(1 << (port & 7)); + port++; + } - return false; + return false; } static bool check_ioio(struct svm_test *test) { - memset(io_bitmap, 0, 8193); - return test->scratch != -1; + memset(io_bitmap, 0, 8193); + return test->scratch != -1; } static void prepare_asid_zero(struct svm_test *test) { - vmcb->control.asid = 0; + vmcb->control.asid = 0; } static void test_asid_zero(struct svm_test *test) { - asm volatile ("vmmcall\n\t"); + asm volatile ("vmmcall\n\t"); } static bool check_asid_zero(struct svm_test *test) { - return vmcb->control.exit_code == SVM_EXIT_ERR; + return vmcb->control.exit_code == SVM_EXIT_ERR; } static void sel_cr0_bug_prepare(struct svm_test *test) { - vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0); + vmcb->control.intercept |= 
(1ULL << INTERCEPT_SELECTIVE_CR0); } static bool sel_cr0_bug_finished(struct svm_test *test) @@ -675,25 +675,25 @@ static bool sel_cr0_bug_finished(struct svm_test *test) static void sel_cr0_bug_test(struct svm_test *test) { - unsigned long cr0; + unsigned long cr0; - /* read cr0, clear CD, and write back */ - cr0 = read_cr0(); - cr0 |= (1UL << 30); - write_cr0(cr0); + /* read cr0, clear CD, and write back */ + cr0 = read_cr0(); + cr0 |= (1UL << 30); + write_cr0(cr0); - /* - * If we are here the test failed, not sure what to do now because we - * are not in guest-mode anymore so we can't trigger an intercept. - * Trigger a tripple-fault for now. - */ - report_fail("sel_cr0 test. Can not recover from this - exiting"); - exit(report_summary()); + /* + * If we are here the test failed, not sure what to do now because we + * are not in guest-mode anymore so we can't trigger an intercept. + * Trigger a tripple-fault for now. + */ + report_fail("sel_cr0 test. Can not recover from this - exiting"); + exit(report_summary()); } static bool sel_cr0_bug_check(struct svm_test *test) { - return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE; + return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE; } #define TSC_ADJUST_VALUE (1ll << 32) @@ -702,43 +702,43 @@ static bool ok; static bool tsc_adjust_supported(void) { - return this_cpu_has(X86_FEATURE_TSC_ADJUST); + return this_cpu_has(X86_FEATURE_TSC_ADJUST); } static void tsc_adjust_prepare(struct svm_test *test) { - default_prepare(test); - vmcb->control.tsc_offset = TSC_OFFSET_VALUE; + default_prepare(test); + vmcb->control.tsc_offset = TSC_OFFSET_VALUE; - wrmsr(MSR_IA32_TSC_ADJUST, -TSC_ADJUST_VALUE); - int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST); - ok = adjust == -TSC_ADJUST_VALUE; + wrmsr(MSR_IA32_TSC_ADJUST, -TSC_ADJUST_VALUE); + int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST); + ok = adjust == -TSC_ADJUST_VALUE; } static void tsc_adjust_test(struct svm_test *test) { - int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST); - ok &= adjust == -TSC_ADJUST_VALUE; + int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST); + ok &= adjust == -TSC_ADJUST_VALUE; - uint64_t l1_tsc = rdtsc() - TSC_OFFSET_VALUE; - wrmsr(MSR_IA32_TSC, l1_tsc - TSC_ADJUST_VALUE); + uint64_t l1_tsc = rdtsc() - TSC_OFFSET_VALUE; + wrmsr(MSR_IA32_TSC, l1_tsc - TSC_ADJUST_VALUE); - adjust = rdmsr(MSR_IA32_TSC_ADJUST); - ok &= adjust <= -2 * TSC_ADJUST_VALUE; + adjust = rdmsr(MSR_IA32_TSC_ADJUST); + ok &= adjust <= -2 * TSC_ADJUST_VALUE; - uint64_t l1_tsc_end = rdtsc() - TSC_OFFSET_VALUE; - ok &= (l1_tsc_end + TSC_ADJUST_VALUE - l1_tsc) < TSC_ADJUST_VALUE; + uint64_t l1_tsc_end = rdtsc() - TSC_OFFSET_VALUE; + ok &= (l1_tsc_end + TSC_ADJUST_VALUE - l1_tsc) < TSC_ADJUST_VALUE; - uint64_t l1_tsc_msr = rdmsr(MSR_IA32_TSC) - TSC_OFFSET_VALUE; - ok &= (l1_tsc_msr + TSC_ADJUST_VALUE - l1_tsc) < TSC_ADJUST_VALUE; + uint64_t l1_tsc_msr = rdmsr(MSR_IA32_TSC) - TSC_OFFSET_VALUE; + ok &= (l1_tsc_msr + TSC_ADJUST_VALUE - l1_tsc) < TSC_ADJUST_VALUE; } static bool tsc_adjust_check(struct svm_test *test) { - int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST); + int64_t adjust = rdmsr(MSR_IA32_TSC_ADJUST); - wrmsr(MSR_IA32_TSC_ADJUST, 0); - return ok && adjust <= -2 * TSC_ADJUST_VALUE; + wrmsr(MSR_IA32_TSC_ADJUST, 0); + return ok && adjust <= -2 * TSC_ADJUST_VALUE; } @@ -749,203 +749,203 @@ static u64 guest_tsc_delay_value; static void svm_tsc_scale_guest(struct svm_test *test) { - u64 start_tsc = rdtsc(); + u64 start_tsc = rdtsc(); - while (rdtsc() - start_tsc < guest_tsc_delay_value) - cpu_relax(); + while (rdtsc() - 
start_tsc < guest_tsc_delay_value) + cpu_relax(); } static void svm_tsc_scale_run_testcase(u64 duration, - double tsc_scale, u64 tsc_offset) + double tsc_scale, u64 tsc_offset) { - u64 start_tsc, actual_duration; + u64 start_tsc, actual_duration; - guest_tsc_delay_value = (duration << TSC_SHIFT) * tsc_scale; + guest_tsc_delay_value = (duration << TSC_SHIFT) * tsc_scale; - test_set_guest(svm_tsc_scale_guest); - vmcb->control.tsc_offset = tsc_offset; - wrmsr(MSR_AMD64_TSC_RATIO, (u64)(tsc_scale * (1ULL << 32))); + test_set_guest(svm_tsc_scale_guest); + vmcb->control.tsc_offset = tsc_offset; + wrmsr(MSR_AMD64_TSC_RATIO, (u64)(tsc_scale * (1ULL << 32))); - start_tsc = rdtsc(); + start_tsc = rdtsc(); - if (svm_vmrun() != SVM_EXIT_VMMCALL) - report_fail("unexpected vm exit code 0x%x", vmcb->control.exit_code); + if (svm_vmrun() != SVM_EXIT_VMMCALL) + report_fail("unexpected vm exit code 0x%x", vmcb->control.exit_code); - actual_duration = (rdtsc() - start_tsc) >> TSC_SHIFT; + actual_duration = (rdtsc() - start_tsc) >> TSC_SHIFT; - report(duration == actual_duration, "tsc delay (expected: %lu, actual: %lu)", - duration, actual_duration); + report(duration == actual_duration, "tsc delay (expected: %lu, actual: %lu)", + duration, actual_duration); } static void svm_tsc_scale_test(void) { - int i; + int i; - if (!tsc_scale_supported()) { - report_skip("TSC scale not supported in the guest"); - return; - } + if (!tsc_scale_supported()) { + report_skip("TSC scale not supported in the guest"); + return; + } - report(rdmsr(MSR_AMD64_TSC_RATIO) == TSC_RATIO_DEFAULT, - "initial TSC scale ratio"); + report(rdmsr(MSR_AMD64_TSC_RATIO) == TSC_RATIO_DEFAULT, + "initial TSC scale ratio"); - for (i = 0 ; i < TSC_SCALE_ITERATIONS; i++) { + for (i = 0 ; i < TSC_SCALE_ITERATIONS; i++) { - double tsc_scale = (double)(rdrand() % 100 + 1) / 10; - int duration = rdrand() % 50 + 1; - u64 tsc_offset = rdrand(); + double tsc_scale = (double)(rdrand() % 100 + 1) / 10; + int duration = rdrand() % 50 + 1; + u64 tsc_offset = rdrand(); - report_info("duration=%d, tsc_scale=%d, tsc_offset=%ld", - duration, (int)(tsc_scale * 100), tsc_offset); + report_info("duration=%d, tsc_scale=%d, tsc_offset=%ld", + duration, (int)(tsc_scale * 100), tsc_offset); - svm_tsc_scale_run_testcase(duration, tsc_scale, tsc_offset); - } + svm_tsc_scale_run_testcase(duration, tsc_scale, tsc_offset); + } - svm_tsc_scale_run_testcase(50, 255, rdrand()); - svm_tsc_scale_run_testcase(50, 0.0001, rdrand()); + svm_tsc_scale_run_testcase(50, 255, rdrand()); + svm_tsc_scale_run_testcase(50, 0.0001, rdrand()); } static void latency_prepare(struct svm_test *test) { - default_prepare(test); - runs = LATENCY_RUNS; - latvmrun_min = latvmexit_min = -1ULL; - latvmrun_max = latvmexit_max = 0; - vmrun_sum = vmexit_sum = 0; - tsc_start = rdtsc(); + default_prepare(test); + runs = LATENCY_RUNS; + latvmrun_min = latvmexit_min = -1ULL; + latvmrun_max = latvmexit_max = 0; + vmrun_sum = vmexit_sum = 0; + tsc_start = rdtsc(); } static void latency_test(struct svm_test *test) { - u64 cycles; + u64 cycles; start: - tsc_end = rdtsc(); + tsc_end = rdtsc(); - cycles = tsc_end - tsc_start; + cycles = tsc_end - tsc_start; - if (cycles > latvmrun_max) - latvmrun_max = cycles; + if (cycles > latvmrun_max) + latvmrun_max = cycles; - if (cycles < latvmrun_min) - latvmrun_min = cycles; + if (cycles < latvmrun_min) + latvmrun_min = cycles; - vmrun_sum += cycles; + vmrun_sum += cycles; - tsc_start = rdtsc(); + tsc_start = rdtsc(); - asm volatile ("vmmcall" : : : "memory"); - goto start; + 
asm volatile ("vmmcall" : : : "memory"); + goto start; } static bool latency_finished(struct svm_test *test) { - u64 cycles; + u64 cycles; - tsc_end = rdtsc(); + tsc_end = rdtsc(); - cycles = tsc_end - tsc_start; + cycles = tsc_end - tsc_start; - if (cycles > latvmexit_max) - latvmexit_max = cycles; + if (cycles > latvmexit_max) + latvmexit_max = cycles; - if (cycles < latvmexit_min) - latvmexit_min = cycles; + if (cycles < latvmexit_min) + latvmexit_min = cycles; - vmexit_sum += cycles; + vmexit_sum += cycles; - vmcb->save.rip += 3; + vmcb->save.rip += 3; - runs -= 1; + runs -= 1; - tsc_end = rdtsc(); + tsc_end = rdtsc(); - return runs == 0; + return runs == 0; } static bool latency_finished_clean(struct svm_test *test) { - vmcb->control.clean = VMCB_CLEAN_ALL; - return latency_finished(test); + vmcb->control.clean = VMCB_CLEAN_ALL; + return latency_finished(test); } static bool latency_check(struct svm_test *test) { - printf(" Latency VMRUN : max: %ld min: %ld avg: %ld\n", latvmrun_max, - latvmrun_min, vmrun_sum / LATENCY_RUNS); - printf(" Latency VMEXIT: max: %ld min: %ld avg: %ld\n", latvmexit_max, - latvmexit_min, vmexit_sum / LATENCY_RUNS); - return true; + printf(" Latency VMRUN : max: %ld min: %ld avg: %ld\n", latvmrun_max, + latvmrun_min, vmrun_sum / LATENCY_RUNS); + printf(" Latency VMEXIT: max: %ld min: %ld avg: %ld\n", latvmexit_max, + latvmexit_min, vmexit_sum / LATENCY_RUNS); + return true; } static void lat_svm_insn_prepare(struct svm_test *test) { - default_prepare(test); - runs = LATENCY_RUNS; - latvmload_min = latvmsave_min = latstgi_min = latclgi_min = -1ULL; - latvmload_max = latvmsave_max = latstgi_max = latclgi_max = 0; - vmload_sum = vmsave_sum = stgi_sum = clgi_sum; + default_prepare(test); + runs = LATENCY_RUNS; + latvmload_min = latvmsave_min = latstgi_min = latclgi_min = -1ULL; + latvmload_max = latvmsave_max = latstgi_max = latclgi_max = 0; + vmload_sum = vmsave_sum = stgi_sum = clgi_sum; } static bool lat_svm_insn_finished(struct svm_test *test) { - u64 vmcb_phys = virt_to_phys(vmcb); - u64 cycles; - - for ( ; runs != 0; runs--) { - tsc_start = rdtsc(); - asm volatile("vmload %0\n\t" : : "a"(vmcb_phys) : "memory"); - cycles = rdtsc() - tsc_start; - if (cycles > latvmload_max) - latvmload_max = cycles; - if (cycles < latvmload_min) - latvmload_min = cycles; - vmload_sum += cycles; - - tsc_start = rdtsc(); - asm volatile("vmsave %0\n\t" : : "a"(vmcb_phys) : "memory"); - cycles = rdtsc() - tsc_start; - if (cycles > latvmsave_max) - latvmsave_max = cycles; - if (cycles < latvmsave_min) - latvmsave_min = cycles; - vmsave_sum += cycles; - - tsc_start = rdtsc(); - asm volatile("stgi\n\t"); - cycles = rdtsc() - tsc_start; - if (cycles > latstgi_max) - latstgi_max = cycles; - if (cycles < latstgi_min) - latstgi_min = cycles; - stgi_sum += cycles; - - tsc_start = rdtsc(); - asm volatile("clgi\n\t"); - cycles = rdtsc() - tsc_start; - if (cycles > latclgi_max) - latclgi_max = cycles; - if (cycles < latclgi_min) - latclgi_min = cycles; - clgi_sum += cycles; - } + u64 vmcb_phys = virt_to_phys(vmcb); + u64 cycles; + + for ( ; runs != 0; runs--) { + tsc_start = rdtsc(); + asm volatile("vmload %0\n\t" : : "a"(vmcb_phys) : "memory"); + cycles = rdtsc() - tsc_start; + if (cycles > latvmload_max) + latvmload_max = cycles; + if (cycles < latvmload_min) + latvmload_min = cycles; + vmload_sum += cycles; + + tsc_start = rdtsc(); + asm volatile("vmsave %0\n\t" : : "a"(vmcb_phys) : "memory"); + cycles = rdtsc() - tsc_start; + if (cycles > latvmsave_max) + latvmsave_max = cycles; + if 
(cycles < latvmsave_min) + latvmsave_min = cycles; + vmsave_sum += cycles; + + tsc_start = rdtsc(); + asm volatile("stgi\n\t"); + cycles = rdtsc() - tsc_start; + if (cycles > latstgi_max) + latstgi_max = cycles; + if (cycles < latstgi_min) + latstgi_min = cycles; + stgi_sum += cycles; + + tsc_start = rdtsc(); + asm volatile("clgi\n\t"); + cycles = rdtsc() - tsc_start; + if (cycles > latclgi_max) + latclgi_max = cycles; + if (cycles < latclgi_min) + latclgi_min = cycles; + clgi_sum += cycles; + } - tsc_end = rdtsc(); + tsc_end = rdtsc(); - return true; + return true; } static bool lat_svm_insn_check(struct svm_test *test) { - printf(" Latency VMLOAD: max: %ld min: %ld avg: %ld\n", latvmload_max, - latvmload_min, vmload_sum / LATENCY_RUNS); - printf(" Latency VMSAVE: max: %ld min: %ld avg: %ld\n", latvmsave_max, - latvmsave_min, vmsave_sum / LATENCY_RUNS); - printf(" Latency STGI: max: %ld min: %ld avg: %ld\n", latstgi_max, - latstgi_min, stgi_sum / LATENCY_RUNS); - printf(" Latency CLGI: max: %ld min: %ld avg: %ld\n", latclgi_max, - latclgi_min, clgi_sum / LATENCY_RUNS); - return true; + printf(" Latency VMLOAD: max: %ld min: %ld avg: %ld\n", latvmload_max, + latvmload_min, vmload_sum / LATENCY_RUNS); + printf(" Latency VMSAVE: max: %ld min: %ld avg: %ld\n", latvmsave_max, + latvmsave_min, vmsave_sum / LATENCY_RUNS); + printf(" Latency STGI: max: %ld min: %ld avg: %ld\n", latstgi_max, + latstgi_min, stgi_sum / LATENCY_RUNS); + printf(" Latency CLGI: max: %ld min: %ld avg: %ld\n", latclgi_max, + latclgi_min, clgi_sum / LATENCY_RUNS); + return true; } bool pending_event_ipi_fired; @@ -953,182 +953,182 @@ bool pending_event_guest_run; static void pending_event_ipi_isr(isr_regs_t *regs) { - pending_event_ipi_fired = true; - eoi(); + pending_event_ipi_fired = true; + eoi(); } static void pending_event_prepare(struct svm_test *test) { - int ipi_vector = 0xf1; + int ipi_vector = 0xf1; - default_prepare(test); + default_prepare(test); - pending_event_ipi_fired = false; + pending_event_ipi_fired = false; - handle_irq(ipi_vector, pending_event_ipi_isr); + handle_irq(ipi_vector, pending_event_ipi_isr); - pending_event_guest_run = false; + pending_event_guest_run = false; - vmcb->control.intercept |= (1ULL << INTERCEPT_INTR); - vmcb->control.int_ctl |= V_INTR_MASKING_MASK; + vmcb->control.intercept |= (1ULL << INTERCEPT_INTR); + vmcb->control.int_ctl |= V_INTR_MASKING_MASK; - apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | - APIC_DM_FIXED | ipi_vector, 0); + apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | + APIC_DM_FIXED | ipi_vector, 0); - set_test_stage(test, 0); + set_test_stage(test, 0); } static void pending_event_test(struct svm_test *test) { - pending_event_guest_run = true; + pending_event_guest_run = true; } static bool pending_event_finished(struct svm_test *test) { - switch (get_test_stage(test)) { - case 0: - if (vmcb->control.exit_code != SVM_EXIT_INTR) { - report_fail("VMEXIT not due to pending interrupt. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } + switch (get_test_stage(test)) { + case 0: + if (vmcb->control.exit_code != SVM_EXIT_INTR) { + report_fail("VMEXIT not due to pending interrupt. 
Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } - vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR); - vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; + vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR); + vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; - if (pending_event_guest_run) { - report_fail("Guest ran before host received IPI\n"); - return true; - } + if (pending_event_guest_run) { + report_fail("Guest ran before host received IPI\n"); + return true; + } - irq_enable(); - asm volatile ("nop"); - irq_disable(); + irq_enable(); + asm volatile ("nop"); + irq_disable(); - if (!pending_event_ipi_fired) { - report_fail("Pending interrupt not dispatched after IRQ enabled\n"); - return true; - } - break; + if (!pending_event_ipi_fired) { + report_fail("Pending interrupt not dispatched after IRQ enabled\n"); + return true; + } + break; - case 1: - if (!pending_event_guest_run) { - report_fail("Guest did not resume when no interrupt\n"); - return true; - } - break; - } + case 1: + if (!pending_event_guest_run) { + report_fail("Guest did not resume when no interrupt\n"); + return true; + } + break; + } - inc_test_stage(test); + inc_test_stage(test); - return get_test_stage(test) == 2; + return get_test_stage(test) == 2; } static bool pending_event_check(struct svm_test *test) { - return get_test_stage(test) == 2; + return get_test_stage(test) == 2; } static void pending_event_cli_prepare(struct svm_test *test) { - default_prepare(test); + default_prepare(test); - pending_event_ipi_fired = false; + pending_event_ipi_fired = false; - handle_irq(0xf1, pending_event_ipi_isr); + handle_irq(0xf1, pending_event_ipi_isr); - apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | - APIC_DM_FIXED | 0xf1, 0); + apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | + APIC_DM_FIXED | 0xf1, 0); - set_test_stage(test, 0); + set_test_stage(test, 0); } static void pending_event_cli_prepare_gif_clear(struct svm_test *test) { - asm("cli"); + asm("cli"); } static void pending_event_cli_test(struct svm_test *test) { - if (pending_event_ipi_fired == true) { - set_test_stage(test, -1); - report_fail("Interrupt preceeded guest"); - vmmcall(); - } + if (pending_event_ipi_fired == true) { + set_test_stage(test, -1); + report_fail("Interrupt preceeded guest"); + vmmcall(); + } - /* VINTR_MASKING is zero. This should cause the IPI to fire. */ - irq_enable(); - asm volatile ("nop"); - irq_disable(); + /* VINTR_MASKING is zero. This should cause the IPI to fire. */ + irq_enable(); + asm volatile ("nop"); + irq_disable(); - if (pending_event_ipi_fired != true) { - set_test_stage(test, -1); - report_fail("Interrupt not triggered by guest"); - } + if (pending_event_ipi_fired != true) { + set_test_stage(test, -1); + report_fail("Interrupt not triggered by guest"); + } - vmmcall(); + vmmcall(); - /* - * Now VINTR_MASKING=1, but no interrupt is pending so - * the VINTR interception should be clear in VMCB02. Check - * that L0 did not leave a stale VINTR in the VMCB. - */ - irq_enable(); - asm volatile ("nop"); - irq_disable(); + /* + * Now VINTR_MASKING=1, but no interrupt is pending so + * the VINTR interception should be clear in VMCB02. Check + * that L0 did not leave a stale VINTR in the VMCB. 
+ */ + irq_enable(); + asm volatile ("nop"); + irq_disable(); } static bool pending_event_cli_finished(struct svm_test *test) { - if ( vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VM_EXIT return to host is not EXIT_VMMCALL exit reason 0x%x", - vmcb->control.exit_code); - return true; - } + if ( vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VM_EXIT return to host is not EXIT_VMMCALL exit reason 0x%x", + vmcb->control.exit_code); + return true; + } - switch (get_test_stage(test)) { - case 0: - vmcb->save.rip += 3; + switch (get_test_stage(test)) { + case 0: + vmcb->save.rip += 3; - pending_event_ipi_fired = false; + pending_event_ipi_fired = false; - vmcb->control.int_ctl |= V_INTR_MASKING_MASK; + vmcb->control.int_ctl |= V_INTR_MASKING_MASK; - /* Now entering again with VINTR_MASKING=1. */ - apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | - APIC_DM_FIXED | 0xf1, 0); + /* Now entering again with VINTR_MASKING=1. */ + apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | + APIC_DM_FIXED | 0xf1, 0); - break; + break; - case 1: - if (pending_event_ipi_fired == true) { - report_fail("Interrupt triggered by guest"); - return true; - } + case 1: + if (pending_event_ipi_fired == true) { + report_fail("Interrupt triggered by guest"); + return true; + } - irq_enable(); - asm volatile ("nop"); - irq_disable(); + irq_enable(); + asm volatile ("nop"); + irq_disable(); - if (pending_event_ipi_fired != true) { - report_fail("Interrupt not triggered by host"); - return true; - } + if (pending_event_ipi_fired != true) { + report_fail("Interrupt not triggered by host"); + return true; + } - break; + break; - default: - return true; - } + default: + return true; + } - inc_test_stage(test); + inc_test_stage(test); - return get_test_stage(test) == 2; + return get_test_stage(test) == 2; } static bool pending_event_cli_check(struct svm_test *test) { - return get_test_stage(test) == 2; + return get_test_stage(test) == 2; } #define TIMER_VECTOR 222 @@ -1137,529 +1137,529 @@ static volatile bool timer_fired; static void timer_isr(isr_regs_t *regs) { - timer_fired = true; - apic_write(APIC_EOI, 0); + timer_fired = true; + apic_write(APIC_EOI, 0); } static void interrupt_prepare(struct svm_test *test) { - default_prepare(test); - handle_irq(TIMER_VECTOR, timer_isr); - timer_fired = false; - set_test_stage(test, 0); + default_prepare(test); + handle_irq(TIMER_VECTOR, timer_isr); + timer_fired = false; + set_test_stage(test, 0); } static void interrupt_test(struct svm_test *test) { - long long start, loops; + long long start, loops; - apic_write(APIC_LVTT, TIMER_VECTOR); - irq_enable(); - apic_write(APIC_TMICT, 1); //Timer Initial Count Register 0x380 one-shot - for (loops = 0; loops < 10000000 && !timer_fired; loops++) - asm volatile ("nop"); + apic_write(APIC_LVTT, TIMER_VECTOR); + irq_enable(); + apic_write(APIC_TMICT, 1); //Timer Initial Count Register 0x380 one-shot + for (loops = 0; loops < 10000000 && !timer_fired; loops++) + asm volatile ("nop"); - report(timer_fired, "direct interrupt while running guest"); + report(timer_fired, "direct interrupt while running guest"); - if (!timer_fired) { - set_test_stage(test, -1); - vmmcall(); - } + if (!timer_fired) { + set_test_stage(test, -1); + vmmcall(); + } - apic_write(APIC_TMICT, 0); - irq_disable(); - vmmcall(); + apic_write(APIC_TMICT, 0); + irq_disable(); + vmmcall(); - timer_fired = false; - apic_write(APIC_TMICT, 1); - for (loops = 0; loops < 10000000 && !timer_fired; loops++) - asm volatile ("nop"); + timer_fired = false; + 
apic_write(APIC_TMICT, 1); + for (loops = 0; loops < 10000000 && !timer_fired; loops++) + asm volatile ("nop"); - report(timer_fired, "intercepted interrupt while running guest"); + report(timer_fired, "intercepted interrupt while running guest"); - if (!timer_fired) { - set_test_stage(test, -1); - vmmcall(); - } + if (!timer_fired) { + set_test_stage(test, -1); + vmmcall(); + } - irq_enable(); - apic_write(APIC_TMICT, 0); - irq_disable(); + irq_enable(); + apic_write(APIC_TMICT, 0); + irq_disable(); - timer_fired = false; - start = rdtsc(); - apic_write(APIC_TMICT, 1000000); - safe_halt(); + timer_fired = false; + start = rdtsc(); + apic_write(APIC_TMICT, 1000000); + safe_halt(); - report(rdtsc() - start > 10000 && timer_fired, - "direct interrupt + hlt"); + report(rdtsc() - start > 10000 && timer_fired, + "direct interrupt + hlt"); - if (!timer_fired) { - set_test_stage(test, -1); - vmmcall(); - } + if (!timer_fired) { + set_test_stage(test, -1); + vmmcall(); + } - apic_write(APIC_TMICT, 0); - irq_disable(); - vmmcall(); + apic_write(APIC_TMICT, 0); + irq_disable(); + vmmcall(); - timer_fired = false; - start = rdtsc(); - apic_write(APIC_TMICT, 1000000); - asm volatile ("hlt"); + timer_fired = false; + start = rdtsc(); + apic_write(APIC_TMICT, 1000000); + asm volatile ("hlt"); - report(rdtsc() - start > 10000 && timer_fired, - "intercepted interrupt + hlt"); + report(rdtsc() - start > 10000 && timer_fired, + "intercepted interrupt + hlt"); - if (!timer_fired) { - set_test_stage(test, -1); - vmmcall(); - } + if (!timer_fired) { + set_test_stage(test, -1); + vmmcall(); + } - apic_write(APIC_TMICT, 0); - irq_disable(); + apic_write(APIC_TMICT, 0); + irq_disable(); } static bool interrupt_finished(struct svm_test *test) { - switch (get_test_stage(test)) { - case 0: - case 2: - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - vmcb->save.rip += 3; + switch (get_test_stage(test)) { + case 0: + case 2: + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + vmcb->save.rip += 3; - vmcb->control.intercept |= (1ULL << INTERCEPT_INTR); - vmcb->control.int_ctl |= V_INTR_MASKING_MASK; - break; + vmcb->control.intercept |= (1ULL << INTERCEPT_INTR); + vmcb->control.int_ctl |= V_INTR_MASKING_MASK; + break; - case 1: - case 3: - if (vmcb->control.exit_code != SVM_EXIT_INTR) { - report_fail("VMEXIT not due to intr intercept. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } + case 1: + case 3: + if (vmcb->control.exit_code != SVM_EXIT_INTR) { + report_fail("VMEXIT not due to intr intercept. 
Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } - irq_enable(); - asm volatile ("nop"); - irq_disable(); + irq_enable(); + asm volatile ("nop"); + irq_disable(); - vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR); - vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; - break; + vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR); + vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; + break; - case 4: - break; + case 4: + break; - default: - return true; - } + default: + return true; + } - inc_test_stage(test); + inc_test_stage(test); - return get_test_stage(test) == 5; + return get_test_stage(test) == 5; } static bool interrupt_check(struct svm_test *test) { - return get_test_stage(test) == 5; + return get_test_stage(test) == 5; } static volatile bool nmi_fired; static void nmi_handler(struct ex_regs *regs) { - nmi_fired = true; + nmi_fired = true; } static void nmi_prepare(struct svm_test *test) { - default_prepare(test); - nmi_fired = false; - handle_exception(NMI_VECTOR, nmi_handler); - set_test_stage(test, 0); + default_prepare(test); + nmi_fired = false; + handle_exception(NMI_VECTOR, nmi_handler); + set_test_stage(test, 0); } static void nmi_test(struct svm_test *test) { - apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, 0); + apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, 0); - report(nmi_fired, "direct NMI while running guest"); + report(nmi_fired, "direct NMI while running guest"); - if (!nmi_fired) - set_test_stage(test, -1); + if (!nmi_fired) + set_test_stage(test, -1); - vmmcall(); + vmmcall(); - nmi_fired = false; + nmi_fired = false; - apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, 0); + apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, 0); - if (!nmi_fired) { - report(nmi_fired, "intercepted pending NMI not dispatched"); - set_test_stage(test, -1); - } + if (!nmi_fired) { + report(nmi_fired, "intercepted pending NMI not dispatched"); + set_test_stage(test, -1); + } } static bool nmi_finished(struct svm_test *test) { - switch (get_test_stage(test)) { - case 0: - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - vmcb->save.rip += 3; + switch (get_test_stage(test)) { + case 0: + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + vmcb->save.rip += 3; - vmcb->control.intercept |= (1ULL << INTERCEPT_NMI); - break; + vmcb->control.intercept |= (1ULL << INTERCEPT_NMI); + break; - case 1: - if (vmcb->control.exit_code != SVM_EXIT_NMI) { - report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } + case 1: + if (vmcb->control.exit_code != SVM_EXIT_NMI) { + report_fail("VMEXIT not due to NMI intercept. 
Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } - report_pass("NMI intercept while running guest"); - break; + report_pass("NMI intercept while running guest"); + break; - case 2: - break; + case 2: + break; - default: - return true; - } + default: + return true; + } - inc_test_stage(test); + inc_test_stage(test); - return get_test_stage(test) == 3; + return get_test_stage(test) == 3; } static bool nmi_check(struct svm_test *test) { - return get_test_stage(test) == 3; + return get_test_stage(test) == 3; } #define NMI_DELAY 100000000ULL static void nmi_message_thread(void *_test) { - struct svm_test *test = _test; + struct svm_test *test = _test; - while (get_test_stage(test) != 1) - pause(); + while (get_test_stage(test) != 1) + pause(); - delay(NMI_DELAY); + delay(NMI_DELAY); - apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, id_map[0]); + apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, id_map[0]); - while (get_test_stage(test) != 2) - pause(); + while (get_test_stage(test) != 2) + pause(); - delay(NMI_DELAY); + delay(NMI_DELAY); - apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, id_map[0]); + apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_NMI | APIC_INT_ASSERT, id_map[0]); } static void nmi_hlt_test(struct svm_test *test) { - long long start; + long long start; - on_cpu_async(1, nmi_message_thread, test); + on_cpu_async(1, nmi_message_thread, test); - start = rdtsc(); + start = rdtsc(); - set_test_stage(test, 1); + set_test_stage(test, 1); - asm volatile ("hlt"); + asm volatile ("hlt"); - report((rdtsc() - start > NMI_DELAY) && nmi_fired, - "direct NMI + hlt"); + report((rdtsc() - start > NMI_DELAY) && nmi_fired, + "direct NMI + hlt"); - if (!nmi_fired) - set_test_stage(test, -1); + if (!nmi_fired) + set_test_stage(test, -1); - nmi_fired = false; + nmi_fired = false; - vmmcall(); + vmmcall(); - start = rdtsc(); + start = rdtsc(); - set_test_stage(test, 2); + set_test_stage(test, 2); - asm volatile ("hlt"); + asm volatile ("hlt"); - report((rdtsc() - start > NMI_DELAY) && nmi_fired, - "intercepted NMI + hlt"); + report((rdtsc() - start > NMI_DELAY) && nmi_fired, + "intercepted NMI + hlt"); - if (!nmi_fired) { - report(nmi_fired, "intercepted pending NMI not dispatched"); - set_test_stage(test, -1); - vmmcall(); - } + if (!nmi_fired) { + report(nmi_fired, "intercepted pending NMI not dispatched"); + set_test_stage(test, -1); + vmmcall(); + } - set_test_stage(test, 3); + set_test_stage(test, 3); } static bool nmi_hlt_finished(struct svm_test *test) { - switch (get_test_stage(test)) { - case 1: - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - vmcb->save.rip += 3; + switch (get_test_stage(test)) { + case 1: + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + vmcb->save.rip += 3; - vmcb->control.intercept |= (1ULL << INTERCEPT_NMI); - break; + vmcb->control.intercept |= (1ULL << INTERCEPT_NMI); + break; - case 2: - if (vmcb->control.exit_code != SVM_EXIT_NMI) { - report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } + case 2: + if (vmcb->control.exit_code != SVM_EXIT_NMI) { + report_fail("VMEXIT not due to NMI intercept. 
Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } - report_pass("NMI intercept while running guest"); - break; + report_pass("NMI intercept while running guest"); + break; - case 3: - break; + case 3: + break; - default: - return true; - } + default: + return true; + } - return get_test_stage(test) == 3; + return get_test_stage(test) == 3; } static bool nmi_hlt_check(struct svm_test *test) { - return get_test_stage(test) == 3; + return get_test_stage(test) == 3; } static volatile int count_exc = 0; static void my_isr(struct ex_regs *r) { - count_exc++; + count_exc++; } static void exc_inject_prepare(struct svm_test *test) { - default_prepare(test); - handle_exception(DE_VECTOR, my_isr); - handle_exception(NMI_VECTOR, my_isr); + default_prepare(test); + handle_exception(DE_VECTOR, my_isr); + handle_exception(NMI_VECTOR, my_isr); } static void exc_inject_test(struct svm_test *test) { - asm volatile ("vmmcall\n\tvmmcall\n\t"); + asm volatile ("vmmcall\n\tvmmcall\n\t"); } static bool exc_inject_finished(struct svm_test *test) { - switch (get_test_stage(test)) { - case 0: - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - vmcb->save.rip += 3; - vmcb->control.event_inj = NMI_VECTOR | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID; - break; + switch (get_test_stage(test)) { + case 0: + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + vmcb->save.rip += 3; + vmcb->control.event_inj = NMI_VECTOR | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID; + break; - case 1: - if (vmcb->control.exit_code != SVM_EXIT_ERR) { - report_fail("VMEXIT not due to error. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - report(count_exc == 0, "exception with vector 2 not injected"); - vmcb->control.event_inj = DE_VECTOR | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID; - break; + case 1: + if (vmcb->control.exit_code != SVM_EXIT_ERR) { + report_fail("VMEXIT not due to error. Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + report(count_exc == 0, "exception with vector 2 not injected"); + vmcb->control.event_inj = DE_VECTOR | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID; + break; - case 2: - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - vmcb->save.rip += 3; - report(count_exc == 1, "divide overflow exception injected"); - report(!(vmcb->control.event_inj & SVM_EVTINJ_VALID), "eventinj.VALID cleared"); - break; + case 2: + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VMEXIT not due to vmmcall. 
Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + vmcb->save.rip += 3; + report(count_exc == 1, "divide overflow exception injected"); + report(!(vmcb->control.event_inj & SVM_EVTINJ_VALID), "eventinj.VALID cleared"); + break; - default: - return true; - } + default: + return true; + } - inc_test_stage(test); + inc_test_stage(test); - return get_test_stage(test) == 3; + return get_test_stage(test) == 3; } static bool exc_inject_check(struct svm_test *test) { - return count_exc == 1 && get_test_stage(test) == 3; + return count_exc == 1 && get_test_stage(test) == 3; } static volatile bool virq_fired; static void virq_isr(isr_regs_t *regs) { - virq_fired = true; + virq_fired = true; } static void virq_inject_prepare(struct svm_test *test) { - handle_irq(0xf1, virq_isr); - default_prepare(test); - vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK | - (0x0f << V_INTR_PRIO_SHIFT); // Set to the highest priority - vmcb->control.int_vector = 0xf1; - virq_fired = false; - set_test_stage(test, 0); + handle_irq(0xf1, virq_isr); + default_prepare(test); + vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK | + (0x0f << V_INTR_PRIO_SHIFT); // Set to the highest priority + vmcb->control.int_vector = 0xf1; + virq_fired = false; + set_test_stage(test, 0); } static void virq_inject_test(struct svm_test *test) { - if (virq_fired) { - report_fail("virtual interrupt fired before L2 sti"); - set_test_stage(test, -1); - vmmcall(); - } + if (virq_fired) { + report_fail("virtual interrupt fired before L2 sti"); + set_test_stage(test, -1); + vmmcall(); + } - irq_enable(); - asm volatile ("nop"); - irq_disable(); + irq_enable(); + asm volatile ("nop"); + irq_disable(); - if (!virq_fired) { - report_fail("virtual interrupt not fired after L2 sti"); - set_test_stage(test, -1); - } + if (!virq_fired) { + report_fail("virtual interrupt not fired after L2 sti"); + set_test_stage(test, -1); + } - vmmcall(); + vmmcall(); - if (virq_fired) { - report_fail("virtual interrupt fired before L2 sti after VINTR intercept"); - set_test_stage(test, -1); - vmmcall(); - } + if (virq_fired) { + report_fail("virtual interrupt fired before L2 sti after VINTR intercept"); + set_test_stage(test, -1); + vmmcall(); + } - irq_enable(); - asm volatile ("nop"); - irq_disable(); + irq_enable(); + asm volatile ("nop"); + irq_disable(); - if (!virq_fired) { - report_fail("virtual interrupt not fired after return from VINTR intercept"); - set_test_stage(test, -1); - } + if (!virq_fired) { + report_fail("virtual interrupt not fired after return from VINTR intercept"); + set_test_stage(test, -1); + } - vmmcall(); + vmmcall(); - irq_enable(); - asm volatile ("nop"); - irq_disable(); + irq_enable(); + asm volatile ("nop"); + irq_disable(); - if (virq_fired) { - report_fail("virtual interrupt fired when V_IRQ_PRIO less than V_TPR"); - set_test_stage(test, -1); - } + if (virq_fired) { + report_fail("virtual interrupt fired when V_IRQ_PRIO less than V_TPR"); + set_test_stage(test, -1); + } - vmmcall(); - vmmcall(); + vmmcall(); + vmmcall(); } static bool virq_inject_finished(struct svm_test *test) { - vmcb->save.rip += 3; + vmcb->save.rip += 3; - switch (get_test_stage(test)) { - case 0: - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VMEXIT not due to vmmcall. 
Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - if (vmcb->control.int_ctl & V_IRQ_MASK) { - report_fail("V_IRQ not cleared on VMEXIT after firing"); - return true; - } - virq_fired = false; - vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR); - vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK | - (0x0f << V_INTR_PRIO_SHIFT); - break; + switch (get_test_stage(test)) { + case 0: + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + if (vmcb->control.int_ctl & V_IRQ_MASK) { + report_fail("V_IRQ not cleared on VMEXIT after firing"); + return true; + } + virq_fired = false; + vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR); + vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK | + (0x0f << V_INTR_PRIO_SHIFT); + break; - case 1: - if (vmcb->control.exit_code != SVM_EXIT_VINTR) { - report_fail("VMEXIT not due to vintr. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - if (virq_fired) { - report_fail("V_IRQ fired before SVM_EXIT_VINTR"); - return true; - } - vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR); - break; + case 1: + if (vmcb->control.exit_code != SVM_EXIT_VINTR) { + report_fail("VMEXIT not due to vintr. Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + if (virq_fired) { + report_fail("V_IRQ fired before SVM_EXIT_VINTR"); + return true; + } + vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR); + break; - case 2: - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - virq_fired = false; - // Set irq to lower priority - vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK | - (0x08 << V_INTR_PRIO_SHIFT); - // Raise guest TPR - vmcb->control.int_ctl |= 0x0a & V_TPR_MASK; - break; + case 2: + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + virq_fired = false; + // Set irq to lower priority + vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK | + (0x08 << V_INTR_PRIO_SHIFT); + // Raise guest TPR + vmcb->control.int_ctl |= 0x0a & V_TPR_MASK; + break; - case 3: - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR); - break; + case 3: + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR); + break; - case 4: - // INTERCEPT_VINTR should be ignored because V_INTR_PRIO < V_TPR - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VMEXIT not due to vmmcall. Exit reason 0x%x", - vmcb->control.exit_code); - return true; - } - break; + case 4: + // INTERCEPT_VINTR should be ignored because V_INTR_PRIO < V_TPR + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VMEXIT not due to vmmcall. 
Exit reason 0x%x", + vmcb->control.exit_code); + return true; + } + break; - default: - return true; - } + default: + return true; + } - inc_test_stage(test); + inc_test_stage(test); - return get_test_stage(test) == 5; + return get_test_stage(test) == 5; } static bool virq_inject_check(struct svm_test *test) { - return get_test_stage(test) == 5; + return get_test_stage(test) == 5; } /* @@ -1688,157 +1688,157 @@ extern const char insb_instruction_label[]; static void reg_corruption_isr(isr_regs_t *regs) { - isr_cnt++; - apic_write(APIC_EOI, 0); + isr_cnt++; + apic_write(APIC_EOI, 0); } static void reg_corruption_prepare(struct svm_test *test) { - default_prepare(test); - set_test_stage(test, 0); + default_prepare(test); + set_test_stage(test, 0); - vmcb->control.int_ctl = V_INTR_MASKING_MASK; - vmcb->control.intercept |= (1ULL << INTERCEPT_INTR); + vmcb->control.int_ctl = V_INTR_MASKING_MASK; + vmcb->control.intercept |= (1ULL << INTERCEPT_INTR); - handle_irq(TIMER_VECTOR, reg_corruption_isr); + handle_irq(TIMER_VECTOR, reg_corruption_isr); - /* set local APIC to inject external interrupts */ - apic_write(APIC_TMICT, 0); - apic_write(APIC_TDCR, 0); - apic_write(APIC_LVTT, TIMER_VECTOR | APIC_LVT_TIMER_PERIODIC); - apic_write(APIC_TMICT, 1000); + /* set local APIC to inject external interrupts */ + apic_write(APIC_TMICT, 0); + apic_write(APIC_TDCR, 0); + apic_write(APIC_LVTT, TIMER_VECTOR | APIC_LVT_TIMER_PERIODIC); + apic_write(APIC_TMICT, 1000); } static void reg_corruption_test(struct svm_test *test) { - /* this is endless loop, which is interrupted by the timer interrupt */ - asm volatile ( - "1:\n\t" - "movw $0x4d0, %%dx\n\t" // IO port - "lea %[io_port_var], %%rdi\n\t" - "movb $0xAA, %[io_port_var]\n\t" - "insb_instruction_label:\n\t" - "insb\n\t" - "jmp 1b\n\t" - - : [io_port_var] "=m" (io_port_var) - : /* no inputs*/ - : "rdx", "rdi" - ); + /* this is endless loop, which is interrupted by the timer interrupt */ + asm volatile ( + "1:\n\t" + "movw $0x4d0, %%dx\n\t" // IO port + "lea %[io_port_var], %%rdi\n\t" + "movb $0xAA, %[io_port_var]\n\t" + "insb_instruction_label:\n\t" + "insb\n\t" + "jmp 1b\n\t" + + : [io_port_var] "=m" (io_port_var) + : /* no inputs*/ + : "rdx", "rdi" + ); } static bool reg_corruption_finished(struct svm_test *test) { - if (isr_cnt == 10000) { - report_pass("No RIP corruption detected after %d timer interrupts", - isr_cnt); - set_test_stage(test, 1); - goto cleanup; - } + if (isr_cnt == 10000) { + report_pass("No RIP corruption detected after %d timer interrupts", + isr_cnt); + set_test_stage(test, 1); + goto cleanup; + } - if (vmcb->control.exit_code == SVM_EXIT_INTR) { + if (vmcb->control.exit_code == SVM_EXIT_INTR) { - void* guest_rip = (void*)vmcb->save.rip; + void* guest_rip = (void*)vmcb->save.rip; - irq_enable(); - asm volatile ("nop"); - irq_disable(); + irq_enable(); + asm volatile ("nop"); + irq_disable(); - if (guest_rip == insb_instruction_label && io_port_var != 0xAA) { - report_fail("RIP corruption detected after %d timer interrupts", - isr_cnt); - goto cleanup; - } + if (guest_rip == insb_instruction_label && io_port_var != 0xAA) { + report_fail("RIP corruption detected after %d timer interrupts", + isr_cnt); + goto cleanup; + } - } - return false; + } + return false; cleanup: - apic_write(APIC_LVTT, APIC_LVT_TIMER_MASK); - apic_write(APIC_TMICT, 0); - return true; + apic_write(APIC_LVTT, APIC_LVT_TIMER_MASK); + apic_write(APIC_TMICT, 0); + return true; } static bool reg_corruption_check(struct svm_test *test) { - return get_test_stage(test) == 
1; + return get_test_stage(test) == 1; } static void get_tss_entry(void *data) { - *((gdt_entry_t **)data) = get_tss_descr(); + *((gdt_entry_t **)data) = get_tss_descr(); } static int orig_cpu_count; static void init_startup_prepare(struct svm_test *test) { - gdt_entry_t *tss_entry; - int i; + gdt_entry_t *tss_entry; + int i; - on_cpu(1, get_tss_entry, &tss_entry); + on_cpu(1, get_tss_entry, &tss_entry); - orig_cpu_count = cpu_online_count; + orig_cpu_count = cpu_online_count; - apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_INIT | APIC_INT_ASSERT, - id_map[1]); + apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_INIT | APIC_INT_ASSERT, + id_map[1]); - delay(100000000ULL); + delay(100000000ULL); - --cpu_online_count; + --cpu_online_count; - tss_entry->type &= ~DESC_BUSY; + tss_entry->type &= ~DESC_BUSY; - apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_STARTUP, id_map[1]); + apic_icr_write(APIC_DEST_PHYSICAL | APIC_DM_STARTUP, id_map[1]); - for (i = 0; i < 5 && cpu_online_count < orig_cpu_count; i++) - delay(100000000ULL); + for (i = 0; i < 5 && cpu_online_count < orig_cpu_count; i++) + delay(100000000ULL); } static bool init_startup_finished(struct svm_test *test) { - return true; + return true; } static bool init_startup_check(struct svm_test *test) { - return cpu_online_count == orig_cpu_count; + return cpu_online_count == orig_cpu_count; } static volatile bool init_intercept; static void init_intercept_prepare(struct svm_test *test) { - init_intercept = false; - vmcb->control.intercept |= (1ULL << INTERCEPT_INIT); + init_intercept = false; + vmcb->control.intercept |= (1ULL << INTERCEPT_INIT); } static void init_intercept_test(struct svm_test *test) { - apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_INIT | APIC_INT_ASSERT, 0); + apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | APIC_DM_INIT | APIC_INT_ASSERT, 0); } static bool init_intercept_finished(struct svm_test *test) { - vmcb->save.rip += 3; + vmcb->save.rip += 3; - if (vmcb->control.exit_code != SVM_EXIT_INIT) { - report_fail("VMEXIT not due to init intercept. Exit reason 0x%x", - vmcb->control.exit_code); + if (vmcb->control.exit_code != SVM_EXIT_INIT) { + report_fail("VMEXIT not due to init intercept. 
Exit reason 0x%x", + vmcb->control.exit_code); - return true; - } + return true; + } - init_intercept = true; + init_intercept = true; - report_pass("INIT to vcpu intercepted"); + report_pass("INIT to vcpu intercepted"); - return true; + return true; } static bool init_intercept_check(struct svm_test *test) { - return init_intercept; + return init_intercept; } /* From patchwork Tue Jun 28 11:38:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manali Shukla X-Patchwork-Id: 12898128 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9A7C0C43334 for ; Tue, 28 Jun 2022 11:42:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345123AbiF1Lm2 (ORCPT ); Tue, 28 Jun 2022 07:42:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37682 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344628AbiF1Lm0 (ORCPT ); Tue, 28 Jun 2022 07:42:26 -0400 Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2063.outbound.protection.outlook.com [40.107.220.63]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E48D22FE75 for ; Tue, 28 Jun 2022 04:42:22 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=dTORnKlEFxUfzYep4ri73/WT4Wh4A7Nz3WxPiAOVvLMPv1OtTeuSf2PX8T0rthcXfWZsNlgKxf5HVZd3gNi9PuMNv6ClI8ys013aLYNeZLlU9TfAwWJXE6SVrlp9EkwFgBH3XDjx6VceBOwfPTjjeSE23qZ+sAqNkwqFyzQVeVylVZFgr2WuPxXaLkcU335xMZsYW/DeuSp1Dots5UxvLVXscXNY4SCB53uMXLxeEuN7povrrnz4NjfbhQRrDeCPyG21c1MZAmHh2AsExwkMnkg5lT+2yoD1sEVUNsCOYcVHiQUO7U1joda+LBdibpjMslux/sTJrU3zUgJyR120fw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=6RivgrolPWp8JH691cvFixYrDSyRvlwl4kF9orQwEUY=; b=nbYTetI1DqWe3L9m6aFM95HlRMrWiER15nfSEvOKyrtyfulzizruAnXkCw1SXhZH5H5cWckp8i7l11YI3vRhHx041BU8G+sp5m8xjVNmwBH+S7AxhPez0RMBI7ehCXIW65KjL/Dbe07XNg6E+UmtgeajjXYPZ5lqz7dSXC/US063P/9eUSgjzEc5oRS83FBw4DbsMAUhhX59V5eXXRwVbvTG4Sh6BU1/UCaw8E37bWdttvNQp4rN9RAFxaHymH4FVoytNJ6cJUDvmjn6zwrRiXFsU9uFLN2qTmB+39uh0Sz5jzrK4U3kn/jeL4P88ArvMBbsThf4Xj3vTTBVlfK1ew== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 165.204.84.17) smtp.rcpttodomain=redhat.com smtp.mailfrom=amd.com; dmarc=pass (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=6RivgrolPWp8JH691cvFixYrDSyRvlwl4kF9orQwEUY=; b=J+tcLPcZQySWECjlP02I2QNwa1uQxXysmaMKj047XjVC+0Qsi8ylkJUR8+apUoN/BOVYE7IUZ9niZfAzgHhzjCXI08Wgb1TDqe31BdbnSheNx2AKYD7jDQKDd4GcPQw4+KMJn2b0TM3ehZxJAy0VS/WPNccBU9ukGhtHWXOSMqI= Received: from BN9PR03CA0145.namprd03.prod.outlook.com (2603:10b6:408:fe::30) by DM4PR12MB5296.namprd12.prod.outlook.com (2603:10b6:5:39d::21) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.17; Tue, 28 Jun 2022 11:42:21 +0000 Received: from BN8NAM11FT036.eop-nam11.prod.protection.outlook.com (2603:10b6:408:fe:cafe::c1) 
by BN9PR03CA0145.outlook.office365.com (2603:10b6:408:fe::30) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5373.16 via Frontend Transport; Tue, 28 Jun 2022 11:42:20 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17) smtp.mailfrom=amd.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=amd.com; Received-SPF: Pass (protection.outlook.com: domain of amd.com designates 165.204.84.17 as permitted sender) receiver=protection.outlook.com; client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C Received: from SATLEXMB04.amd.com (165.204.84.17) by BN8NAM11FT036.mail.protection.outlook.com (10.13.177.168) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.20.5373.15 via Frontend Transport; Tue, 28 Jun 2022 11:42:20 +0000 Received: from bhadra.amd.com (10.180.168.240) by SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Tue, 28 Jun 2022 06:42:18 -0500 From: Manali Shukla To: , CC: Subject: [kvm-unit-tests PATCH v5 8/8] x86: nSVM: Correct indentation for svm_tests.c part-2 Date: Tue, 28 Jun 2022 11:38:53 +0000 Message-ID: <20220628113853.392569-9-manali.shukla@amd.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220628113853.392569-1-manali.shukla@amd.com> References: <20220628113853.392569-1-manali.shukla@amd.com> MIME-Version: 1.0 X-Originating-IP: [10.180.168.240] X-ClientProxiedBy: SATLEXMB04.amd.com (10.181.40.145) To SATLEXMB04.amd.com (10.181.40.145) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: b3116611-d9d1-4b6e-caf1-08da58fb42cf X-MS-TrafficTypeDiagnostic: DM4PR12MB5296:EE_ X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: q5mSU+X4dxPdkxJ0lYFbkx5fVORFQymf/8qIfq2ONTMZoQe19G/RVOjA8bOx5RqYd+TO9/xe8TpQxBksPpC5WnU+nepiYHSG/rFD8V/7htOf7oM1on1w8YYDub8nyb0EXUBZiJpPpKNA86Du3rT26EsJe3YTdZaChF2REhMBVxSaneURE6n7FgxOcXCPZcHfP6T7jQ3MSzF99xFQYhK8TRBfn/UN09D8g0VRC8j8mjsfkhpLYX3JtsQ9IU/wqGj99NpmZdKou+6/l+lbbF0WdoOMAjdAzJisaAtdMZoo3MLN16Ts9g6sYRiFlW3NV5tWV0IMuNK6xbDfuLWF/oTtDzxEt63/FYzXLYHV6rf2fRbbz4t9nlTK7AlW/D+DUDqib2wzP2bxDz8mrbxNlyVDeMRbmcARbIfUMsIwb/pCEk/5shWkfzu0Q04NiBfHS/0I68oQKlin+qQGSk/Gk8PcoJTRxAuegQZnLY22zw/EJNX9QRaIrQb0xfYanJwsyqjijfs8Sfe+8ablwtmXiQI0FqEZ/zFptMKpB4pD3NgTvTlHy3rmawDn66XDe1Cp4tRZen+JPoYKeGsm69CIFx3kXpZP2Ic+wYBzoOXyIbShcUpA1x5yCE2r5YlCbvYk1bGvjP8OuPddYaKa7K0doHDDFZyptNgQHuFpA+D2GGP31U3QCB+JjKa37tz5vAXUlbwgvyaVRApdX1NF54fk1QFEVxLGYLkg4HA/MIaDbVaOVetSRYexf7BHeeD4QnZFxxpl+KM0SDwfWdr55DwW95lXohzhovMmWW9YUylT0B1HCLo6YB8m4eZXRX+2FS0YaIF5FPQPAB7F1atioJzXn2A8+g== X-Forefront-Antispam-Report: CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230016)(4636009)(376002)(39860400002)(136003)(396003)(346002)(36840700001)(46966006)(40470700004)(7696005)(186003)(26005)(81166007)(110136005)(30864003)(47076005)(2616005)(2906002)(44832011)(70586007)(16526019)(70206006)(5660300002)(8676002)(426003)(1076003)(41300700001)(6666004)(8936002)(316002)(36860700001)(478600001)(4326008)(83380400001)(40460700003)(82310400005)(82740400003)(36756003)(356005)(86362001)(40480700001)(336012)(36900700001)(579004);DIR:OUT;SFP:1101; X-OriginatorOrg: amd.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jun 2022 11:42:20.6671 (UTC) 
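Like the rest of the series, the patch below only reflows existing code: every hunk replaces a space-indented line with its tab-indented equivalent. Schematically, using a function that appears in the diff below (the exact whitespace is reconstructed here for illustration only, since the list archive collapses it, and the four-space "before" style is an assumption about the pre-series convention):

/* before: four-space indentation */
static void prepare_vgif_enabled(struct svm_test *test)
{
    default_prepare(test);
}

/* after: kernel style, one hard tab per indent level */
static void prepare_vgif_enabled(struct svm_test *test)
{
	default_prepare(test);
}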
Fixed indentation errors in svm_tests.c. No functional change intended.

Signed-off-by: Manali Shukla
---
 x86/svm_tests.c | 765 ++++++++++++++++++++++++------------------------
 1 file changed, 381 insertions(+), 384 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c index f9e3f36..f953000 100644 --- a/x86/svm_tests.c +++ b/x86/svm_tests.c @@ -2043,18 +2043,18 @@ static void basic_guest_main(struct svm_test *test) #define SVM_TEST_REG_RESERVED_BITS(start, end, inc, str_name, reg, val, \ resv_mask) \ { \ - u64 tmp, mask; \ - int i; \ \ - for (i = start; i <= end; i = i + inc) { \ - mask = 1ull << i; \ - if (!(mask & resv_mask)) \ - continue; \ - tmp = val | mask; \ + u64 tmp, mask; \ + int i; \ \ + for (i = start; i <= end; i = i + inc) { \ + mask = 1ull << i; \ + if (!(mask & resv_mask)) \ + continue; \ + tmp = val | mask; \ reg = tmp; \ - report(svm_vmrun() == SVM_EXIT_ERR, "Test %s %d:%d: %lx",\ - str_name, end, start, tmp); \ - } \ + report(svm_vmrun() == SVM_EXIT_ERR, "Test %s %d:%d: %lx", \ + str_name, end, start, tmp); \ + } \ } #define SVM_TEST_CR_RESERVED_BITS(start, end, inc, cr, val, resv_mask, \ @@ -2080,7 +2080,7 @@ static void basic_guest_main(struct svm_test *test) vmcb->save.cr4 = tmp; \ } \ r = svm_vmrun(); \ - report(r == exit_code, "Test CR%d %s%d:%d: %lx, wanted exit 0x%x, got 0x%x",\ + report(r == exit_code, "Test CR%d %s%d:%d: %lx, wanted exit 0x%x, got 0x%x", \ cr, test_name, end, start, tmp, exit_code, r); \ } \ } @@ -2105,9 +2105,9 @@ static void test_efer(void) efer_saved = vmcb->save.efer; SVM_TEST_REG_RESERVED_BITS(8, 9, 1, "EFER", vmcb->save.efer, - efer_saved, SVM_EFER_RESERVED_MASK); + efer_saved, SVM_EFER_RESERVED_MASK); SVM_TEST_REG_RESERVED_BITS(16, 63, 4, "EFER", vmcb->save.efer, - efer_saved, SVM_EFER_RESERVED_MASK); + efer_saved, SVM_EFER_RESERVED_MASK); /* * EFER.LME and CR0.PG are both set and CR4.PAE is zero. @@ -2124,7 +2124,7 @@ static void test_efer(void) cr4 = cr4_saved & ~X86_CR4_PAE; vmcb->save.cr4 = cr4; report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), " - "CR0.PG=1 (%lx) and CR4.PAE=0 (%lx)", efer, cr0, cr4); + "CR0.PG=1 (%lx) and CR4.PAE=0 (%lx)", efer, cr0, cr4); /* * EFER.LME and CR0.PG are both set and CR0.PE is zero. @@ -2137,7 +2137,7 @@ static void test_efer(void) cr0 &= ~X86_CR0_PE; vmcb->save.cr0 = cr0; report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), " - "CR0.PG=1 and CR0.PE=0 (%lx)", efer, cr0); + "CR0.PG=1 and CR0.PE=0 (%lx)", efer, cr0); /* * EFER.LME, CR0.PG, CR4.PAE, CS.L, and CS.D are all non-zero.
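/*
 * Illustrative expansion (editorial sketch, not part of the patch): the
 * first SVM_TEST_REG_RESERVED_BITS call in test_efer() above,
 * SVM_TEST_REG_RESERVED_BITS(8, 9, 1, "EFER", vmcb->save.efer,
 * efer_saved, SVM_EFER_RESERVED_MASK), preprocesses to roughly:
 *
 *	u64 tmp, mask;
 *	int i;
 *
 *	for (i = 8; i <= 9; i = i + 1) {
 *		mask = 1ull << i;
 *		if (!(mask & SVM_EFER_RESERVED_MASK))
 *			continue;
 *		tmp = efer_saved | mask;
 *		vmcb->save.efer = tmp;
 *		report(svm_vmrun() == SVM_EXIT_ERR, "Test %s %d:%d: %lx",
 *		       "EFER", 9, 8, tmp);
 *	}
 *
 * i.e. each reserved EFER bit is set in turn and VMRUN is expected to
 * fail with SVM_EXIT_ERR.
 */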
@@ -2148,11 +2148,11 @@ static void test_efer(void) cr0 |= X86_CR0_PE; vmcb->save.cr0 = cr0; cs_attrib = cs_attrib_saved | SVM_SELECTOR_L_MASK | - SVM_SELECTOR_DB_MASK; + SVM_SELECTOR_DB_MASK; vmcb->save.cs.attrib = cs_attrib; report(svm_vmrun() == SVM_EXIT_ERR, "EFER.LME=1 (%lx), " - "CR0.PG=1 (%lx), CR4.PAE=1 (%lx), CS.L=1 and CS.D=1 (%x)", - efer, cr0, cr4, cs_attrib); + "CR0.PG=1 (%lx), CR4.PAE=1 (%lx), CS.L=1 and CS.D=1 (%x)", + efer, cr0, cr4, cs_attrib); vmcb->save.cr0 = cr0_saved; vmcb->save.cr4 = cr4_saved; @@ -2172,20 +2172,20 @@ static void test_cr0(void) cr0 &= ~X86_CR0_NW; vmcb->save.cr0 = cr0; report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=0: %lx", - cr0); + cr0); cr0 |= X86_CR0_NW; vmcb->save.cr0 = cr0; report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=1,NW=1: %lx", - cr0); + cr0); cr0 &= ~X86_CR0_NW; cr0 &= ~X86_CR0_CD; vmcb->save.cr0 = cr0; report (svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR0 CD=0,NW=0: %lx", - cr0); + cr0); cr0 |= X86_CR0_NW; vmcb->save.cr0 = cr0; report (svm_vmrun() == SVM_EXIT_ERR, "Test CR0 CD=0,NW=1: %lx", - cr0); + cr0); vmcb->save.cr0 = cr0_saved; /* @@ -2194,7 +2194,7 @@ static void test_cr0(void) cr0 = cr0_saved; SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "CR0", vmcb->save.cr0, cr0_saved, - SVM_CR0_RESERVED_MASK); + SVM_CR0_RESERVED_MASK); vmcb->save.cr0 = cr0_saved; } @@ -2207,11 +2207,11 @@ static void test_cr3(void) u64 cr3_saved = vmcb->save.cr3; SVM_TEST_CR_RESERVED_BITS(0, 63, 1, 3, cr3_saved, - SVM_CR3_LONG_MBZ_MASK, SVM_EXIT_ERR, ""); + SVM_CR3_LONG_MBZ_MASK, SVM_EXIT_ERR, ""); vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_MBZ_MASK; report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx", - vmcb->save.cr3); + vmcb->save.cr3); /* * CR3 non-MBZ reserved bits based on different modes: @@ -2227,11 +2227,11 @@ static void test_cr3(void) if (this_cpu_has(X86_FEATURE_PCID)) { vmcb->save.cr4 = cr4_saved | X86_CR4_PCIDE; SVM_TEST_CR_RESERVED_BITS(0, 11, 1, 3, cr3_saved, - SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_VMMCALL, "(PCIDE=1) "); + SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_VMMCALL, "(PCIDE=1) "); vmcb->save.cr3 = cr3_saved & ~SVM_CR3_LONG_RESERVED_MASK; report(svm_vmrun() == SVM_EXIT_VMMCALL, "Test CR3 63:0: %lx", - vmcb->save.cr3); + vmcb->save.cr3); } vmcb->save.cr4 = cr4_saved & ~X86_CR4_PCIDE; @@ -2243,7 +2243,7 @@ static void test_cr3(void) pdpe[0] &= ~1ULL; SVM_TEST_CR_RESERVED_BITS(0, 11, 1, 3, cr3_saved, - SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_NPF, "(PCIDE=0) "); + SVM_CR3_LONG_RESERVED_MASK, SVM_EXIT_NPF, "(PCIDE=0) "); pdpe[0] |= 1ULL; vmcb->save.cr3 = cr3_saved; @@ -2254,7 +2254,7 @@ static void test_cr3(void) pdpe[0] &= ~1ULL; vmcb->save.cr4 = cr4_saved | X86_CR4_PAE; SVM_TEST_CR_RESERVED_BITS(0, 2, 1, 3, cr3_saved, - SVM_CR3_PAE_LEGACY_RESERVED_MASK, SVM_EXIT_NPF, "(PAE) "); + SVM_CR3_PAE_LEGACY_RESERVED_MASK, SVM_EXIT_NPF, "(PAE) "); pdpe[0] |= 1ULL; @@ -2273,14 +2273,14 @@ static void test_cr4(void) efer &= ~EFER_LME; vmcb->save.efer = efer; SVM_TEST_CR_RESERVED_BITS(12, 31, 1, 4, cr4_saved, - SVM_CR4_LEGACY_RESERVED_MASK, SVM_EXIT_ERR, ""); + SVM_CR4_LEGACY_RESERVED_MASK, SVM_EXIT_ERR, ""); efer |= EFER_LME; vmcb->save.efer = efer; SVM_TEST_CR_RESERVED_BITS(12, 31, 1, 4, cr4_saved, - SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, ""); + SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, ""); SVM_TEST_CR_RESERVED_BITS(32, 63, 4, 4, cr4_saved, - SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, ""); + SVM_CR4_RESERVED_MASK, SVM_EXIT_ERR, ""); vmcb->save.cr4 = cr4_saved; vmcb->save.efer = efer_saved; @@ -2294,12 +2294,12 @@ static void test_dr(void) u64 dr_saved = 
vmcb->save.dr6; SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "DR6", vmcb->save.dr6, dr_saved, - SVM_DR6_RESERVED_MASK); + SVM_DR6_RESERVED_MASK); vmcb->save.dr6 = dr_saved; dr_saved = vmcb->save.dr7; SVM_TEST_REG_RESERVED_BITS(32, 63, 4, "DR7", vmcb->save.dr7, dr_saved, - SVM_DR7_RESERVED_MASK); + SVM_DR7_RESERVED_MASK); vmcb->save.dr7 = dr_saved; } @@ -2307,14 +2307,14 @@ static void test_dr(void) /* TODO: verify if high 32-bits are sign- or zero-extended on bare metal */ #define TEST_BITMAP_ADDR(save_intercept, type, addr, exit_code, \ msg) { \ - vmcb->control.intercept = saved_intercept | 1ULL << type; \ - if (type == INTERCEPT_MSR_PROT) \ - vmcb->control.msrpm_base_pa = addr; \ - else \ - vmcb->control.iopm_base_pa = addr; \ - report(svm_vmrun() == exit_code, \ - "Test %s address: %lx", msg, addr); \ -} + vmcb->control.intercept = saved_intercept | 1ULL << type; \ + if (type == INTERCEPT_MSR_PROT) \ + vmcb->control.msrpm_base_pa = addr; \ + else \ + vmcb->control.iopm_base_pa = addr; \ + report(svm_vmrun() == exit_code, \ + "Test %s address: %lx", msg, addr); \ + } /* * If the MSR or IOIO intercept table extends to a physical address that @@ -2339,41 +2339,41 @@ static void test_msrpm_iopm_bitmap_addrs(void) u64 addr = virt_to_phys(msr_bitmap) & (~((1ull << 12) - 1)); TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT, - addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR, - "MSRPM"); + addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR, + "MSRPM"); TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT, - addr_beyond_limit - 2 * PAGE_SIZE + 1, SVM_EXIT_ERR, - "MSRPM"); + addr_beyond_limit - 2 * PAGE_SIZE + 1, SVM_EXIT_ERR, + "MSRPM"); TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT, - addr_beyond_limit - PAGE_SIZE, SVM_EXIT_ERR, - "MSRPM"); + addr_beyond_limit - PAGE_SIZE, SVM_EXIT_ERR, + "MSRPM"); TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT, addr, - SVM_EXIT_VMMCALL, "MSRPM"); + SVM_EXIT_VMMCALL, "MSRPM"); addr |= (1ull << 12) - 1; TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_MSR_PROT, addr, - SVM_EXIT_VMMCALL, "MSRPM"); + SVM_EXIT_VMMCALL, "MSRPM"); TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, - addr_beyond_limit - 4 * PAGE_SIZE, SVM_EXIT_VMMCALL, - "IOPM"); + addr_beyond_limit - 4 * PAGE_SIZE, SVM_EXIT_VMMCALL, + "IOPM"); TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, - addr_beyond_limit - 3 * PAGE_SIZE, SVM_EXIT_VMMCALL, - "IOPM"); + addr_beyond_limit - 3 * PAGE_SIZE, SVM_EXIT_VMMCALL, + "IOPM"); TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, - addr_beyond_limit - 2 * PAGE_SIZE - 2, SVM_EXIT_VMMCALL, - "IOPM"); + addr_beyond_limit - 2 * PAGE_SIZE - 2, SVM_EXIT_VMMCALL, + "IOPM"); TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, - addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR, - "IOPM"); + addr_beyond_limit - 2 * PAGE_SIZE, SVM_EXIT_ERR, + "IOPM"); TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, - addr_beyond_limit - PAGE_SIZE, SVM_EXIT_ERR, - "IOPM"); + addr_beyond_limit - PAGE_SIZE, SVM_EXIT_ERR, + "IOPM"); addr = virt_to_phys(io_bitmap) & (~((1ull << 11) - 1)); TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, addr, - SVM_EXIT_VMMCALL, "IOPM"); + SVM_EXIT_VMMCALL, "IOPM"); addr |= (1ull << 12) - 1; TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, addr, - SVM_EXIT_VMMCALL, "IOPM"); + SVM_EXIT_VMMCALL, "IOPM"); vmcb->control.intercept = saved_intercept; } @@ -2382,22 +2382,22 @@ static void test_msrpm_iopm_bitmap_addrs(void) * Unlike VMSAVE, VMRUN seems not to update the value of noncanonical * segment bases in the VMCB. 
However, VMENTRY succeeds as documented. */ -#define TEST_CANONICAL_VMRUN(seg_base, msg) \ - saved_addr = seg_base; \ +#define TEST_CANONICAL_VMRUN(seg_base, msg) \ + saved_addr = seg_base; \ seg_base = (seg_base & ((1ul << addr_limit) - 1)) | noncanonical_mask; \ - return_value = svm_vmrun(); \ - report(return_value == SVM_EXIT_VMMCALL, \ - "Successful VMRUN with noncanonical %s.base", msg); \ + return_value = svm_vmrun(); \ + report(return_value == SVM_EXIT_VMMCALL, \ + "Successful VMRUN with noncanonical %s.base", msg); \ seg_base = saved_addr; -#define TEST_CANONICAL_VMLOAD(seg_base, msg) \ - saved_addr = seg_base; \ +#define TEST_CANONICAL_VMLOAD(seg_base, msg) \ + saved_addr = seg_base; \ seg_base = (seg_base & ((1ul << addr_limit) - 1)) | noncanonical_mask; \ - asm volatile ("vmload %0" : : "a"(vmcb_phys) : "memory"); \ - asm volatile ("vmsave %0" : : "a"(vmcb_phys) : "memory"); \ - report(is_canonical(seg_base), \ - "Test %s.base for canonical form: %lx", msg, seg_base); \ + asm volatile ("vmload %0" : : "a"(vmcb_phys) : "memory"); \ + asm volatile ("vmsave %0" : : "a"(vmcb_phys) : "memory"); \ + report(is_canonical(seg_base), \ + "Test %s.base for canonical form: %lx", msg, seg_base); \ seg_base = saved_addr; static void test_canonicalization(void) @@ -2477,7 +2477,7 @@ static void svm_test_singlestep(void) vmcb->save.rflags |= X86_EFLAGS_TF; report (__svm_vmrun((u64)guest_rflags_test_guest) == SVM_EXIT_VMMCALL && guest_rflags_test_trap_rip == (u64)&insn2, - "Test EFLAGS.TF on VMRUN: trap expected after completion of first guest instruction"); + "Test EFLAGS.TF on VMRUN: trap expected after completion of first guest instruction"); /* * No trap expected */ @@ -2513,52 +2513,52 @@ static unsigned long volatile physical = 0; static void gp_isr(struct ex_regs *r) { - svm_errata_reproduced = true; - /* skip over the vmsave instruction*/ - r->rip += 3; + svm_errata_reproduced = true; + /* skip over the vmsave instruction*/ + r->rip += 3; } static void svm_vmrun_errata_test(void) { - unsigned long *last_page = NULL; + unsigned long *last_page = NULL; - handle_exception(GP_VECTOR, gp_isr); + handle_exception(GP_VECTOR, gp_isr); - while (!svm_errata_reproduced) { + while (!svm_errata_reproduced) { - unsigned long *page = alloc_pages(1); + unsigned long *page = alloc_pages(1); - if (!page) { - report_pass("All guest memory tested, no bug found"); - break; - } + if (!page) { + report_pass("All guest memory tested, no bug found"); + break; + } - physical = virt_to_phys(page); + physical = virt_to_phys(page); - asm volatile ( - "mov %[_physical], %%rax\n\t" - "vmsave %%rax\n\t" + asm volatile ( + "mov %[_physical], %%rax\n\t" + "vmsave %%rax\n\t" - : [_physical] "=m" (physical) - : /* no inputs*/ - : "rax" /*clobbers*/ - ); + : [_physical] "=m" (physical) + : /* no inputs*/ + : "rax" /*clobbers*/ + ); - if (svm_errata_reproduced) { - report_fail("Got #GP exception - svm errata reproduced at 0x%lx", - physical); - break; - } + if (svm_errata_reproduced) { + report_fail("Got #GP exception - svm errata reproduced at 0x%lx", + physical); + break; + } - *page = (unsigned long)last_page; - last_page = page; - } + *page = (unsigned long)last_page; + last_page = page; + } - while (last_page) { - unsigned long *page = last_page; - last_page = (unsigned long *)*last_page; - free_pages_by_order(page, 1); - } + while (last_page) { + unsigned long *page = last_page; + last_page = (unsigned long *)*last_page; + free_pages_by_order(page, 1); + } } static void vmload_vmsave_guest_main(struct svm_test *test) 
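A note on the TEST_CANONICAL_VMRUN/TEST_CANONICAL_VMLOAD macros above: they make a segment base noncanonical by keeping its low addr_limit bits and overwriting the bits above them with noncanonical_mask. Below is a minimal standalone sketch of that construction, assuming the mask is built from an alternating-bit constant such as kvm-unit-tests' NONCANONICAL; make_noncanonical() and its vaddr_bits parameter are illustrations standing in for the test's addr_limit, which is 48, or 57 when X86_FEATURE_LA57 is set.

	/* Sketch: force an arbitrary address to be noncanonical. */
	static u64 make_noncanonical(u64 addr, unsigned int vaddr_bits)
	{
		u64 low_mask = (1ull << vaddr_bits) - 1;

		/* An alternating pattern in bits 63:vaddr_bits can never be a
		 * sign extension of bit (vaddr_bits - 1), so the result is
		 * noncanonical no matter what addr was. */
		return (addr & low_mask) | (0xaaaaaaaaaaaaaaaaull & ~low_mask);
	}

TEST_CANONICAL_VMLOAD then runs VMLOAD followed by VMSAVE on such a base and uses is_canonical() to check that the hardware stored it back in canonical form.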
@@ -2583,7 +2583,7 @@ static void svm_vmload_vmsave(void) vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE); svm_vmrun(); report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test " - "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT"); + "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT"); /* * Enabling intercept for VMLOAD and VMSAVE causes respective @@ -2592,102 +2592,101 @@ static void svm_vmload_vmsave(void) vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD); svm_vmrun(); report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test " - "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT"); + "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT"); vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD); vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE); svm_vmrun(); report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test " - "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT"); + "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT"); vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE); svm_vmrun(); report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test " - "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT"); + "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT"); vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD); svm_vmrun(); report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test " - "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT"); + "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT"); vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD); svm_vmrun(); report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test " - "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT"); + "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT"); vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE); svm_vmrun(); report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test " - "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT"); + "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT"); vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE); svm_vmrun(); report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test " - "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT"); + "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT"); vmcb->control.intercept = intercept_saved; } static void prepare_vgif_enabled(struct svm_test *test) { - default_prepare(test); + default_prepare(test); } static void test_vgif(struct svm_test *test) { - asm volatile ("vmmcall\n\tstgi\n\tvmmcall\n\tclgi\n\tvmmcall\n\t"); - + asm volatile ("vmmcall\n\tstgi\n\tvmmcall\n\tclgi\n\tvmmcall\n\t"); } static bool vgif_finished(struct svm_test *test) { - switch (get_test_stage(test)) - { - case 0: - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VMEXIT not due to vmmcall."); - return true; - } - vmcb->control.int_ctl |= V_GIF_ENABLED_MASK; - vmcb->save.rip += 3; - inc_test_stage(test); - break; - case 1: - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VMEXIT not due to vmmcall."); - return true; - } - if (!(vmcb->control.int_ctl & V_GIF_MASK)) { - report_fail("Failed to set VGIF when executing STGI."); - vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK; - return true; - } - report_pass("STGI set VGIF bit."); - vmcb->save.rip += 3; - inc_test_stage(test); - break; - case 2: - if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { - report_fail("VMEXIT not due to vmmcall."); - return true; - } - if (vmcb->control.int_ctl & V_GIF_MASK) { - report_fail("Failed to clear VGIF when executing CLGI."); - vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK; - return true; - } - report_pass("CLGI cleared VGIF bit."); - 
vmcb->save.rip += 3; - inc_test_stage(test); - vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK; - break; - default: - return true; - break; - } - - return get_test_stage(test) == 3; + switch (get_test_stage(test)) + { + case 0: + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VMEXIT not due to vmmcall."); + return true; + } + vmcb->control.int_ctl |= V_GIF_ENABLED_MASK; + vmcb->save.rip += 3; + inc_test_stage(test); + break; + case 1: + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VMEXIT not due to vmmcall."); + return true; + } + if (!(vmcb->control.int_ctl & V_GIF_MASK)) { + report_fail("Failed to set VGIF when executing STGI."); + vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK; + return true; + } + report_pass("STGI set VGIF bit."); + vmcb->save.rip += 3; + inc_test_stage(test); + break; + case 2: + if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { + report_fail("VMEXIT not due to vmmcall."); + return true; + } + if (vmcb->control.int_ctl & V_GIF_MASK) { + report_fail("Failed to clear VGIF when executing CLGI."); + vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK; + return true; + } + report_pass("CLGI cleared VGIF bit."); + vmcb->save.rip += 3; + inc_test_stage(test); + vmcb->control.int_ctl &= ~V_GIF_ENABLED_MASK; + break; + default: + return true; + break; + } + + return get_test_stage(test) == 3; } static bool vgif_check(struct svm_test *test) { - return get_test_stage(test) == 3; + return get_test_stage(test) == 3; } @@ -2696,66 +2695,66 @@ static int wait_counter; static void pause_filter_test_guest_main(struct svm_test *test) { - int i; - for (i = 0 ; i < pause_test_counter ; i++) - pause(); + int i; + for (i = 0 ; i < pause_test_counter ; i++) + pause(); - if (!wait_counter) - return; + if (!wait_counter) + return; - for (i = 0; i < wait_counter; i++) - ; + for (i = 0; i < wait_counter; i++) + ; - for (i = 0 ; i < pause_test_counter ; i++) - pause(); + for (i = 0 ; i < pause_test_counter ; i++) + pause(); } static void pause_filter_run_test(int pause_iterations, int filter_value, int wait_iterations, int threshold) { - test_set_guest(pause_filter_test_guest_main); + test_set_guest(pause_filter_test_guest_main); - pause_test_counter = pause_iterations; - wait_counter = wait_iterations; + pause_test_counter = pause_iterations; + wait_counter = wait_iterations; - vmcb->control.pause_filter_count = filter_value; - vmcb->control.pause_filter_thresh = threshold; - svm_vmrun(); + vmcb->control.pause_filter_count = filter_value; + vmcb->control.pause_filter_thresh = threshold; + svm_vmrun(); - if (filter_value <= pause_iterations || wait_iterations < threshold) - report(vmcb->control.exit_code == SVM_EXIT_PAUSE, "expected PAUSE vmexit"); - else - report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "no expected PAUSE vmexit"); + if (filter_value <= pause_iterations || wait_iterations < threshold) + report(vmcb->control.exit_code == SVM_EXIT_PAUSE, "expected PAUSE vmexit"); + else + report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "expected no PAUSE vmexit"); } static void pause_filter_test(void) { - if (!pause_filter_supported()) { - report_skip("PAUSE filter not supported in the guest"); - return; - } + if (!pause_filter_supported()) { + report_skip("PAUSE filter not supported in the guest"); + return; + } - vmcb->control.intercept |= (1 << INTERCEPT_PAUSE); + vmcb->control.intercept |= (1 << INTERCEPT_PAUSE); - // filter count more that pause count - no VMexit - pause_filter_run_test(10, 9, 0, 0); + // filter count smaller than pause count - expect PAUSE vmexit +
pause_filter_run_test(10, 9, 0, 0); - // filter count smaller pause count - no VMexit - pause_filter_run_test(20, 21, 0, 0); + // filter count larger than pause count - no VMexit + pause_filter_run_test(20, 21, 0, 0); - if (pause_threshold_supported()) { - // filter count smaller pause count - no VMexit + large enough threshold - // so that filter counter resets - pause_filter_run_test(20, 21, 1000, 10); + if (pause_threshold_supported()) { + // filter count larger than pause count, and the wait between PAUSE + // bursts exceeds the threshold so the filter counter resets - no VMexit + pause_filter_run_test(20, 21, 1000, 10); - // filter count smaller pause count - no VMexit + small threshold - // so that filter doesn't reset - pause_filter_run_test(20, 21, 10, 1000); - } else { - report_skip("PAUSE threshold not supported in the guest"); - return; - } + // filter count larger than pause count, but the wait stays below the + // threshold so the counter doesn't reset - expect PAUSE vmexit + pause_filter_run_test(20, 21, 10, 1000); + } else { + report_skip("PAUSE threshold not supported in the guest"); + return; + } } @@ -2763,81 +2762,81 @@ static int of_test_counter; static void guest_test_of_handler(struct ex_regs *r) { - of_test_counter++; + of_test_counter++; } static void svm_of_test_guest(struct svm_test *test) { - struct far_pointer32 fp = { - .offset = (uintptr_t)&&into, - .selector = KERNEL_CS32, - }; - uintptr_t rsp; + struct far_pointer32 fp = { + .offset = (uintptr_t)&&into, + .selector = KERNEL_CS32, + }; + uintptr_t rsp; - asm volatile ("mov %%rsp, %0" : "=r"(rsp)); + asm volatile ("mov %%rsp, %0" : "=r"(rsp)); - if (fp.offset != (uintptr_t)&&into) { - printf("Codee address too high.\n"); - return; - } + if (fp.offset != (uintptr_t)&&into) { + printf("Code address too high.\n"); + return; + } - if ((u32)rsp != rsp) { - printf("Stack address too high.\n"); - } + if ((u32)rsp != rsp) { + printf("Stack address too high.\n"); + } - asm goto("lcall *%0" : : "m" (fp) : "rax" : into); - return; + asm goto("lcall *%0" : : "m" (fp) : "rax" : into); + return; into: - asm volatile (".code32;" - "movl $0x7fffffff, %eax;" - "addl %eax, %eax;" - "into;" - "lret;" - ".code64"); - __builtin_unreachable(); + asm volatile (".code32;" + "movl $0x7fffffff, %eax;" + "addl %eax, %eax;" + "into;" + "lret;" + ".code64"); + __builtin_unreachable(); } static void svm_into_test(void) { - handle_exception(OF_VECTOR, guest_test_of_handler); - test_set_guest(svm_of_test_guest); - report(svm_vmrun() == SVM_EXIT_VMMCALL && of_test_counter == 1, - "#OF is generated in L2 exception handler0"); + handle_exception(OF_VECTOR, guest_test_of_handler); + test_set_guest(svm_of_test_guest); + report(svm_vmrun() == SVM_EXIT_VMMCALL && of_test_counter == 1, + "#OF is generated in L2 exception handler"); } static int bp_test_counter; static void guest_test_bp_handler(struct ex_regs *r) { - bp_test_counter++; + bp_test_counter++; } static void svm_bp_test_guest(struct svm_test *test) { - asm volatile("int3"); + asm volatile("int3"); } static void svm_int3_test(void) { - handle_exception(BP_VECTOR, guest_test_bp_handler); - test_set_guest(svm_bp_test_guest); - report(svm_vmrun() == SVM_EXIT_VMMCALL && bp_test_counter == 1, - "#BP is handled in L2 exception handler"); + handle_exception(BP_VECTOR, guest_test_bp_handler); + test_set_guest(svm_bp_test_guest); + report(svm_vmrun() == SVM_EXIT_VMMCALL && bp_test_counter == 1, + "#BP is handled in L2 exception handler"); } static int nm_test_counter; static void guest_test_nm_handler(struct ex_regs *r) { - nm_test_counter++; - write_cr0(read_cr0() &
~X86_CR0_TS); - write_cr0(read_cr0() & ~X86_CR0_EM); + nm_test_counter++; + write_cr0(read_cr0() & ~X86_CR0_TS); + write_cr0(read_cr0() & ~X86_CR0_EM); } static void svm_nm_test_guest(struct svm_test *test) { - asm volatile("fnop"); + asm volatile("fnop"); } /* This test checks that: @@ -2854,24 +2853,23 @@ static void svm_nm_test_guest(struct svm_test *test) static void svm_nm_test(void) { - handle_exception(NM_VECTOR, guest_test_nm_handler); - write_cr0(read_cr0() & ~X86_CR0_TS); - test_set_guest(svm_nm_test_guest); + handle_exception(NM_VECTOR, guest_test_nm_handler); + write_cr0(read_cr0() & ~X86_CR0_TS); + test_set_guest(svm_nm_test_guest); - vmcb->save.cr0 = vmcb->save.cr0 | X86_CR0_TS; - report(svm_vmrun() == SVM_EXIT_VMMCALL && nm_test_counter == 1, - "fnop with CR0.TS set in L2, #NM is triggered"); + vmcb->save.cr0 = vmcb->save.cr0 | X86_CR0_TS; + report(svm_vmrun() == SVM_EXIT_VMMCALL && nm_test_counter == 1, + "fnop with CR0.TS set in L2, #NM is triggered"); - vmcb->save.cr0 = (vmcb->save.cr0 & ~X86_CR0_TS) | X86_CR0_EM; - report(svm_vmrun() == SVM_EXIT_VMMCALL && nm_test_counter == 2, - "fnop with CR0.EM set in L2, #NM is triggered"); + vmcb->save.cr0 = (vmcb->save.cr0 & ~X86_CR0_TS) | X86_CR0_EM; + report(svm_vmrun() == SVM_EXIT_VMMCALL && nm_test_counter == 2, + "fnop with CR0.EM set in L2, #NM is triggered"); - vmcb->save.cr0 = vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM); - report(svm_vmrun() == SVM_EXIT_VMMCALL && nm_test_counter == 2, - "fnop with CR0.TS and CR0.EM unset no #NM exception"); + vmcb->save.cr0 = vmcb->save.cr0 & ~(X86_CR0_TS | X86_CR0_EM); + report(svm_vmrun() == SVM_EXIT_VMMCALL && nm_test_counter == 2, + "fnop with CR0.TS and CR0.EM unset no #NM exception"); } - static bool check_lbr(u64 *from_excepted, u64 *to_expected) { u64 from = rdmsr(MSR_IA32_LASTBRANCHFROMIP); @@ -2879,13 +2877,13 @@ static bool check_lbr(u64 *from_excepted, u64 *to_expected) if ((u64)from_excepted != from) { report(false, "MSR_IA32_LASTBRANCHFROMIP, expected=0x%lx, actual=0x%lx", - (u64)from_excepted, from); + (u64)from_excepted, from); return false; } if ((u64)to_expected != to) { report(false, "MSR_IA32_LASTBRANCHFROMIP, expected=0x%lx, actual=0x%lx", - (u64)from_excepted, from); + (u64)to_expected, to); return false; } @@ -2902,15 +2900,15 @@ static bool check_dbgctl(u64 dbgctl, u64 dbgctl_expected) } -#define DO_BRANCH(branch_name) \ - asm volatile ( \ - # branch_name "_from:" \ - "jmp " # branch_name "_to\n" \ - "nop\n" \ - "nop\n" \ - # branch_name "_to:" \ - "nop\n" \ - ) +#define DO_BRANCH(branch_name) \ + asm volatile ( \ + # branch_name "_from:" \ + "jmp " # branch_name "_to\n" \ + "nop\n" \ + "nop\n" \ + # branch_name "_to:" \ + "nop\n" \ + ) extern u64 guest_branch0_from, guest_branch0_to; @@ -3010,7 +3008,7 @@ static void svm_lbrv_test1(void) if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x", - vmcb->control.exit_code); + vmcb->control.exit_code); return; } @@ -3034,7 +3032,7 @@ static void svm_lbrv_test2(void) if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x", - vmcb->control.exit_code); + vmcb->control.exit_code); return; } @@ -3062,7 +3060,7 @@ static void svm_lbrv_nested_test1(void) if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { report(false, "VMEXIT not due to vmmcall.
Exit reason 0x%x", - vmcb->control.exit_code); + vmcb->control.exit_code); return; } @@ -3074,6 +3072,7 @@ static void svm_lbrv_nested_test1(void) check_dbgctl(dbgctl, DEBUGCTLMSR_LBR); check_lbr(&host_branch3_from, &host_branch3_to); } + static void svm_lbrv_nested_test2(void) { if (!lbrv_supported()) { @@ -3097,7 +3096,7 @@ static void svm_lbrv_nested_test2(void) if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) { report(false, "VMEXIT not due to vmmcall. Exit reason 0x%x", - vmcb->control.exit_code); + vmcb->control.exit_code); return; } @@ -3206,8 +3205,6 @@ static void svm_intr_intercept_mix_gif(void) svm_intr_intercept_mix_run_guest(&dummy_isr_recevied, SVM_EXIT_INTR); } - - // subtest: test that a clever guest can trigger an interrupt by setting GIF // if GIF is not intercepted and interrupt comes after guest // started running @@ -3296,121 +3293,121 @@ static void svm_intr_intercept_mix_smi(void) int main(int ac, char **av) { - setup_vm(); - return run_svm_tests(ac, av); + setup_vm(); + return run_svm_tests(ac, av); } struct svm_test svm_tests[] = { - { "null", default_supported, default_prepare, - default_prepare_gif_clear, null_test, - default_finished, null_check }, - { "vmrun", default_supported, default_prepare, - default_prepare_gif_clear, test_vmrun, - default_finished, check_vmrun }, - { "ioio", default_supported, prepare_ioio, - default_prepare_gif_clear, test_ioio, - ioio_finished, check_ioio }, - { "vmrun intercept check", default_supported, prepare_no_vmrun_int, - default_prepare_gif_clear, null_test, default_finished, - check_no_vmrun_int }, - { "rsm", default_supported, - prepare_rsm_intercept, default_prepare_gif_clear, - test_rsm_intercept, finished_rsm_intercept, check_rsm_intercept }, - { "cr3 read intercept", default_supported, - prepare_cr3_intercept, default_prepare_gif_clear, - test_cr3_intercept, default_finished, check_cr3_intercept }, - { "cr3 read nointercept", default_supported, default_prepare, - default_prepare_gif_clear, test_cr3_intercept, default_finished, - check_cr3_nointercept }, - { "cr3 read intercept emulate", smp_supported, - prepare_cr3_intercept_bypass, default_prepare_gif_clear, - test_cr3_intercept_bypass, default_finished, check_cr3_intercept }, - { "dr intercept check", default_supported, prepare_dr_intercept, - default_prepare_gif_clear, test_dr_intercept, dr_intercept_finished, - check_dr_intercept }, - { "next_rip", next_rip_supported, prepare_next_rip, - default_prepare_gif_clear, test_next_rip, - default_finished, check_next_rip }, - { "msr intercept check", default_supported, prepare_msr_intercept, - default_prepare_gif_clear, test_msr_intercept, - msr_intercept_finished, check_msr_intercept }, - { "mode_switch", default_supported, prepare_mode_switch, - default_prepare_gif_clear, test_mode_switch, - mode_switch_finished, check_mode_switch }, - { "asid_zero", default_supported, prepare_asid_zero, - default_prepare_gif_clear, test_asid_zero, - default_finished, check_asid_zero }, - { "sel_cr0_bug", default_supported, sel_cr0_bug_prepare, - default_prepare_gif_clear, sel_cr0_bug_test, - sel_cr0_bug_finished, sel_cr0_bug_check }, - { "tsc_adjust", tsc_adjust_supported, tsc_adjust_prepare, - default_prepare_gif_clear, tsc_adjust_test, - default_finished, tsc_adjust_check }, - { "latency_run_exit", default_supported, latency_prepare, - default_prepare_gif_clear, latency_test, - latency_finished, latency_check }, - { "latency_run_exit_clean", default_supported, latency_prepare, - default_prepare_gif_clear, latency_test, - 
latency_finished_clean, latency_check }, - { "latency_svm_insn", default_supported, lat_svm_insn_prepare, - default_prepare_gif_clear, null_test, - lat_svm_insn_finished, lat_svm_insn_check }, - { "exc_inject", default_supported, exc_inject_prepare, - default_prepare_gif_clear, exc_inject_test, - exc_inject_finished, exc_inject_check }, - { "pending_event", default_supported, pending_event_prepare, - default_prepare_gif_clear, - pending_event_test, pending_event_finished, pending_event_check }, - { "pending_event_cli", default_supported, pending_event_cli_prepare, - pending_event_cli_prepare_gif_clear, - pending_event_cli_test, pending_event_cli_finished, - pending_event_cli_check }, - { "interrupt", default_supported, interrupt_prepare, - default_prepare_gif_clear, interrupt_test, - interrupt_finished, interrupt_check }, - { "nmi", default_supported, nmi_prepare, - default_prepare_gif_clear, nmi_test, - nmi_finished, nmi_check }, - { "nmi_hlt", smp_supported, nmi_prepare, - default_prepare_gif_clear, nmi_hlt_test, - nmi_hlt_finished, nmi_hlt_check }, - { "virq_inject", default_supported, virq_inject_prepare, - default_prepare_gif_clear, virq_inject_test, - virq_inject_finished, virq_inject_check }, - { "reg_corruption", default_supported, reg_corruption_prepare, - default_prepare_gif_clear, reg_corruption_test, - reg_corruption_finished, reg_corruption_check }, - { "svm_init_startup_test", smp_supported, init_startup_prepare, - default_prepare_gif_clear, null_test, - init_startup_finished, init_startup_check }, - { "svm_init_intercept_test", smp_supported, init_intercept_prepare, - default_prepare_gif_clear, init_intercept_test, - init_intercept_finished, init_intercept_check, .on_vcpu = 2 }, - { "host_rflags", default_supported, host_rflags_prepare, - host_rflags_prepare_gif_clear, host_rflags_test, - host_rflags_finished, host_rflags_check }, - { "vgif", vgif_supported, prepare_vgif_enabled, - default_prepare_gif_clear, test_vgif, vgif_finished, - vgif_check }, - TEST(svm_cr4_osxsave_test), - TEST(svm_guest_state_test), - TEST(svm_vmrun_errata_test), - TEST(svm_vmload_vmsave), - TEST(svm_test_singlestep), - TEST(svm_nm_test), - TEST(svm_int3_test), - TEST(svm_into_test), - TEST(svm_lbrv_test0), - TEST(svm_lbrv_test1), - TEST(svm_lbrv_test2), - TEST(svm_lbrv_nested_test1), - TEST(svm_lbrv_nested_test2), - TEST(svm_intr_intercept_mix_if), - TEST(svm_intr_intercept_mix_gif), - TEST(svm_intr_intercept_mix_gif2), - TEST(svm_intr_intercept_mix_nmi), - TEST(svm_intr_intercept_mix_smi), - TEST(svm_tsc_scale_test), - TEST(pause_filter_test), - { NULL, NULL, NULL, NULL, NULL, NULL, NULL } + { "null", default_supported, default_prepare, + default_prepare_gif_clear, null_test, + default_finished, null_check }, + { "vmrun", default_supported, default_prepare, + default_prepare_gif_clear, test_vmrun, + default_finished, check_vmrun }, + { "ioio", default_supported, prepare_ioio, + default_prepare_gif_clear, test_ioio, + ioio_finished, check_ioio }, + { "vmrun intercept check", default_supported, prepare_no_vmrun_int, + default_prepare_gif_clear, null_test, default_finished, + check_no_vmrun_int }, + { "rsm", default_supported, + prepare_rsm_intercept, default_prepare_gif_clear, + test_rsm_intercept, finished_rsm_intercept, check_rsm_intercept }, + { "cr3 read intercept", default_supported, + prepare_cr3_intercept, default_prepare_gif_clear, + test_cr3_intercept, default_finished, check_cr3_intercept }, + { "cr3 read nointercept", default_supported, default_prepare, + default_prepare_gif_clear, 
test_cr3_intercept, default_finished, + check_cr3_nointercept }, + { "cr3 read intercept emulate", smp_supported, + prepare_cr3_intercept_bypass, default_prepare_gif_clear, + test_cr3_intercept_bypass, default_finished, check_cr3_intercept }, + { "dr intercept check", default_supported, prepare_dr_intercept, + default_prepare_gif_clear, test_dr_intercept, dr_intercept_finished, + check_dr_intercept }, + { "next_rip", next_rip_supported, prepare_next_rip, + default_prepare_gif_clear, test_next_rip, + default_finished, check_next_rip }, + { "msr intercept check", default_supported, prepare_msr_intercept, + default_prepare_gif_clear, test_msr_intercept, + msr_intercept_finished, check_msr_intercept }, + { "mode_switch", default_supported, prepare_mode_switch, + default_prepare_gif_clear, test_mode_switch, + mode_switch_finished, check_mode_switch }, + { "asid_zero", default_supported, prepare_asid_zero, + default_prepare_gif_clear, test_asid_zero, + default_finished, check_asid_zero }, + { "sel_cr0_bug", default_supported, sel_cr0_bug_prepare, + default_prepare_gif_clear, sel_cr0_bug_test, + sel_cr0_bug_finished, sel_cr0_bug_check }, + { "tsc_adjust", tsc_adjust_supported, tsc_adjust_prepare, + default_prepare_gif_clear, tsc_adjust_test, + default_finished, tsc_adjust_check }, + { "latency_run_exit", default_supported, latency_prepare, + default_prepare_gif_clear, latency_test, + latency_finished, latency_check }, + { "latency_run_exit_clean", default_supported, latency_prepare, + default_prepare_gif_clear, latency_test, + latency_finished_clean, latency_check }, + { "latency_svm_insn", default_supported, lat_svm_insn_prepare, + default_prepare_gif_clear, null_test, + lat_svm_insn_finished, lat_svm_insn_check }, + { "exc_inject", default_supported, exc_inject_prepare, + default_prepare_gif_clear, exc_inject_test, + exc_inject_finished, exc_inject_check }, + { "pending_event", default_supported, pending_event_prepare, + default_prepare_gif_clear, + pending_event_test, pending_event_finished, pending_event_check }, + { "pending_event_cli", default_supported, pending_event_cli_prepare, + pending_event_cli_prepare_gif_clear, + pending_event_cli_test, pending_event_cli_finished, + pending_event_cli_check }, + { "interrupt", default_supported, interrupt_prepare, + default_prepare_gif_clear, interrupt_test, + interrupt_finished, interrupt_check }, + { "nmi", default_supported, nmi_prepare, + default_prepare_gif_clear, nmi_test, + nmi_finished, nmi_check }, + { "nmi_hlt", smp_supported, nmi_prepare, + default_prepare_gif_clear, nmi_hlt_test, + nmi_hlt_finished, nmi_hlt_check }, + { "virq_inject", default_supported, virq_inject_prepare, + default_prepare_gif_clear, virq_inject_test, + virq_inject_finished, virq_inject_check }, + { "reg_corruption", default_supported, reg_corruption_prepare, + default_prepare_gif_clear, reg_corruption_test, + reg_corruption_finished, reg_corruption_check }, + { "svm_init_startup_test", smp_supported, init_startup_prepare, + default_prepare_gif_clear, null_test, + init_startup_finished, init_startup_check }, + { "svm_init_intercept_test", smp_supported, init_intercept_prepare, + default_prepare_gif_clear, init_intercept_test, + init_intercept_finished, init_intercept_check, .on_vcpu = 2 }, + { "host_rflags", default_supported, host_rflags_prepare, + host_rflags_prepare_gif_clear, host_rflags_test, + host_rflags_finished, host_rflags_check }, + { "vgif", vgif_supported, prepare_vgif_enabled, + default_prepare_gif_clear, test_vgif, vgif_finished, + vgif_check }, + 
TEST(svm_cr4_osxsave_test), + TEST(svm_guest_state_test), + TEST(svm_vmrun_errata_test), + TEST(svm_vmload_vmsave), + TEST(svm_test_singlestep), + TEST(svm_nm_test), + TEST(svm_int3_test), + TEST(svm_into_test), + TEST(svm_lbrv_test0), + TEST(svm_lbrv_test1), + TEST(svm_lbrv_test2), + TEST(svm_lbrv_nested_test1), + TEST(svm_lbrv_nested_test2), + TEST(svm_intr_intercept_mix_if), + TEST(svm_intr_intercept_mix_gif), + TEST(svm_intr_intercept_mix_gif2), + TEST(svm_intr_intercept_mix_nmi), + TEST(svm_intr_intercept_mix_smi), + TEST(svm_tsc_scale_test), + TEST(pause_filter_test), + { NULL, NULL, NULL, NULL, NULL, NULL, NULL } };
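A closing note on the table above: entries come in two styles. The brace-initialized descriptors fill in all seven fields (the test name plus the supported/prepare/prepare_gif_clear/guest/finished/check callbacks), while the TEST() wrapper registers a test that drives everything itself through test_set_guest() and svm_vmrun(), the way svm_int3_test does earlier in this file. As a sketch only, a hypothetical new test of the second style could look like this; svm_example_guest/svm_example_test are illustrative names, and the sketch assumes, as svm_int3_test already does, that the harness executes vmmcall once the guest function returns:

	static void svm_example_guest(struct svm_test *test)
	{
		/* Nothing to do: returning lets the harness reach its
		 * trailing vmmcall, which becomes the expected #VMEXIT. */
	}

	static void svm_example_test(void)
	{
		test_set_guest(svm_example_guest);
		report(svm_vmrun() == SVM_EXIT_VMMCALL,
		       "example: clean VMMCALL exit");
	}

Registering it would take one more TEST(svm_example_test), line ahead of the NULL terminator.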