From patchwork Mon Jul 29 11:56:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 11063635 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 18D4B13A4 for ; Mon, 29 Jul 2019 11:56:36 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 08702286F3 for ; Mon, 29 Jul 2019 11:56:36 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E7ED628746; Mon, 29 Jul 2019 11:56:35 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 8C3612871F for ; Mon, 29 Jul 2019 11:56:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:In-Reply-To:References: Message-ID:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=LhiZBfjjjQqEYsyyIsLT57n62sPOPTEEPUO3h1iDOeI=; b=TOQYMbYX/R24PB P11VRtxWKTWokPLgvO24QAw6G83lIzU71WwyRcDPPCamC6E/O1DJduLbJr2UXUt+SUITi38Kj0Tap ylnffq7LiRI4D7hDFz9MZ8bv4rGPk8yOtHITh5OjdaJ7VnWvq/j+owMwGuepx4LmMwd7SMaUWXDCJ cNOAz9C+0I9ZEV5CKKCMEuWXzeiV9trUgTvnKsjkJR3EMzv/wHLzO0Bsrwb4RDVX6lWdWJQMVNOmu t0DMK1wOkR2UR5Aqbtg5J1KCODhiu2QZKVdKfhTUyGqC6aCvn8EooJn0Q8HOu23zueHg55Luz3uJn deqQydZBt3k6ViUGKOKQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hs4GV-0002IF-LP; Mon, 29 Jul 2019 11:56:31 +0000 Received: from esa2.hgst.iphmx.com ([68.232.143.124]) by bombadil.infradead.org with esmtps (Exim 4.92 #3 (Red Hat Linux)) id 1hs4GT-0002Es-TZ for linux-riscv@lists.infradead.org; Mon, 29 Jul 2019 11:56:31 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1564401401; x=1595937401; h=from:to:cc:subject:date:message-id:references: in-reply-to:content-transfer-encoding:mime-version; bh=hH7etl79ZpfJxHHPqbla+Okggd0CtmJEM47yF9QZBpM=; b=nBuC2ajMtl6KEdEF94k1L0cZrmqMfYfcSUS69EQHAxQxO6hkBFM9c1Ui v/uRnAfbj4LktD+NpQp/xQHyEFBoORKX9imqu9rPdkBhzxTsGidZ15Kkm yQZbXzDnpgLxqzuRSgvpGxBDEFgi32DbHzdlbeVxfk33KmBfNK+fv5llG RIu9/C/tLk3FNFBb6lp7Si6DJgBhxq6y+ZHOhXXMpXsWrI3ViK4jemTai lyUZPj4MTwmkQgkeBI10Ar6LM+VKHET34lDuvG+AD/xz/203kwaRltBEl ZCyiDAco/NqD6fFCOmLYbwSNFsCHqizsLnbwkLpOgcp2x0nPdg+TaA6HI A==; IronPort-SDR: z8cNlTar4RhfX6+1PqB16VGcmC87IoJ4E3SwT6WUuFzA2OdZ9hdjf0zgjIrorKFf4EavBwzK7N j5OQMoKgq+TKSoGc606AwI37+djepQViRQt3aokeKolOv9Nqi2JktY1Y4MkEd5/4gVS7BpOUx0 iRbIUvIdY2WDhKK+DAOHiAFsspniXTZVr7XkBdCkk5tqC+AyGNzEonHl40wtOaCR81+BCvbd9B aMQ3JCIPq7YsOcqpaS/Rya9bTS56ovmISg9LRE97T3VBTYrh79yQa3HJ3jOqtT6J0/ZbIq7Raw 4rQ= X-IronPort-AV: E=Sophos;i="5.64,322,1559491200"; d="scan'208";a="214553052" Received: from 
mail-sn1nam04lp2050.outbound.protection.outlook.com (HELO NAM04-SN1-obe.outbound.protection.outlook.com) ([104.47.44.50]) by ob1.hgst.iphmx.com with ESMTP; 29 Jul 2019 19:56:40 +0800 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=ICkHqUxsjfKX7jZAwDfEcOHdYiiYk45mm2bnVngtKkcVifxlQpz7sg/jP62U5w49HO8tye7KAtnAbRyO17Rfe0Zk+LX+T3sPvwQZ0YcaXirIZ/v100MzlZp3hdQa6VA48t3eH7c3qlaw0F7XBDIxgLclWGTbbNjRnJfE/iFGklxZJ/rk3JnEE4GX6QvngXJ4NKKGnUhLJ9e8HYjLgwfu77k94yE+OnXxQy/Emws+B4nuNx8Xj0YDE/ETddHpofmTSFmiETfkXkyA5cu6q6PUvrfNnPopZNEfuu0TaA/0shFVAO+VgHJfwnamZ0YmS9r1EsQ2/GzXk6Cw0K3CLoApXg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=cIf5ar9KJ4QA9rBh1lTCb1fvDMUQSmqrfiZPoUtfzIY=; b=KHaXq5l3SXAlY8G4xvBIuFI+RoTfwVRPf/eQ62GGpb2cOciKuIze9zd/FGJ/72yxAKURzKVnhQPU7P+nETgZ5mXQQNrUZyuHx52fIYGouLQwwpc70OXkwtN2oVAfIFHsRfX8x4poFFfmghN4nEO0UeqziQz5BOkNqHrnvD3qLvTIB9uB0vHHC6Ujc1wkLpGzrat584d01S8IM7WYPKXARr3sI6Mbm+bToUGchYMc36C4c/rMRxgbwSs9JiDLpd6lj4yBzD7UF4HpSKLE4hwjSD84YNM9UKB6kwIWj4KmBhHIRnKxC8qjqeqfgrYPs8767+Q5FrbDbrVTmfbwOzCy2A== ARC-Authentication-Results: i=1; mx.microsoft.com 1;spf=pass smtp.mailfrom=wdc.com;dmarc=pass action=none header.from=wdc.com;dkim=pass header.d=wdc.com;arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=cIf5ar9KJ4QA9rBh1lTCb1fvDMUQSmqrfiZPoUtfzIY=; b=MTmWFyFhMBN8VjL6bMWPOsyMOtIS4qmJ1s6bZnBYvkQwfZsEnB5rRPFRsS/RFY+pTZAe53OB+1RZTL5XOrhe3G2OkS1tErW8mNWv0MhBvm+WRfiCWMAPJ8pxIQBFYo6DLplwdycuPIxlaABqd8GjyZfdF7oJJs7HmE7hJ483xuU= Received: from MN2PR04MB6061.namprd04.prod.outlook.com (20.178.246.15) by MN2PR04MB5952.namprd04.prod.outlook.com (20.179.21.143) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2115.15; Mon, 29 Jul 2019 11:56:28 +0000 Received: from MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8]) by MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8%7]) with mapi id 15.20.2115.005; Mon, 29 Jul 2019 11:56:28 +0000 From: Anup Patel To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K Subject: [RFC PATCH 01/16] KVM: RISC-V: Add KVM_REG_RISCV for ONE_REG interface Thread-Topic: [RFC PATCH 01/16] KVM: RISC-V: Add KVM_REG_RISCV for ONE_REG interface Thread-Index: AQHVRgSnaPlunpy9GkSc6WLWyGHt/Q== Date: Mon, 29 Jul 2019 11:56:27 +0000 Message-ID: <20190729115544.17895-2-anup.patel@wdc.com> References: <20190729115544.17895-1-anup.patel@wdc.com> In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-clientproxiedby: PN1PR01CA0116.INDPRD01.PROD.OUTLOOK.COM (2603:1096:c00::32) To MN2PR04MB6061.namprd04.prod.outlook.com (2603:10b6:208:d8::15) authentication-results: spf=none (sender IP is ) smtp.mailfrom=Anup.Patel@wdc.com; x-ms-exchange-messagesentrepresentingtype: 1 x-mailer: git-send-email 2.17.1 x-originating-ip: [106.51.23.101] x-ms-publictraffictype: Email x-ms-office365-filtering-correlation-id: 45ece69e-3a8e-4874-76e6-08d7141bc996 x-ms-office365-filtering-ht: Tenant x-microsoft-antispam: BCL:0; PCL:0; 
RULEID:(2390118)(7020095)(4652040)(8989299)(4534185)(7168020)(4627221)(201703031133081)(201702281549075)(8990200)(5600148)(711020)(4605104)(1401327)(4618075)(2017052603328)(7193020); SRVR:MN2PR04MB5952; x-ms-traffictypediagnostic: MN2PR04MB5952: x-microsoft-antispam-prvs: wdcipoutbound: EOP-TRUE x-ms-oob-tlc-oobclassifiers: OLM:147; x-forefront-prvs: 01136D2D90 x-forefront-antispam-report: SFV:NSPM; SFS:(10019020)(4636009)(39860400002)(366004)(376002)(396003)(136003)(346002)(189003)(199004)(478600001)(2906002)(446003)(6436002)(486006)(6512007)(53936002)(36756003)(11346002)(2616005)(44832011)(78486014)(386003)(6506007)(102836004)(55236004)(4326008)(71200400001)(476003)(76176011)(71190400001)(9456002)(7736002)(26005)(50226002)(81166006)(81156014)(8676002)(8936002)(186003)(99286004)(6486002)(68736007)(1076003)(256004)(7416002)(305945005)(66446008)(25786009)(66066001)(6116002)(3846002)(52116002)(14454004)(4744005)(316002)(54906003)(86362001)(66556008)(66476007)(110136005)(66946007)(5660300002)(64756008); DIR:OUT; SFP:1102; SCL:1; SRVR:MN2PR04MB5952; H:MN2PR04MB6061.namprd04.prod.outlook.com; FPR:; SPF:None; LANG:en; PTR:InfoNoRecords; MX:1; A:1; x-ms-exchange-senderadcheck: 1 x-microsoft-antispam-message-info: dIKZmULRFFohR2fkbqZaEq/HkdZ6FBdwFfi0XgOAL/hyzk3+Fx3/q4zsCByLRYnIIeTFDoNOuCvhMsqdc7OKawY2WQgSly49t8idN7SFSgHC+286h6rKBlkOSXqYP1tqgCOjLGsTHkiP1uyHdK+85OLZHfBuhVeZKAXgft1SGOBcdumVGAjJQ7RZ++KwlG+CGHTtE34Ru4wMK0hNbeyosl5HPulzAKepEKl3udSjL/0i7kpqADIIoPOopd496cAKGvWS/tQjfdyKi0kNq7yH7s5IOmnrlpkochEX8YTql9hKYw8ZupmLznCAfu9lWyWXWISXaeSlp2FHLGrE65+oITuN3JO/cBQqqAvjhuSKMK/aMjFhEK8j+EcqdJ8td8izftsJIQhj8O2ixcu6haYDAJWCERLdmXPtRdQCpvzwSiM= MIME-Version: 1.0 X-OriginatorOrg: wdc.com X-MS-Exchange-CrossTenant-Network-Message-Id: 45ece69e-3a8e-4874-76e6-08d7141bc996 X-MS-Exchange-CrossTenant-originalarrivaltime: 29 Jul 2019 11:56:27.9135 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86 X-MS-Exchange-CrossTenant-mailboxtype: HOSTED X-MS-Exchange-CrossTenant-userprincipalname: Anup.Patel@wdc.com X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR04MB5952 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190729_045630_006935_8B34C776 X-CRM114-Status: GOOD ( 11.06 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Damien Le Moal , Anup Patel , "kvm@vger.kernel.org" , Anup Patel , Daniel Lezcano , "linux-kernel@vger.kernel.org" , Christoph Hellwig , Atish Patra , Alistair Francis , Thomas Gleixner , "linux-riscv@lists.infradead.org" Sender: "linux-riscv" Errors-To: linux-riscv-bounces+patchwork-linux-riscv=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP We will be using ONE_REG interface accessing VCPU registers from user-space hence we add KVM_REG_RISCV for RISC-V VCPU registers. 
Signed-off-by: Anup Patel --- include/uapi/linux/kvm.h | 1 + 1 file changed, 1 insertion(+) diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index a7c19540ce21..1b918ed94399 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1142,6 +1142,7 @@ struct kvm_dirty_tlb { #define KVM_REG_S390 0x5000000000000000ULL #define KVM_REG_ARM64 0x6000000000000000ULL #define KVM_REG_MIPS 0x7000000000000000ULL +#define KVM_REG_RISCV 0x8000000000000000ULL #define KVM_REG_SIZE_SHIFT 52 #define KVM_REG_SIZE_MASK 0x00f0000000000000ULL From patchwork Mon Jul 29 11:56:35 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 11063639 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 84B3914E5 for ; Mon, 29 Jul 2019 11:56:45 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7378E200E5 for ; Mon, 29 Jul 2019 11:56:45 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 676C7212DA; Mon, 29 Jul 2019 11:56:45 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id F1FEB200E5 for ; Mon, 29 Jul 2019 11:56:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:In-Reply-To:References: Message-ID:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=M8atlw1NGYIXz1ORNVwQE7HWVlVJwL0rZ4zTw78svTE=; b=Oswk8neSrYf/Cy EXs8SS93d0fpaRkPTtgFhzHEgXgwTieGMl+GCjGOfWCddKe9dsd9sGlrqQaZLQODqUN6I0PMIc6Vt X0JiJMuyxJRA1J2poJ9l2kdDvfbwOJKWN+sMstaMG/I13E8Wz9EcJkf5o+dFnLlba+B6aHeUtZhex EEBvBtRcb48YhxLBZ6kveUvKZcHhMzP75zbQAFgOWHL1fsIyGSOR9lnKiX2WegTey1w0BDFQ0UbBp 8LS4PPyOjr3h8pwuD2sXVVsYXwzfgzSIMV8lTdB8eA7k9BxwhWNemnzVR7T5iqlo7uWk4Aipzuo0r /0fGjbPGDCxDeowO7A9g==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hs4Gg-0002NW-Fj; Mon, 29 Jul 2019 11:56:42 +0000 Received: from esa1.hgst.iphmx.com ([68.232.141.245]) by bombadil.infradead.org with esmtps (Exim 4.92 #3 (Red Hat Linux)) id 1hs4Gd-0002Mg-IC for linux-riscv@lists.infradead.org; Mon, 29 Jul 2019 11:56:41 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1564401399; x=1595937399; h=from:to:cc:subject:date:message-id:references: in-reply-to:content-transfer-encoding:mime-version; bh=E1j6Pb427CcQeVW7c3luwWNPehcIlXqi1Wpyb7HstMI=; b=Jp0Ats4Qy8BGeON9IUcJGrfF0jxzEk5psJVZwxblHjBVDCxLHqPSeuUw E1+nyEUzfSq5q95qUJaQbfUOCmozSQLj5R3E48tBsCgMH+kMm9SppvZHe eSUgpcSlFm2uGgVb39eJ3bhY0ubszm3c8+rDjAvzktpoFbI0GR0KfdSfm 
GO8GEGFqhYdFuNeg6gZ3JetUpAZ62/5rPX5VVCPX+8356NPZo+xOBeho1 1TmMoeKaHgggikbiSKd3EpHCL4OkkUmFbXRt5YFCvx7rK5PcdX2u3fp2S Icn8ahpg/xkv+QqFunoSarRrHAAZMGW71Mn/W+AhFr1M94KckdI9hFoWI A==; IronPort-SDR: SaiWdMdSLKMQRTOe+csM07wTKw9OzcIoID9cBspQggZUqbXXyZb0Q2pxH8N8Ct2/xxOmZTAmpF 4ohWFaorw7cKNYSMrY4IzMcuGW28QZW+2BfxE8QK7H/i7BFOyZozDW0TFCgbgJI72+eijgrVY4 7VvQMavKkcntbtBifj6++5dDHj2QsPytg6cHbqQ0tYABlf7d2CvD8J7+iKF+O5BMqxUfWDbyrL 98nKx/XMkL2aa309rqVTo95+ErvuzmRcPAeswlUGr/11l1jF3ZOq9LJN//05xqWNm4fNEU1L+x oW0= X-IronPort-AV: E=Sophos;i="5.64,322,1559491200"; d="scan'208";a="220843314" Received: from mail-co1nam05lp2054.outbound.protection.outlook.com (HELO NAM05-CO1-obe.outbound.protection.outlook.com) ([104.47.48.54]) by ob1.hgst.iphmx.com with ESMTP; 29 Jul 2019 19:56:36 +0800 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=Gdx884s1FRhTP/G+YV24P0z2SZXE3sHj6NKJnB3wi83JKUdJPaMxe42CUE4sC9bdHzo3/keQC3Y8/k3mDA4ljgwC75xNAsgHXaoR44AU0/j9U1nebWfUh0qc4DB6fEU0pUmCyXQK+WRjGsx2Nubbx4GFBdL+m0gLmmTqWK/51XBCkq2dJIpdw/H0YA4JHbfkj+KwD2D4alZf/tPag8++0Mc9y3hgXdOmE9/M0hmE9Eicnzjl9a9wu2QzBGhC7AHngx4aR9bMqxV6yrf7PDDMbSYhi6cMegcdDKVSxqYEff96C8l9nNvrJVc8EgP+SYJIJP8iSe3km2JcfxvC6w1Ceg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=caRpajzSVpBPjhqMi2IoMp9REmOb3dOIVMidSRAcrxg=; b=aAmFe0fq9RyzASzePik76Ts9OZ5T+emG6syrFtIFxBhzHDMmTLQE55w7ZWFEVUcgqkHDimgds+U0yaF7TvVzR3MfN/9OE94sVK2UX2+VICgVIg2DV/r/FrObJjAixPSL0nuYyYaNDMtkV25OMFN0AmEpLePz/xYR3qlm3BMmTFyhFRiLDiCT3PbVN0cvjIsn2R2xRCKAiQ90lbDzzVRpUABDv+XSSwHIJOvNY1/DjoUcotlJ67bbjclvzduOa3e44QQGLMPaNETVfKAorbdINC6s0zVp03WRcjziBO5QFTMgFC/YynxN+FrMO7T3zt7kVZkN4gL7RC57T6CTZ2czXQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1;spf=pass smtp.mailfrom=wdc.com;dmarc=pass action=none header.from=wdc.com;dkim=pass header.d=wdc.com;arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=caRpajzSVpBPjhqMi2IoMp9REmOb3dOIVMidSRAcrxg=; b=fw70i9lf/5A9rU0IHzj4v0097S9ImhBQyYNLx7aygTtc9MwbCOGkqvqejLsqnpbmVyr4JR/egp4NFc8N4QdP/cGNTuKKFgjTvgh6D+jjkawXxWNeYwNYU4ZVsYEb36bvuMgsFc87ugtaXSecDFb7EsXvhyfl0Da22u4r0osjvLY= Received: from MN2PR04MB6061.namprd04.prod.outlook.com (20.178.246.15) by MN2PR04MB5952.namprd04.prod.outlook.com (20.179.21.143) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2115.15; Mon, 29 Jul 2019 11:56:35 +0000 Received: from MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8]) by MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8%7]) with mapi id 15.20.2115.005; Mon, 29 Jul 2019 11:56:35 +0000 From: Anup Patel To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K Subject: [RFC PATCH 02/16] RISC-V: Add hypervisor extension related CSR defines Thread-Topic: [RFC PATCH 02/16] RISC-V: Add hypervisor extension related CSR defines Thread-Index: AQHVRgSrvZppU5q2bE+f/j7a1UqB9A== Date: Mon, 29 Jul 2019 11:56:35 +0000 Message-ID: <20190729115544.17895-3-anup.patel@wdc.com> References: <20190729115544.17895-1-anup.patel@wdc.com> In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-clientproxiedby: PN1PR01CA0116.INDPRD01.PROD.OUTLOOK.COM (2603:1096:c00::32) To 
MN2PR04MB6061.namprd04.prod.outlook.com (2603:10b6:208:d8::15) authentication-results: spf=none (sender IP is ) smtp.mailfrom=Anup.Patel@wdc.com; x-ms-exchange-messagesentrepresentingtype: 1 x-mailer: git-send-email 2.17.1 x-originating-ip: [106.51.23.101] x-ms-publictraffictype: Email x-ms-office365-filtering-correlation-id: 0c98c886-68b4-4145-697b-08d7141bcdf8 x-ms-office365-filtering-ht: Tenant x-microsoft-antispam: BCL:0; PCL:0; RULEID:(2390118)(7020095)(4652040)(8989299)(4534185)(7168020)(4627221)(201703031133081)(201702281549075)(8990200)(5600148)(711020)(4605104)(1401327)(4618075)(2017052603328)(7193020); SRVR:MN2PR04MB5952; x-ms-traffictypediagnostic: MN2PR04MB5952: x-microsoft-antispam-prvs: wdcipoutbound: EOP-TRUE x-ms-oob-tlc-oobclassifiers: OLM:5; x-forefront-prvs: 01136D2D90 x-forefront-antispam-report: SFV:NSPM; SFS:(10019020)(4636009)(39860400002)(366004)(376002)(396003)(136003)(346002)(189003)(199004)(478600001)(2906002)(446003)(6436002)(486006)(6512007)(53936002)(36756003)(11346002)(2616005)(44832011)(78486014)(386003)(6506007)(102836004)(55236004)(4326008)(71200400001)(476003)(76176011)(71190400001)(9456002)(7736002)(26005)(50226002)(81166006)(81156014)(8676002)(8936002)(186003)(99286004)(6486002)(68736007)(1076003)(256004)(7416002)(305945005)(66446008)(25786009)(66066001)(6116002)(3846002)(52116002)(14454004)(316002)(54906003)(86362001)(66556008)(66476007)(110136005)(66946007)(5660300002)(64756008); DIR:OUT; SFP:1102; SCL:1; SRVR:MN2PR04MB5952; H:MN2PR04MB6061.namprd04.prod.outlook.com; FPR:; SPF:None; LANG:en; PTR:InfoNoRecords; MX:1; A:1; x-ms-exchange-senderadcheck: 1 x-microsoft-antispam-message-info: mly3vEpgZiIGKqoaG7jORsD1thoG4aH4tw4DYNGTdW47W+BzzCVW1n/QuXCoux6snuVaS9VGG5jixWXjlCRnAIRtK3P5K3H0qIEBmcJ89tx6cTSsYiEVEFemwwfZeaEiBMx3iwmAq+vqxssj70S+k6OY4+VBnXw0BGtFq10LQpPu5gkivcA2XHk6QFOBz3wL23rgb7J79O6GlbxKdl/+dOVjaAwnxSMsRvMZEHfFLhRcyly9fiLIpp3qmY4ginoIXmwUcVkvDz8Fn0DJfnMHYGTW3oad1mj7Rpqb+NXZsDGxl2vXyUSF7QErfsgMCpMLLzSeyXIQ4FijpCYef3OmAttoMiG+qoF0Ain13NRalDVqxH/UXsEwPhbanQz8eiBfy8PYmAgp/a1ZIsRHYRVB/D0qQBlkbJ+zRfwZXEo71Lk= MIME-Version: 1.0 X-OriginatorOrg: wdc.com X-MS-Exchange-CrossTenant-Network-Message-Id: 0c98c886-68b4-4145-697b-08d7141bcdf8 X-MS-Exchange-CrossTenant-originalarrivaltime: 29 Jul 2019 11:56:35.1264 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86 X-MS-Exchange-CrossTenant-mailboxtype: HOSTED X-MS-Exchange-CrossTenant-userprincipalname: Anup.Patel@wdc.com X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR04MB5952 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190729_045639_715165_ADCBD57A X-CRM114-Status: UNSURE ( 9.46 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Damien Le Moal , Anup Patel , "kvm@vger.kernel.org" , Anup Patel , Daniel Lezcano , "linux-kernel@vger.kernel.org" , Christoph Hellwig , Atish Patra , Alistair Francis , Thomas Gleixner , "linux-riscv@lists.infradead.org" Sender: "linux-riscv" Errors-To: linux-riscv-bounces+patchwork-linux-riscv=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP This patch extends asm/csr.h by adding RISC-V hypervisor extension related defines. 
Signed-off-by: Anup Patel --- arch/riscv/include/asm/csr.h | 58 ++++++++++++++++++++++++++++++++++++ 1 file changed, 58 insertions(+) diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h index a18923fa23c8..059c5cb22aaf 100644 --- a/arch/riscv/include/asm/csr.h +++ b/arch/riscv/include/asm/csr.h @@ -27,6 +27,8 @@ #define SR_XS_CLEAN _AC(0x00010000, UL) #define SR_XS_DIRTY _AC(0x00018000, UL) +#define SR_MXR _AC(0x00080000, UL) + #ifndef CONFIG_64BIT #define SR_SD _AC(0x80000000, UL) /* FS/XS dirty */ #else @@ -59,10 +61,13 @@ #define EXC_INST_MISALIGNED 0 #define EXC_INST_ACCESS 1 +#define EXC_INST_ILLEGAL 2 #define EXC_BREAKPOINT 3 #define EXC_LOAD_ACCESS 5 #define EXC_STORE_ACCESS 7 #define EXC_SYSCALL 8 +#define EXC_HYPERVISOR_SYSCALL 9 +#define EXC_SUPERVISOR_SYSCALL 10 #define EXC_INST_PAGE_FAULT 12 #define EXC_LOAD_PAGE_FAULT 13 #define EXC_STORE_PAGE_FAULT 15 @@ -72,6 +77,43 @@ #define SIE_STIE (_AC(0x1, UL) << IRQ_S_TIMER) #define SIE_SEIE (_AC(0x1, UL) << IRQ_S_EXT) +/* HSTATUS flags */ +#define HSTATUS_VTSR _AC(0x00400000, UL) +#define HSTATUS_VTVM _AC(0x00100000, UL) +#define HSTATUS_SP2V _AC(0x00000200, UL) +#define HSTATUS_SP2P _AC(0x00000100, UL) +#define HSTATUS_SPV _AC(0x00000080, UL) +#define HSTATUS_STL _AC(0x00000040, UL) +#define HSTATUS_SPRV _AC(0x00000001, UL) + +/* HGATP flags */ +#define HGATP_MODE_OFF _AC(0, UL) +#define HGATP_MODE_SV32X4 _AC(1, UL) +#define HGATP_MODE_SV39X4 _AC(8, UL) +#define HGATP_MODE_SV48X4 _AC(9, UL) + +#define HGATP32_MODE_SHIFT 31 +#define HGATP32_VMID_SHIFT 22 +#define HGATP32_VMID_MASK _AC(0x1FC00000, UL) +#define HGATP32_PPN _AC(0x003FFFFF, UL) + +#define HGATP64_MODE_SHIFT 60 +#define HGATP64_VMID_SHIFT 44 +#define HGATP64_VMID_MASK _AC(0x03FFF00000000000, UL) +#define HGATP64_PPN _AC(0x00000FFFFFFFFFFF, UL) + +#ifdef CONFIG_64BIT +#define HGATP_PPN HGATP64_PPN +#define HGATP_VMID_SHIFT HGATP64_VMID_SHIFT +#define HGATP_VMID_MASK HGATP64_VMID_MASK +#define HGATP_MODE (HGATP_MODE_SV39X4 << HGATP64_MODE_SHIFT) +#else +#define HGATP_PPN HGATP32_PPN +#define HGATP_VMID_SHIFT HGATP32_VMID_SHIFT +#define HGATP_VMID_MASK HGATP32_VMID_MASK +#define HGATP_MODE (HGATP_MODE_SV32X4 << HGATP32_MODE_SHIFT) +#endif + #define CSR_CYCLE 0xc00 #define CSR_TIME 0xc01 #define CSR_INSTRET 0xc02 @@ -85,6 +127,22 @@ #define CSR_STVAL 0x143 #define CSR_SIP 0x144 #define CSR_SATP 0x180 + +#define CSR_VSSTATUS 0x200 +#define CSR_VSIE 0x204 +#define CSR_VSTVEC 0x205 +#define CSR_VSSCRATCH 0x240 +#define CSR_VSEPC 0x241 +#define CSR_VSCAUSE 0x242 +#define CSR_VSTVAL 0x243 +#define CSR_VSIP 0x244 +#define CSR_VSATP 0x280 + +#define CSR_HSTATUS 0x600 +#define CSR_HEDELEG 0x602 +#define CSR_HIDELEG 0x603 +#define CSR_HGATP 0x680 + #define CSR_CYCLEH 0xc80 #define CSR_TIMEH 0xc81 #define CSR_INSTRETH 0xc82 From patchwork Mon Jul 29 11:56:41 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 11063645 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4E7FF14E5 for ; Mon, 29 Jul 2019 11:56:53 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3A69828780 for ; Mon, 29 Jul 2019 11:56:53 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2DF1828608; Mon, 29 Jul 2019 11:56:53 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 
3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 5D56A286F3 for ; Mon, 29 Jul 2019 11:56:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:In-Reply-To:References: Message-ID:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=FRwuswEYwJsQuIw1ZubYNYyAeXjHWnAmmuVccSkrwgQ=; b=SK2rZzfeww9Z/X VyYV3Y/3+adW5tAgdLEs4/7ddj4Yz11SVi0G5yc0/VSfcjVtyfsaowSkARHLqLT/8aCVCoH7C1jRH gdBY0zzQHZ2OK0+RtYlfEV4mAjODO0CRqp0I92yVCcI53pfK9HDKqsPhqFwdpvFetxNWZh65Uuaer c10j2jIoVBxFWmnaiDUJ+Wq1U2TVQEeWk6tpYMQCKVYBXGOB0taa7Yy174HuPsYAon0W3iuFHe5KS Ih6gK/sVFcMV50cFzuhgCKRwd26CdrLbpzBgvOUDID3VIe2CkcWJaixES7swBl+VpLBgTAbldbBnu tJsb2cexgrYFXsTG0G5g==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hs4Gm-0002RL-6s; Mon, 29 Jul 2019 11:56:48 +0000 Received: from esa1.hgst.iphmx.com ([68.232.141.245]) by bombadil.infradead.org with esmtps (Exim 4.92 #3 (Red Hat Linux)) id 1hs4Gi-0002Mg-3d for linux-riscv@lists.infradead.org; Mon, 29 Jul 2019 11:56:46 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1564401404; x=1595937404; h=from:to:cc:subject:date:message-id:references: in-reply-to:content-transfer-encoding:mime-version; bh=bFj+G3RZP/gNaGdgRsOAPpKOUgN46ZJXDT0vKlRG+hc=; b=DfO72FjFBYBchImh/EkeoPkCSNzujs/wcmFEAN1+fNfR75g5toLc9QlJ RZ8JZoE010WChJr4wX816eHsgith3ewRZCou8JQ210gcsXjbO0VmrUwCm URDqFonpDmGwZAtYNmj8ykHuuvCHma4eO8QdvBDJQks86sSdpAHZnd4Gy vKwToIwwM93yLQRFhJUIyGckYYReN7TVdfYdX038lhKCgUFn1j71M027m rR0ry+mJfs+tEDNcjt0jzRnEPwn3vwYJ2qL0unuizF8MdNjuDdhRw6EgK 2MRqNbmFCMsIKti2oUUiq5wUDoAdlK89n4RTg5lKt4q3jOIBlUbtbwjaX Q==; IronPort-SDR: ZHbm8aUkstxmzKdzBcVPXwDK5dicA9aOVjzuleWEkbPRPCmiFrcL0gYFoOpfbvLFGtvF80CX03 Qi6XfKQ+AOKVWYioRwql+dzvRxcvfFf5GRyYj0oUbNa3065R8nyJQdejgE/2uWQ/naDU7FcE/f 3QP7UIM+jv5caFP0SwM1/nCK5dxRFib27pEcg2M8o/Lnyyrr7SM4C66TLt4JLSWj67UMAL9DLB aO6JEzuGCmIsNTv+EgTLwBxjw/AP6Fw2+H0KROb0u1qm5lBMMlAyKECyYzv5TNbqHiew5emo1P hNY= X-IronPort-AV: E=Sophos;i="5.64,322,1559491200"; d="scan'208";a="220843319" Received: from mail-co1nam05lp2052.outbound.protection.outlook.com (HELO NAM05-CO1-obe.outbound.protection.outlook.com) ([104.47.48.52]) by ob1.hgst.iphmx.com with ESMTP; 29 Jul 2019 19:56:42 +0800 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=msELc0OFuwWb0TQojHZeNYYFWCoXxJB52bjCzdbMR+5Ta+avdeYcJo4+4r/Mw2TqGJl0OIYTdtnpYjPG/046Zc1HdftBAcwE1EjpCE7+ldN/bjY+8XCMGdcjcRypOrbLFMnhgXP9OZMFs0G9LWwP3qgpr2qWVHWP3Qxe6ysdJsg8GQ8jIxYuqRKf1Jx5zq1i7jo9saHAYCPaAkgO4aWYUYrE8vSh2wyZRKqf4kwTEZfYsoxVOqdKGu6uG1lgord0a/mn7zgJWrZObp9WPAVeCHWm0mtI05GCVBeLveE1IwM5w0jAz4oQGVrJYw7iJCc+r+jpvgQnLGRrvxADWsu1oQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; 
h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=1sDJ49x3AP1mbspMkuAuPZALARjbZIuVRrU6PFWNBB4=; b=WpDVC9slXH+7mWWWfomiMO5KC6whcJurXx7KZUgWzq/lHXqQyVJbvRx/8BEAat/WhG5XE2oG61uhHavPgVOfYnZI93h1jzPQCUfHq+QcFolrZuopHQiys+N2+G2zRJcDifFvgZvtm6KfMISTGWDUOrxUbN6oSUmFX9hw/U9SqG8zXdlN2YnbsN0OU7+BKdHZvczYjjLRLOH6Bbv+vvqRFtF1a6p7n2/4APdqKAOPmkGuylWNGiqWfHhva10lJmXa4acbZyAvURfFoErMMqdK1XSe9SeeFkvsbVYbz0UWchrJkFyLeWrMycuuy3f6LCgJkUaoQBDMxk3rR1Ky595Ybg== ARC-Authentication-Results: i=1; mx.microsoft.com 1;spf=pass smtp.mailfrom=wdc.com;dmarc=pass action=none header.from=wdc.com;dkim=pass header.d=wdc.com;arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=1sDJ49x3AP1mbspMkuAuPZALARjbZIuVRrU6PFWNBB4=; b=FYv1hOUwlPqmK2K5DxcJNQn4CMaAXy+luoTSgtO6VsrtDCqDaZaWfHRsJLehLOFPaIQBVaHcSIBCftO/FigeEu64QyIIDW3p4ZK2tAJQetPzBN7ZzcxOnUdGQlOxHEtYXmmyU5TmD/muXpCB7UuSYd3KVMXNOZ/DWLv6VS6sYT8= Received: from MN2PR04MB6061.namprd04.prod.outlook.com (20.178.246.15) by MN2PR04MB5952.namprd04.prod.outlook.com (20.179.21.143) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2115.15; Mon, 29 Jul 2019 11:56:41 +0000 Received: from MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8]) by MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8%7]) with mapi id 15.20.2115.005; Mon, 29 Jul 2019 11:56:41 +0000 From: Anup Patel To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K Subject: [RFC PATCH 03/16] RISC-V: Add initial skeletal KVM support Thread-Topic: [RFC PATCH 03/16] RISC-V: Add initial skeletal KVM support Thread-Index: AQHVRgSvgK4gozCD8kys3kWrOnzfVw== Date: Mon, 29 Jul 2019 11:56:41 +0000 Message-ID: <20190729115544.17895-4-anup.patel@wdc.com> References: <20190729115544.17895-1-anup.patel@wdc.com> In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-clientproxiedby: PN1PR01CA0116.INDPRD01.PROD.OUTLOOK.COM (2603:1096:c00::32) To MN2PR04MB6061.namprd04.prod.outlook.com (2603:10b6:208:d8::15) authentication-results: spf=none (sender IP is ) smtp.mailfrom=Anup.Patel@wdc.com; x-ms-exchange-messagesentrepresentingtype: 1 x-mailer: git-send-email 2.17.1 x-originating-ip: [106.51.23.101] x-ms-publictraffictype: Email x-ms-office365-filtering-correlation-id: 449e5e43-8872-4fc6-9d25-08d7141bd1a0 x-ms-office365-filtering-ht: Tenant x-microsoft-antispam: BCL:0; PCL:0; RULEID:(2390118)(7020095)(4652040)(8989299)(4534185)(7168020)(4627221)(201703031133081)(201702281549075)(8990200)(5600148)(711020)(4605104)(1401327)(4618075)(2017052603328)(7193020); SRVR:MN2PR04MB5952; x-ms-traffictypediagnostic: MN2PR04MB5952: x-microsoft-antispam-prvs: wdcipoutbound: EOP-TRUE x-ms-oob-tlc-oobclassifiers: OLM:164; x-forefront-prvs: 01136D2D90 x-forefront-antispam-report: SFV:NSPM; 
SFS:(10019020)(4636009)(39860400002)(366004)(376002)(396003)(136003)(346002)(189003)(199004)(478600001)(2906002)(446003)(53946003)(6436002)(486006)(6512007)(53936002)(36756003)(11346002)(2616005)(44832011)(78486014)(386003)(6506007)(102836004)(55236004)(4326008)(71200400001)(476003)(76176011)(71190400001)(9456002)(7736002)(26005)(50226002)(81166006)(81156014)(8676002)(8936002)(186003)(99286004)(6486002)(68736007)(14444005)(1076003)(256004)(7416002)(305945005)(66446008)(25786009)(66066001)(6116002)(3846002)(52116002)(14454004)(316002)(54906003)(86362001)(30864003)(66556008)(66476007)(110136005)(66946007)(5660300002)(64756008)(579004); DIR:OUT; SFP:1102; SCL:1; SRVR:MN2PR04MB5952; H:MN2PR04MB6061.namprd04.prod.outlook.com; FPR:; SPF:None; LANG:en; PTR:InfoNoRecords; MX:1; A:1; x-ms-exchange-senderadcheck: 1 x-microsoft-antispam-message-info: rSx70q6ccoHxi0ozwhAUO0KS1pzdkzbRLd1a30JYSJeoYJELzN2cxSyCFODbGVR1dwF343pH0+foDrbDEy2Hn2Eht7j3VXxQ7aq/zQotNT0OMlQC8SkUGOu8yGxomhD9o6DuDWv6RPPdNC97BJtP0uA81dTYG2Nkn+n80ghFpIDv1tv8nc9dxLGu/AK75XFHtY0zdVeLYBsCvvI6qcLBefZ76Zp6Vo0479UkfhNXBNFLdm2D9Dy+ESH9YdwQ2/OSzyN9601CilDPMD2wBNYK9499aZtOe/GxxcL2UMbH+yVKLpEyV9Wp4H8Tx7w+YhHYlSUNs631NG+Pxtg0s1BIl2zTl19q6UHt7U90xLHev7nNlb8h61N8g5jkJNhq3kf2XTzmUtZZBFJZlVQ0eGcaaPnvXuPMOQjsoJHzKipZ6Aw= MIME-Version: 1.0 X-OriginatorOrg: wdc.com X-MS-Exchange-CrossTenant-Network-Message-Id: 449e5e43-8872-4fc6-9d25-08d7141bd1a0 X-MS-Exchange-CrossTenant-originalarrivaltime: 29 Jul 2019 11:56:41.3059 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86 X-MS-Exchange-CrossTenant-mailboxtype: HOSTED X-MS-Exchange-CrossTenant-userprincipalname: Anup.Patel@wdc.com X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR04MB5952 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190729_045644_434545_FE5C832E X-CRM114-Status: GOOD ( 18.27 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Damien Le Moal , Anup Patel , "kvm@vger.kernel.org" , Anup Patel , Daniel Lezcano , "linux-kernel@vger.kernel.org" , Christoph Hellwig , Atish Patra , Alistair Francis , Thomas Gleixner , "linux-riscv@lists.infradead.org" Sender: "linux-riscv" Errors-To: linux-riscv-bounces+patchwork-linux-riscv=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP This patch adds initial skeletal KVM RISC-V support which has: 1. A simple implementation of arch specific VM functions except kvm_vm_ioctl_get_dirty_log() which will implemeted in-future as part of stage2 page loging. 2. Stubs of required arch specific VCPU functions except kvm_arch_vcpu_ioctl_run() which is semi-complete and extended by subsequent patches. 3. Stubs for required arch specific stage2 MMU functions. 
Signed-off-by: Anup Patel --- arch/riscv/Kconfig | 2 + arch/riscv/Makefile | 2 + arch/riscv/include/asm/kvm_host.h | 82 ++++++++ arch/riscv/include/uapi/asm/kvm.h | 47 +++++ arch/riscv/kvm/Kconfig | 33 ++++ arch/riscv/kvm/Makefile | 13 ++ arch/riscv/kvm/main.c | 60 ++++++ arch/riscv/kvm/mmu.c | 83 ++++++++ arch/riscv/kvm/vcpu.c | 305 ++++++++++++++++++++++++++++++ arch/riscv/kvm/vcpu_exit.c | 35 ++++ arch/riscv/kvm/vm.c | 101 ++++++++++ 11 files changed, 763 insertions(+) create mode 100644 arch/riscv/include/asm/kvm_host.h create mode 100644 arch/riscv/include/uapi/asm/kvm.h create mode 100644 arch/riscv/kvm/Kconfig create mode 100644 arch/riscv/kvm/Makefile create mode 100644 arch/riscv/kvm/main.c create mode 100644 arch/riscv/kvm/mmu.c create mode 100644 arch/riscv/kvm/vcpu.c create mode 100644 arch/riscv/kvm/vcpu_exit.c create mode 100644 arch/riscv/kvm/vm.c diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 59a4727ecd6c..906104b8dc74 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -289,3 +289,5 @@ menu "Power management options" source "kernel/power/Kconfig" endmenu + +source "arch/riscv/kvm/Kconfig" diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile index 7a117be8297c..9f4f418978b1 100644 --- a/arch/riscv/Makefile +++ b/arch/riscv/Makefile @@ -74,6 +74,8 @@ head-y := arch/riscv/kernel/head.o core-y += arch/riscv/kernel/ arch/riscv/mm/ arch/riscv/net/ +core-$(CONFIG_KVM) += arch/riscv/kvm/ + libs-y += arch/riscv/lib/ PHONY += vdso_install diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h new file mode 100644 index 000000000000..81acfb307d5c --- /dev/null +++ b/arch/riscv/include/asm/kvm_host.h @@ -0,0 +1,82 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. 
+ * + * Authors: + * Anup Patel + */ + +#ifndef __RISCV_KVM_HOST_H__ +#define __RISCV_KVM_HOST_H__ + +#include +#include +#include + +#ifdef CONFIG_64BIT +#define KVM_MAX_VCPUS (1U << 16) +#else +#define KVM_MAX_VCPUS (1U << 9) +#endif + +#define KVM_USER_MEM_SLOTS 512 +#define KVM_HALT_POLL_NS_DEFAULT 500000 + +#define KVM_VCPU_MAX_FEATURES 0 + +#define KVM_REQ_SLEEP \ + KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP) +#define KVM_REQ_IRQ_PENDING KVM_ARCH_REQ(1) +#define KVM_REQ_VCPU_RESET KVM_ARCH_REQ(2) + +struct kvm_vm_stat { + ulong remote_tlb_flush; +}; + +struct kvm_vcpu_stat { + u64 halt_successful_poll; + u64 halt_attempted_poll; + u64 halt_poll_invalid; + u64 halt_wakeup; + u64 ecall_exit_stat; + u64 wfi_exit_stat; + u64 mmio_exit_user; + u64 mmio_exit_kernel; + u64 exits; +}; + +struct kvm_arch_memory_slot { +}; + +struct kvm_arch { + /* stage2 page table */ + pgd_t *pgd; + phys_addr_t pgd_phys; +}; + +struct kvm_vcpu_arch { + /* Don't run the VCPU (blocked) */ + bool pause; +}; + +static inline void kvm_arch_hardware_unsetup(void) {} +static inline void kvm_arch_sync_events(struct kvm *kvm) {} +static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {} +static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} +static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {} + +void kvm_riscv_stage2_flush_cache(struct kvm_vcpu *vcpu); +int kvm_riscv_stage2_alloc_pgd(struct kvm *kvm); +void kvm_riscv_stage2_free_pgd(struct kvm *kvm); +void kvm_riscv_stage2_update_pgtbl(struct kvm_vcpu *vcpu); + +int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run); +int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long scause, unsigned long stval); + +static inline void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch) {} + +void kvm_riscv_halt_guest(struct kvm *kvm); +void kvm_riscv_resume_guest(struct kvm *kvm); + +#endif /* __RISCV_KVM_HOST_H__ */ diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h new file mode 100644 index 000000000000..d15875818b6e --- /dev/null +++ b/arch/riscv/include/uapi/asm/kvm.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * + * Authors: + * Anup Patel + */ + +#ifndef __LINUX_KVM_RISCV_H +#define __LINUX_KVM_RISCV_H + +#ifndef __ASSEMBLY__ + +#include +#include + +#define __KVM_HAVE_READONLY_MEM + +#define KVM_COALESCED_MMIO_PAGE_OFFSET 1 + +/* for KVM_GET_REGS and KVM_SET_REGS */ +struct kvm_regs { +}; + +/* for KVM_GET_FPU and KVM_SET_FPU */ +struct kvm_fpu { +}; + +/* KVM Debug exit structure */ +struct kvm_debug_exit_arch { +}; + +/* for KVM_SET_GUEST_DEBUG */ +struct kvm_guest_debug_arch { +}; + +/* definition of registers in kvm_run */ +struct kvm_sync_regs { +}; + +/* dummy definition */ +struct kvm_sregs { +}; + +#endif + +#endif /* __LINUX_KVM_RISCV_H */ diff --git a/arch/riscv/kvm/Kconfig b/arch/riscv/kvm/Kconfig new file mode 100644 index 000000000000..35fd30d0e432 --- /dev/null +++ b/arch/riscv/kvm/Kconfig @@ -0,0 +1,33 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# KVM configuration +# + +source "virt/kvm/Kconfig" + +menuconfig VIRTUALIZATION + bool "Virtualization" + help + Say Y here to get to see options for using your Linux host to run + other operating systems inside virtual machines (guests). + This option alone does not add any kernel code. + + If you say N, all options in this submenu will be skipped and + disabled. 
+ +if VIRTUALIZATION + +config KVM + tristate "Kernel-based Virtual Machine (KVM) support" + depends on OF + select PREEMPT_NOTIFIERS + select ANON_INODES + select KVM_MMIO + select HAVE_KVM_VCPU_ASYNC_IOCTL + select SRCU + help + Support hosting virtualized guest machines. + + If unsure, say N. + +endif # VIRTUALIZATION diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile new file mode 100644 index 000000000000..37b5a59d4f4f --- /dev/null +++ b/arch/riscv/kvm/Makefile @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: GPL-2.0 +# Makefile for RISC-V KVM support +# + +common-objs-y = $(addprefix ../../../virt/kvm/, kvm_main.o coalesced_mmio.o) + +ccflags-y := -Ivirt/kvm -Iarch/riscv/kvm + +kvm-objs := $(common-objs-y) + +kvm-objs += main.o vm.o mmu.o vcpu.o vcpu_exit.o + +obj-$(CONFIG_KVM) += kvm.o diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c new file mode 100644 index 000000000000..8cac0571a264 --- /dev/null +++ b/arch/riscv/kvm/main.c @@ -0,0 +1,60 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include +#include + +long kvm_arch_dev_ioctl(struct file *filp, + unsigned int ioctl, unsigned long arg) +{ + return -EINVAL; +} + +int kvm_arch_check_processor_compat(void) +{ + return 0; +} + +int kvm_arch_hardware_setup(void) +{ + return 0; +} + +int kvm_arch_hardware_enable(void) +{ + return 0; +} + +void kvm_arch_hardware_disable(void) +{ +} + +int kvm_arch_init(void *opaque) +{ + if (!riscv_isa_extension_available(H)) { + kvm_info("hypervisor extension not available\n"); + return -ENODEV; + } + + kvm_info("hypervisor extension available\n"); + + return 0; +} + +void kvm_arch_exit(void) +{ +} + +static int riscv_kvm_init(void) +{ + return kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE); +} +module_init(riscv_kvm_init); diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c new file mode 100644 index 000000000000..cead012a8399 --- /dev/null +++ b/arch/riscv/kvm/mmu.c @@ -0,0 +1,83 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. 
+ * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free, + struct kvm_memory_slot *dont) +{ +} + +int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot, + unsigned long npages) +{ + return 0; +} + +void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) +{ +} + +void kvm_arch_flush_shadow_all(struct kvm *kvm) +{ + /* TODO: */ +} + +void kvm_arch_flush_shadow_memslot(struct kvm *kvm, + struct kvm_memory_slot *slot) +{ +} + +void kvm_arch_commit_memory_region(struct kvm *kvm, + const struct kvm_userspace_memory_region *mem, + const struct kvm_memory_slot *old, + const struct kvm_memory_slot *new, + enum kvm_mr_change change) +{ + /* TODO: */ +} + +int kvm_arch_prepare_memory_region(struct kvm *kvm, + struct kvm_memory_slot *memslot, + const struct kvm_userspace_memory_region *mem, + enum kvm_mr_change change) +{ + /* TODO: */ + return 0; +} + +void kvm_riscv_stage2_flush_cache(struct kvm_vcpu *vcpu) +{ + /* TODO: */ +} + +int kvm_riscv_stage2_alloc_pgd(struct kvm *kvm) +{ + /* TODO: */ + return 0; +} + +void kvm_riscv_stage2_free_pgd(struct kvm *kvm) +{ + /* TODO: */ +} + +void kvm_riscv_stage2_update_pgtbl(struct kvm_vcpu *vcpu) +{ + /* TODO: */ +} diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c new file mode 100644 index 000000000000..9fea9128d964 --- /dev/null +++ b/arch/riscv/kvm/vcpu.c @@ -0,0 +1,305 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define VCPU_STAT(x) { #x, offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU } + +struct kvm_stats_debugfs_item debugfs_entries[] = { + VCPU_STAT(ecall_exit_stat), + VCPU_STAT(wfi_exit_stat), + VCPU_STAT(mmio_exit_user), + VCPU_STAT(mmio_exit_kernel), + VCPU_STAT(exits), + { NULL } +}; + +struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id) +{ + /* TODO: */ + return NULL; +} + +int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu) +{ + return 0; +} + +void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu) +{ +} + +int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu) +{ + /* TODO: */ + return 0; +} + +void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) +{ + /* TODO: */ +} + +int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu) +{ + /* TODO: */ + return 0; +} + +void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) +{ +} + +void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) +{ +} + +int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) +{ + /* TODO: */ + return 0; +} + +int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu) +{ + /* TODO: */ + return 0; +} + +bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu) +{ + /* TODO: */ + return false; +} + +bool kvm_arch_has_vcpu_debugfs(void) +{ + return false; +} + +int kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu) +{ + return 0; +} + +vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf) +{ + return VM_FAULT_SIGBUS; +} + +long kvm_arch_vcpu_async_ioctl(struct file *filp, + unsigned int ioctl, unsigned long arg) +{ + /* TODO; */ + return -ENOIOCTLCMD; +} + +long kvm_arch_vcpu_ioctl(struct file *filp, + unsigned int ioctl, unsigned long arg) +{ + /* TODO: */ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu, + struct 
kvm_sregs *sregs) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, + struct kvm_sregs *sregs) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu, + struct kvm_translation *tr) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu, + struct kvm_mp_state *mp_state) +{ + /* TODO: */ + return 0; +} + +int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu, + struct kvm_mp_state *mp_state) +{ + /* TODO: */ + return 0; +} + +int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu, + struct kvm_guest_debug *dbg) +{ + /* TODO; To be implemented later. */ + return -EINVAL; +} + +void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) +{ + /* TODO: */ + + kvm_riscv_stage2_update_pgtbl(vcpu); +} + +void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) +{ + /* TODO: */ +} + +static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu) +{ + if (kvm_request_pending(vcpu)) { + /* TODO: */ + + /* + * Clear IRQ_PENDING requests that were made to guarantee + * that a VCPU sees new virtual interrupts. + */ + kvm_check_request(KVM_REQ_IRQ_PENDING, vcpu); + } +} + +int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) +{ + int ret; + unsigned long scause, stval; + + /* Process MMIO value returned from user-space */ + if (run->exit_reason == KVM_EXIT_MMIO) { + ret = kvm_riscv_vcpu_mmio_return(vcpu, vcpu->run); + if (ret) + return ret; + } + + if (run->immediate_exit) + return -EINTR; + + vcpu_load(vcpu); + + kvm_sigset_activate(vcpu); + + ret = 1; + run->exit_reason = KVM_EXIT_UNKNOWN; + while (ret > 0) { + /* Check conditions before entering the guest */ + cond_resched(); + + kvm_riscv_check_vcpu_requests(vcpu); + + preempt_disable(); + + local_irq_disable(); + + /* + * Exit if we have a signal pending so that we can deliver + * the signal to user space. + */ + if (signal_pending(current)) { + ret = -EINTR; + run->exit_reason = KVM_EXIT_INTR; + } + + /* + * Ensure we set mode to IN_GUEST_MODE after we disable + * interrupts and before the final VCPU requests check. + * See the comment in kvm_vcpu_exiting_guest_mode() and + * Documentation/virtual/kvm/vcpu-requests.rst + */ + smp_store_mb(vcpu->mode, IN_GUEST_MODE); + + if (ret <= 0 || + kvm_request_pending(vcpu)) { + vcpu->mode = OUTSIDE_GUEST_MODE; + local_irq_enable(); + preempt_enable(); + continue; + } + + guest_enter_irqoff(); + + __kvm_riscv_switch_to(&vcpu->arch); + + vcpu->mode = OUTSIDE_GUEST_MODE; + vcpu->stat.exits++; + + /* Save SCAUSE and STVAL because we might get an interrupt + * between __kvm_riscv_switch_to() and local_irq_enable() + * which can potentially overwrite SCAUSE and STVAL. + */ + scause = csr_read(CSR_SCAUSE); + stval = csr_read(CSR_STVAL); + + /* + * We may have taken a host interrupt in VS/VU-mode (i.e. + * while executing the guest). This interrupt is still + * pending, as we haven't serviced it yet! + * + * We're now back in HS-mode with interrupts disabled + * so enabling the interrupts now will have the effect + * of taking the interrupt again, in HS-mode this time. 
+ */ + local_irq_enable(); + + /* + * We do local_irq_enable() before calling guest_exit() so + * that if a timer interrupt hits while running the guest + * we account that tick as being spent in the guest. We + * enable preemption after calling guest_exit() so that if + * we get preempted we make sure ticks after that is not + * counted as guest time. + */ + guest_exit(); + + preempt_enable(); + + ret = kvm_riscv_vcpu_exit(vcpu, run, scause, stval); + } + + kvm_sigset_deactivate(vcpu); + + vcpu_put(vcpu); + return ret; +} diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c new file mode 100644 index 000000000000..e4d7c8f0807a --- /dev/null +++ b/arch/riscv/kvm/vcpu_exit.c @@ -0,0 +1,35 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * + * Authors: + * Anup Patel + */ + +#include +#include +#include + +/** + * kvm_riscv_vcpu_mmio_return -- Handle MMIO loads after user space emulation + * or in-kernel IO emulation + * + * @vcpu: The VCPU pointer + * @run: The VCPU run struct containing the mmio data + */ +int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run) +{ + /* TODO: */ + return 0; +} + +/* + * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on + * proper exit to userspace. + */ +int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long scause, unsigned long stval) +{ + /* TODO: */ + return 0; +} diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c new file mode 100644 index 000000000000..66904def2f93 --- /dev/null +++ b/arch/riscv/kvm/vm.c @@ -0,0 +1,101 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include +#include + +int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log) +{ + /* TODO: To be added later. 
*/ + return -ENOTSUPP; +} + +int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) +{ + int r; + + r = kvm_riscv_stage2_alloc_pgd(kvm); + if (r) + return r; + + return 0; +} + +void kvm_arch_destroy_vm(struct kvm *kvm) +{ + int i; + + for (i = 0; i < KVM_MAX_VCPUS; ++i) { + if (kvm->vcpus[i]) { + kvm_arch_vcpu_destroy(kvm->vcpus[i]); + kvm->vcpus[i] = NULL; + } + } +} + +int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) +{ + int r; + + switch (ext) { + case KVM_CAP_DEVICE_CTRL: + case KVM_CAP_USER_MEMORY: + case KVM_CAP_SYNC_MMU: + case KVM_CAP_DESTROY_MEMORY_REGION_WORKS: + case KVM_CAP_ONE_REG: + case KVM_CAP_READONLY_MEM: + case KVM_CAP_MP_STATE: + case KVM_CAP_IMMEDIATE_EXIT: + r = 1; + break; + case KVM_CAP_NR_VCPUS: + r = num_online_cpus(); + break; + case KVM_CAP_MAX_VCPUS: + r = KVM_MAX_VCPUS; + break; + case KVM_CAP_NR_MEMSLOTS: + r = KVM_USER_MEM_SLOTS; + break; + default: + r = 0; + break; + } + + return r; +} + +long kvm_arch_vm_ioctl(struct file *filp, + unsigned int ioctl, unsigned long arg) +{ + return -EINVAL; +} + +void kvm_riscv_halt_guest(struct kvm *kvm) +{ + int i; + struct kvm_vcpu *vcpu; + + kvm_for_each_vcpu(i, vcpu, kvm) + vcpu->arch.pause = true; + kvm_make_all_cpus_request(kvm, KVM_REQ_SLEEP); +} + +void kvm_riscv_resume_guest(struct kvm *kvm) +{ + int i; + struct kvm_vcpu *vcpu; + + kvm_for_each_vcpu(i, vcpu, kvm) { + vcpu->arch.pause = false; + swake_up_one(kvm_arch_vcpu_wq(vcpu)); + } +} From patchwork Mon Jul 29 11:56:46 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 11063649 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DFCE413A4 for ; Mon, 29 Jul 2019 11:56:56 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id CE147212DA for ; Mon, 29 Jul 2019 11:56:56 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C1C8E28735; Mon, 29 Jul 2019 11:56:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 38A58212DA for ; Mon, 29 Jul 2019 11:56:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:In-Reply-To:References: Message-ID:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=dromvKFCrOq+zVgGpSexhuxHV3aaadPRG1905Vj5qr4=; b=sNR0sBkZ+Ppnjc nXcF+r/C+nMiVDOmKHsUfcYHjX6l2w0UVntVTgMhc2586Q0eqp9Dv11GbkcP0y7dIkop+F/enA+Eo hUAvGJgTuKi+yEbuoworuq7+/6hV5K0vHkvLDxjcWGYvuYUv/BYuVQwlFujVz/K+yGpxsWyXlGZM2 fzfSUGjxxXpxD6EOuTEggW75wnzWSSf3dbDSa1Rz9zkj2zWHSZ7nGM8NBAo9ajdJaRKH14P4J+LQr bYf6+gw95YJzda66ot/wyCOHfwbCYDGBc10JaQS58V8mGbgh+HpZpNroSl9nnRbWzkj9ImzSAxAWt DNvGxyW+8gLxJ7z7E6+w==; Received: 
From: Anup Patel
To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K
Subject: [RFC PATCH 04/16] RISC-V: KVM: Implement VCPU create, init and destroy functions
Date: Mon, 29 Jul 2019 11:56:46 +0000
Message-ID: <20190729115544.17895-5-anup.patel@wdc.com>
References: <20190729115544.17895-1-anup.patel@wdc.com>
In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com>
X-Mailer: git-send-email 2.17.1
sfid-20190729_045650_141463_5E8C1C86 X-CRM114-Status: GOOD ( 14.75 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Damien Le Moal , Anup Patel , "kvm@vger.kernel.org" , Anup Patel , Daniel Lezcano , "linux-kernel@vger.kernel.org" , Christoph Hellwig , Atish Patra , Alistair Francis , Thomas Gleixner , "linux-riscv@lists.infradead.org" Sender: "linux-riscv" Errors-To: linux-riscv-bounces+patchwork-linux-riscv=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP This patch implements VCPU create, init and destroy functions required by generic KVM module. We don't have much dynamic resources in struct kvm_vcpu_arch so thest functions are quite simple for KVM RISC-V. Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_host.h | 70 ++++++++++++++++++++++++++ arch/riscv/kvm/vcpu.c | 83 +++++++++++++++++++++++++++++-- 2 files changed, 149 insertions(+), 4 deletions(-) diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 81acfb307d5c..244eabe62710 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -54,7 +54,77 @@ struct kvm_arch { phys_addr_t pgd_phys; }; +struct kvm_cpu_context { + unsigned long zero; + unsigned long ra; + unsigned long sp; + unsigned long gp; + unsigned long tp; + unsigned long t0; + unsigned long t1; + unsigned long t2; + unsigned long s0; + unsigned long s1; + unsigned long a0; + unsigned long a1; + unsigned long a2; + unsigned long a3; + unsigned long a4; + unsigned long a5; + unsigned long a6; + unsigned long a7; + unsigned long s2; + unsigned long s3; + unsigned long s4; + unsigned long s5; + unsigned long s6; + unsigned long s7; + unsigned long s8; + unsigned long s9; + unsigned long s10; + unsigned long s11; + unsigned long t3; + unsigned long t4; + unsigned long t5; + unsigned long t6; + unsigned long sepc; + unsigned long sstatus; + unsigned long hstatus; +}; + +struct kvm_vcpu_csr { + unsigned long hedeleg; + unsigned long hideleg; + unsigned long vsstatus; + unsigned long vsie; + unsigned long vstvec; + unsigned long vsscratch; + unsigned long vsepc; + unsigned long vscause; + unsigned long vstval; + unsigned long vsip; + unsigned long vsatp; +}; + struct kvm_vcpu_arch { + /* VCPU ran atleast once */ + bool ran_atleast_once; + + /* ISA feature bits (similar to MISA) */ + unsigned long isa; + + /* CPU context of Guest VCPU */ + struct kvm_cpu_context guest_context; + + /* CPU CSR context of Guest VCPU */ + struct kvm_vcpu_csr guest_csr; + + /* CPU context upon Guest VCPU reset */ + struct kvm_cpu_context guest_reset_context; + + /* CPU CSR context upon Guest VCPU reset */ + struct kvm_vcpu_csr guest_reset_csr; + /* Don't run the VCPU (blocked) */ bool pause; }; diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 9fea9128d964..1ae806f28c0e 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -31,10 +31,48 @@ struct kvm_stats_debugfs_item debugfs_entries[] = { { NULL } }; +#define KVM_RISCV_ISA_ALLOWED (RISCV_ISA_EXT_A | \ + RISCV_ISA_EXT_C | \ + RISCV_ISA_EXT_D | \ + RISCV_ISA_EXT_F | \ + RISCV_ISA_EXT_I | \ + RISCV_ISA_EXT_M | \ + RISCV_ISA_EXT_S | \ + RISCV_ISA_EXT_U) + +static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) +{ + struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; + struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr; + struct kvm_cpu_context *cntx = &vcpu->arch.guest_context; + struct 
kvm_cpu_context *reset_cntx = &vcpu->arch.guest_reset_context; + + memcpy(csr, reset_csr, sizeof(*csr)); + + memcpy(cntx, reset_cntx, sizeof(*cntx)); +} + struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id) { - /* TODO: */ - return NULL; + int err; + struct kvm_vcpu *vcpu; + + vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL); + if (!vcpu) { + err = -ENOMEM; + goto out; + } + + err = kvm_vcpu_init(vcpu, kvm, id); + if (err) + goto free_vcpu; + + return vcpu; + +free_vcpu: + kmem_cache_free(kvm_vcpu_cache, vcpu); +out: + return ERR_PTR(err); } int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu) @@ -48,13 +86,47 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu) int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu) { - /* TODO: */ + struct kvm_cpu_context *cntx; + struct kvm_vcpu_csr *csr; + + /* Mark this VCPU never ran */ + vcpu->arch.ran_atleast_once = false; + + /* Setup ISA features available to VCPU */ + vcpu->arch.isa = riscv_isa & KVM_RISCV_ISA_ALLOWED; + + /* Setup reset state of shadow SSTATUS and HSTATUS CSRs */ + cntx = &vcpu->arch.guest_reset_context; + cntx->sstatus = SR_SPP | SR_SPIE; + cntx->hstatus = 0; + cntx->hstatus |= HSTATUS_SP2V; + cntx->hstatus |= HSTATUS_SP2P; + cntx->hstatus |= HSTATUS_SPV; + + /* Setup reset state of HEDELEG and HIDELEG CSRs */ + csr = &vcpu->arch.guest_reset_csr; + csr->hedeleg = 0; + csr->hedeleg |= (1UL << EXC_INST_MISALIGNED); + csr->hedeleg |= (1UL << EXC_BREAKPOINT); + csr->hedeleg |= (1UL << EXC_SYSCALL); + csr->hedeleg |= (1UL << EXC_INST_PAGE_FAULT); + csr->hedeleg |= (1UL << EXC_LOAD_PAGE_FAULT); + csr->hedeleg |= (1UL << EXC_STORE_PAGE_FAULT); + csr->hideleg = 0; + csr->hideleg |= SIE_SSIE; + csr->hideleg |= SIE_STIE; + csr->hideleg |= SIE_SEIE; + + /* Reset VCPU */ + kvm_riscv_reset_vcpu(vcpu); + return 0; } void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) { - /* TODO: */ + kvm_riscv_stage2_flush_cache(vcpu); + kmem_cache_free(kvm_vcpu_cache, vcpu); } int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu) @@ -207,6 +279,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) int ret; unsigned long scause, stval; + /* Mark this VCPU ran atleast once */ + vcpu->arch.ran_atleast_once = true; + /* Process MMIO value returned from user-space */ if (run->exit_reason == KVM_EXIT_MMIO) { ret = kvm_riscv_vcpu_mmio_return(vcpu, vcpu->run); From patchwork Mon Jul 29 11:56:53 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 11063653 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 65C6414E5 for ; Mon, 29 Jul 2019 11:57:04 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 53FB5286F3 for ; Mon, 29 Jul 2019 11:57:04 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 476EB28735; Mon, 29 Jul 2019 11:57:04 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) 
From: Anup Patel
To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K
Subject: [RFC PATCH 05/16] RISC-V: KVM: Implement VCPU interrupts and requests handling
Date: Mon, 29 Jul 2019 11:56:53 +0000
Message-ID: <20190729115544.17895-6-anup.patel@wdc.com>
References: <20190729115544.17895-1-anup.patel@wdc.com>
In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com>
X-Mailer: git-send-email 2.17.1

This patch implements VCPU interrupts and requests, which are both asynchronous events.

The VCPU interrupts can be set/unset using the KVM_INTERRUPT ioctl from user-space. In future, the in-kernel IRQCHIP emulation will use the kvm_riscv_vcpu_set_interrupt() and kvm_riscv_vcpu_unset_interrupt() functions to set/unset VCPU interrupts.

Important VCPU requests implemented by this patch are:
KVM_REQ_IRQ_PENDING - set whenever some VCPU interrupt is pending
KVM_REQ_SLEEP       - set whenever the VCPU itself goes to sleep state
KVM_REQ_VCPU_RESET  - set whenever a VCPU reset is requested

The WFI trap-and-emulate support (added later) will use the KVM_REQ_SLEEP request and the kvm_riscv_vcpu_has_interrupt() function. The KVM_REQ_VCPU_RESET request will be used by the SBI emulation (added later) to power up a VCPU that is in the power-off state. User-space can use the KVM_GET_MP_STATE/KVM_SET_MP_STATE ioctls to get/set the power state of a VCPU, as sketched below.
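For illustration only (this sketch is not part of the patch): a minimal user-space example of how a VMM could exercise these interfaces, assuming a VCPU file descriptor vcpu_fd already obtained via KVM_CREATE_VCPU. It asserts the external interrupt via KVM_INTERRUPT and then stops the VCPU via KVM_SET_MP_STATE.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Assumes vcpu_fd was obtained earlier via KVM_CREATE_VCPU. */
static int example_interrupt_and_stop(int vcpu_fd)
{
	struct kvm_interrupt irq = { .irq = KVM_INTERRUPT_SET };
	struct kvm_mp_state mp;

	/* Assert the supervisor external interrupt for this VCPU. */
	if (ioctl(vcpu_fd, KVM_INTERRUPT, &irq) < 0)
		return -1;

	/* Read back the current power state... */
	if (ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp) < 0)
		return -1;

	/* ...and put the VCPU into the stopped (powered-off) state. */
	mp.mp_state = KVM_MP_STATE_STOPPED;
	return ioctl(vcpu_fd, KVM_SET_MP_STATE, &mp);
}

Passing KVM_INTERRUPT_UNSET in irq.irq instead would clear the pending external interrupt, matching the KVM_INTERRUPT handling introduced below.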
Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_host.h | 13 +++ arch/riscv/include/uapi/asm/kvm.h | 3 + arch/riscv/kvm/vcpu.c | 174 +++++++++++++++++++++++++++--- 3 files changed, 177 insertions(+), 13 deletions(-) diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 244eabe62710..aa89f1922da1 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -125,6 +125,13 @@ struct kvm_vcpu_arch { /* CPU CSR context upon Guest VCPU reset */ struct kvm_vcpu_csr guest_reset_csr; + /* VCPU interrupts */ + raw_spinlock_t irqs_lock; + unsigned long irqs_pending; + + /* VCPU power-off state */ + bool power_off; + /* Don't run the VCPU (blocked) */ bool pause; }; @@ -146,6 +153,12 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, static inline void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch) {} +int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq); +int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq); +bool kvm_riscv_vcpu_has_interrupt(struct kvm_vcpu *vcpu); +void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu); +void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu); + void kvm_riscv_halt_guest(struct kvm *kvm); void kvm_riscv_resume_guest(struct kvm *kvm); diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h index d15875818b6e..6dbc056d58ba 100644 --- a/arch/riscv/include/uapi/asm/kvm.h +++ b/arch/riscv/include/uapi/asm/kvm.h @@ -18,6 +18,9 @@ #define KVM_COALESCED_MMIO_PAGE_OFFSET 1 +#define KVM_INTERRUPT_SET -1U +#define KVM_INTERRUPT_UNSET -2U + /* for KVM_GET_REGS and KVM_SET_REGS */ struct kvm_regs { }; diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 1ae806f28c0e..c6f57caa95f0 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -42,6 +42,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = { static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) { + unsigned long f; struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr; struct kvm_cpu_context *cntx = &vcpu->arch.guest_context; @@ -50,6 +51,10 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) memcpy(csr, reset_csr, sizeof(*csr)); memcpy(cntx, reset_cntx, sizeof(*cntx)); + + raw_spin_lock_irqsave(&vcpu->arch.irqs_lock, f); + vcpu->arch.irqs_pending = 0; + raw_spin_unlock_irqrestore(&vcpu->arch.irqs_lock, f); } struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id) @@ -103,6 +108,9 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu) cntx->hstatus |= HSTATUS_SP2P; cntx->hstatus |= HSTATUS_SPV; + /* Setup VCPU irqs lock */ + raw_spin_lock_init(&vcpu->arch.irqs_lock); + /* Setup reset state of HEDELEG and HIDELEG CSRs */ csr = &vcpu->arch.guest_reset_csr; csr->hedeleg = 0; @@ -131,8 +139,15 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu) { - /* TODO: */ - return 0; + int ret; + unsigned long f, irqs; + + raw_spin_lock_irqsave(&vcpu->arch.irqs_lock, f); + irqs = vcpu->arch.irqs_pending & vcpu->arch.guest_csr.vsie; + ret = (irqs & (1UL << IRQ_S_TIMER)) ? 
1 : 0; + raw_spin_unlock_irqrestore(&vcpu->arch.irqs_lock, f); + + return ret; } void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) @@ -145,20 +160,18 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) { - /* TODO: */ - return 0; + return (kvm_riscv_vcpu_has_interrupt(vcpu) && + !vcpu->arch.power_off && !vcpu->arch.pause); } int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu) { - /* TODO: */ - return 0; + return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE; } bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu) { - /* TODO: */ - return false; + return (vcpu->arch.guest_context.sstatus & SR_SPP) ? true : false; } bool kvm_arch_has_vcpu_debugfs(void) @@ -179,7 +192,21 @@ vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf) long kvm_arch_vcpu_async_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { - /* TODO; */ + struct kvm_vcpu *vcpu = filp->private_data; + void __user *argp = (void __user *)arg; + + if (ioctl == KVM_INTERRUPT) { + struct kvm_interrupt irq; + + if (copy_from_user(&irq, argp, sizeof(irq))) + return -EFAULT; + + if (irq.irq == KVM_INTERRUPT_SET) + return kvm_riscv_vcpu_set_interrupt(vcpu, IRQ_S_EXT); + else + return kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_S_EXT); + } + return -ENOIOCTLCMD; } @@ -228,18 +255,113 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) return -EINVAL; } +static void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu) +{ + unsigned long f; + + raw_spin_lock_irqsave(&vcpu->arch.irqs_lock, f); + if (vcpu->arch.irqs_pending ^ vcpu->arch.guest_csr.vsip) { + csr_write(CSR_VSIP, vcpu->arch.irqs_pending); + vcpu->arch.guest_csr.vsip = vcpu->arch.irqs_pending; + } + raw_spin_unlock_irqrestore(&vcpu->arch.irqs_lock, f); +} + +static void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu) +{ + vcpu->arch.guest_csr.vsip = csr_read(CSR_VSIP); + vcpu->arch.guest_csr.vsie = csr_read(CSR_VSIE); +} + +int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq) +{ + unsigned long f; + + if (irq != IRQ_S_SOFT && + irq != IRQ_S_TIMER && + irq != IRQ_S_EXT) + return -EINVAL; + + raw_spin_lock_irqsave(&vcpu->arch.irqs_lock, f); + vcpu->arch.irqs_pending |= (1UL << irq); + raw_spin_unlock_irqrestore(&vcpu->arch.irqs_lock, f); + + kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu); + kvm_vcpu_kick(vcpu); + + return 0; +} + +int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq) +{ + unsigned long f; + + if (irq != IRQ_S_SOFT && + irq != IRQ_S_TIMER && + irq != IRQ_S_EXT) + return -EINVAL; + + raw_spin_lock_irqsave(&vcpu->arch.irqs_lock, f); + vcpu->arch.irqs_pending &= ~(1UL << irq); + raw_spin_unlock_irqrestore(&vcpu->arch.irqs_lock, f); + + return 0; +} + +bool kvm_riscv_vcpu_has_interrupt(struct kvm_vcpu *vcpu) +{ + bool ret = false; + unsigned long f; + + raw_spin_lock_irqsave(&vcpu->arch.irqs_lock, f); + if (vcpu->arch.irqs_pending & vcpu->arch.guest_csr.vsie) + ret = true; + raw_spin_unlock_irqrestore(&vcpu->arch.irqs_lock, f); + + return ret; +} + +void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu) +{ + vcpu->arch.power_off = true; + kvm_make_request(KVM_REQ_SLEEP, vcpu); + kvm_vcpu_kick(vcpu); +} + +void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu) +{ + vcpu->arch.power_off = false; + kvm_vcpu_wake_up(vcpu); +} + int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu, struct kvm_mp_state *mp_state) { - /* TODO: */ + if (vcpu->arch.power_off) + mp_state->mp_state = KVM_MP_STATE_STOPPED; + else + 
mp_state->mp_state = KVM_MP_STATE_RUNNABLE; + return 0; } int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu, struct kvm_mp_state *mp_state) { - /* TODO: */ - return 0; + int ret = 0; + + switch (mp_state->mp_state) { + case KVM_MP_STATE_RUNNABLE: + vcpu->arch.power_off = false; + break; + case KVM_MP_STATE_STOPPED: + kvm_riscv_vcpu_power_off(vcpu); + break; + default: + ret = -EINVAL; + } + + return ret; } int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu, @@ -263,8 +385,25 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu) { + struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu); + if (kvm_request_pending(vcpu)) { - /* TODO: */ + if (kvm_check_request(KVM_REQ_SLEEP, vcpu)) { + swait_event_interruptible_exclusive(*wq, + ((!vcpu->arch.power_off) && + (!vcpu->arch.pause))); + + if (vcpu->arch.power_off || vcpu->arch.pause) { + /* + * Awaken to handle a signal, request to + * sleep again later. + */ + kvm_make_request(KVM_REQ_SLEEP, vcpu); + } + } + + if (kvm_check_request(KVM_REQ_VCPU_RESET, vcpu)) + kvm_riscv_reset_vcpu(vcpu); /* * Clear IRQ_PENDING requests that were made to guarantee @@ -317,6 +456,12 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) run->exit_reason = KVM_EXIT_INTR; } + /* + * We might have got VCPU interrupts updated asynchronously + * so update it in HW. + */ + kvm_riscv_vcpu_flush_interrupts(vcpu); + /* * Ensure we set mode to IN_GUEST_MODE after we disable * interrupts and before the final VCPU requests check. @@ -347,6 +492,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) scause = csr_read(CSR_SCAUSE); stval = csr_read(CSR_STVAL); + /* Syncup interrupts state with HW */ + kvm_riscv_vcpu_sync_interrupts(vcpu); + /* * We may have taken a host interrupt in VS/VU-mode (i.e. * while executing the guest). 
This interrupt is still From patchwork Mon Jul 29 11:56:59 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 11063657 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 82BF213A4 for ; Mon, 29 Jul 2019 11:57:12 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 71BB8212DA for ; Mon, 29 Jul 2019 11:57:12 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 65E4C2871F; Mon, 29 Jul 2019 11:57:12 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id CA3C628735 for ; Mon, 29 Jul 2019 11:57:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:In-Reply-To:References: Message-ID:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=ycQusb4Yruxf3fuPH9uYoDhj7DUXHoFGa5IaEKHTd18=; b=EEgLBdUG8+Z808 s5FtJmKYVveYrXo50JxVcmJ2AU/iVsZ2+CtDE82wH84sc9o15UiEQruJ5+YvS+JEFiJdroMnLTRW+ wDdyWUUn56BIQn2bFqb0XaVwg9H2QV5DaVDKlDgxHqbka9xsEo4A9NJseDPY6CvMkcpK33SWKFzhU RW0uJFmsijMe6YXyLWDTGrIq2pDekEZHbLgLxUsxZqdSs4G9dLGxUqMwlN636GR+T00K3RXRPI+PQ 2vpb6mS2Dl2NEpQkzvFfwXquCWmSi2vlJFlMVkVtGazsr4DN7uymAENL5TdHs+2WbWR4tQG00DtTM AmwR1xllHVgNHgLVw4jQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hs4H3-0002eS-GB; Mon, 29 Jul 2019 11:57:05 +0000 Received: from esa4.hgst.iphmx.com ([216.71.154.42]) by bombadil.infradead.org with esmtps (Exim 4.92 #3 (Red Hat Linux)) id 1hs4H0-0002Yf-FE for linux-riscv@lists.infradead.org; Mon, 29 Jul 2019 11:57:04 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1564401422; x=1595937422; h=from:to:cc:subject:date:message-id:references: in-reply-to:content-transfer-encoding:mime-version; bh=SLztWMpN1BMZjIREkxOhdcN/GnZvookTRE3bBKMf5zQ=; b=GAa4S8RW7XyAdYNtTo1c+qwR3am194U7WXXZYOcew+GQpOVb3PUw68Kf dh9MLVLlp3lVUy2xXGRuor6QRGhrm7YvUJP3YwAR9X3FLT+wxdzDs8HiU VMQ2ZhfQqV/6htvECtAhTG00LKii0tE3Jy5l3hZS6f/SF6XO/S20qOQQO DApfq3z9LF2eGONBToXiRHOuk7Ju2ieIHf2ADV6HHKjfYafjineA1Ylm8 /zMUDo9IcZNmzYxyyZ526/nHmdi4B9CUL0Fskzzd+QqgxZLzWaIbWc59s E/aF5Rnmf/wN+MMss+KcNWHUDIVdbBZhROiopRkTC0SdjV06bbdauIFtw g==; IronPort-SDR: 20zP986tBUR+9gVvxCeEjpLLzfmXNcXSpIcAYBLym2Ww5k7wZ/tQ9mwPpnIaTWcKbeF9YXVUW1 I+hbmCLyU+NDTFiaV9UH2kwXsnbq8oDv64xAO4LKQuSC5uIiR94TWEDyWvjA1ttnSNK2lQO8Tp zj/Y+6IWrhNa7eDYdhGnE+sxHmuD74ionyZJJ0ZYJ2pFsiZamFxTvoYFZo5jLaFfAotWLF117q 9z6W/Djx9MAxUH3vdiH3xso80JV8c+PTFPcrenL1pp03JQU4/QO776pZq5CR2u7N1XniJHMMOo eMk= X-IronPort-AV: E=Sophos;i="5.64,322,1559491200"; d="scan'208";a="114381582" 
From: Anup Patel
To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K
Subject: [RFC PATCH 06/16] RISC-V: KVM: Implement KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls
Date: Mon, 29 Jul 2019 11:56:59 +0000
Message-ID: <20190729115544.17895-7-anup.patel@wdc.com>
References: <20190729115544.17895-1-anup.patel@wdc.com>
In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com>
X-Mailer: git-send-email 2.17.1

For KVM RISC-V, we use the KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls to access VCPU config and registers from user-space. We have two types of VCPU registers:
1. CONFIG - these are VCPU config and capabilities
2. CORE   - these are VCPU general purpose registers
The CONFIG registers available to user-space are ISA and TIMEBASE. Out of these, TIMEBASE is a read-only register which informs user-space about the VCPU timer base frequency. The ISA register is a read-write register; user-space can only write the desired VCPU ISA capabilities before running the VCPU. The CORE registers available to user-space are PC, RA, SP, GP, TP, A0-A7, T0-T6, S0-S11 and MODE.
Most of these are RISC-V general registers except PC and MODE. The PC register represents program counter whereas the MODE register represent VCPU privilege mode (i.e. S/U-mode). In future, more VCPU register types will be added such as FP, CSRs, etc for KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls. Signed-off-by: Anup Patel --- arch/riscv/include/uapi/asm/kvm.h | 24 ++++ arch/riscv/kvm/vcpu.c | 177 +++++++++++++++++++++++++++++- 2 files changed, 199 insertions(+), 2 deletions(-) diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h index 6dbc056d58ba..6c28a1b6e9be 100644 --- a/arch/riscv/include/uapi/asm/kvm.h +++ b/arch/riscv/include/uapi/asm/kvm.h @@ -23,8 +23,15 @@ /* for KVM_GET_REGS and KVM_SET_REGS */ struct kvm_regs { + /* out (KVM_GET_REGS) / in (KVM_SET_REGS) */ + struct user_regs_struct regs; + unsigned long mode; }; +/* Possible privilege modes for kvm_regs */ +#define KVM_RISCV_MODE_S 1 +#define KVM_RISCV_MODE_U 0 + /* for KVM_GET_FPU and KVM_SET_FPU */ struct kvm_fpu { }; @@ -45,6 +52,23 @@ struct kvm_sync_regs { struct kvm_sregs { }; +#define KVM_REG_SIZE(id) \ + (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT)) + +/* If you need to interpret the index values, here is the key: */ +#define KVM_REG_RISCV_TYPE_MASK 0x00000000FF000000 +#define KVM_REG_RISCV_TYPE_SHIFT 24 + +/* Config registers are mapped as type 1 */ +#define KVM_REG_RISCV_CONFIG (0x01 << KVM_REG_RISCV_TYPE_SHIFT) +#define KVM_REG_RISCV_CONFIG_ISA 0x0 +#define KVM_REG_RISCV_CONFIG_TIMEBASE 0x1 + +/* Core registers are mapped as type 2 */ +#define KVM_REG_RISCV_CORE (0x02 << KVM_REG_RISCV_TYPE_SHIFT) +#define KVM_REG_RISCV_CORE_REG(name) \ + (offsetof(struct kvm_regs, name) / sizeof(unsigned long)) + #endif #endif /* __LINUX_KVM_RISCV_H */ diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index c6f57caa95f0..37368eeb6c41 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -189,6 +189,157 @@ vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf) return VM_FAULT_SIGBUS; } +static int kvm_riscv_vcpu_get_reg_config(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + unsigned long __user *uaddr = + (unsigned long __user *)(unsigned long)reg->addr; + unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK | + KVM_REG_SIZE_MASK | + KVM_REG_RISCV_CONFIG); + unsigned long reg_val; + + if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long)) + return -EINVAL; + + switch (reg_num) { + case KVM_REG_RISCV_CONFIG_ISA: + reg_val = vcpu->arch.isa; + break; + case KVM_REG_RISCV_CONFIG_TIMEBASE: + reg_val = riscv_timebase; + break; + default: + return -EINVAL; + }; + + if (copy_to_user(uaddr, ®_val, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + return 0; +} + +static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + unsigned long __user *uaddr = + (unsigned long __user *)(unsigned long)reg->addr; + unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK | + KVM_REG_SIZE_MASK | + KVM_REG_RISCV_CONFIG); + unsigned long reg_val; + + if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long)) + return -EINVAL; + + if (copy_from_user(®_val, uaddr, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + switch (reg_num) { + case KVM_REG_RISCV_CONFIG_ISA: + if (!vcpu->arch.ran_atleast_once) { + vcpu->arch.isa = reg_val; + vcpu->arch.isa &= riscv_isa; + vcpu->arch.isa &= KVM_RISCV_ISA_ALLOWED; + } else { + return -ENOTSUPP; + } + break; + case KVM_REG_RISCV_CONFIG_TIMEBASE: + return -ENOTSUPP; + default: + return -EINVAL; + }; + + return 
0; +} + +static int kvm_riscv_vcpu_get_reg_core(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + struct kvm_cpu_context *cntx = &vcpu->arch.guest_context; + unsigned long __user *uaddr = + (unsigned long __user *)(unsigned long)reg->addr; + unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK | + KVM_REG_SIZE_MASK | + KVM_REG_RISCV_CORE); + unsigned long reg_val; + + if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long)) + return -EINVAL; + + if (reg_num == KVM_REG_RISCV_CORE_REG(regs.pc)) + reg_val = cntx->sepc; + else if (KVM_REG_RISCV_CORE_REG(regs.pc) < reg_num && + reg_num <= KVM_REG_RISCV_CORE_REG(regs.t6)) + reg_val = ((unsigned long *)cntx)[reg_num]; + else if (reg_num == KVM_REG_RISCV_CORE_REG(mode)) + reg_val = (cntx->sstatus & SR_SPP) ? + KVM_RISCV_MODE_S : KVM_RISCV_MODE_U; + else + return -EINVAL; + + if (copy_to_user(uaddr, ®_val, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + return 0; +} + +static int kvm_riscv_vcpu_set_reg_core(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + struct kvm_cpu_context *cntx = &vcpu->arch.guest_context; + unsigned long __user *uaddr = + (unsigned long __user *)(unsigned long)reg->addr; + unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK | + KVM_REG_SIZE_MASK | + KVM_REG_RISCV_CORE); + unsigned long reg_val; + + if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long)) + return -EINVAL; + + if (copy_from_user(®_val, uaddr, KVM_REG_SIZE(reg->id))) + return -EFAULT; + + if (reg_num == KVM_REG_RISCV_CORE_REG(regs.pc)) + cntx->sepc = reg_val; + else if (KVM_REG_RISCV_CORE_REG(regs.pc) < reg_num && + reg_num <= KVM_REG_RISCV_CORE_REG(regs.t6)) + ((unsigned long *)cntx)[reg_num] = reg_val; + else if (reg_num == KVM_REG_RISCV_CORE_REG(mode)) { + if (reg_val == KVM_RISCV_MODE_S) + cntx->sstatus |= SR_SPP; + else + cntx->sstatus &= ~SR_SPP; + } else + return -EINVAL; + + return 0; +} + +static int kvm_riscv_vcpu_set_reg(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + if ((reg->id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CONFIG) + return kvm_riscv_vcpu_set_reg_config(vcpu, reg); + else if ((reg->id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CORE) + return kvm_riscv_vcpu_set_reg_core(vcpu, reg); + + return -EINVAL; +} + +static int kvm_riscv_vcpu_get_reg(struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + if ((reg->id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CONFIG) + return kvm_riscv_vcpu_get_reg_config(vcpu, reg); + else if ((reg->id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CORE) + return kvm_riscv_vcpu_get_reg_core(vcpu, reg); + + return -EINVAL; +} + long kvm_arch_vcpu_async_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { @@ -213,8 +364,30 @@ long kvm_arch_vcpu_async_ioctl(struct file *filp, long kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { - /* TODO: */ - return -EINVAL; + struct kvm_vcpu *vcpu = filp->private_data; + void __user *argp = (void __user *)arg; + long r = -EINVAL; + + switch (ioctl) { + case KVM_SET_ONE_REG: + case KVM_GET_ONE_REG: { + struct kvm_one_reg reg; + + r = -EFAULT; + if (copy_from_user(®, argp, sizeof(reg))) + break; + + if (ioctl == KVM_SET_ONE_REG) + r = kvm_riscv_vcpu_set_reg(vcpu, ®); + else + r = kvm_riscv_vcpu_get_reg(vcpu, ®); + break; + } + default: + break; + } + + return r; } int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu, From patchwork Mon Jul 29 11:57:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 
11063659
From: Anup Patel
To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K
Subject: [RFC PATCH 07/16] RISC-V: KVM: Implement VCPU world-switch
Date: Mon, 29 Jul 2019 11:57:05 +0000
Message-ID: <20190729115544.17895-8-anup.patel@wdc.com>
References: <20190729115544.17895-1-anup.patel@wdc.com>
In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com>
X-Mailer: git-send-email 2.17.1

This patch implements the VCPU world-switch for KVM RISC-V. The KVM RISC-V world-switch (i.e. __kvm_riscv_switch_to()) mostly switches the general purpose registers and the SSTATUS, STVEC, SSCRATCH and HSTATUS CSRs. Other CSRs are switched via the vcpu_load()/vcpu_put() interface, i.e. in the kvm_arch_vcpu_load() and kvm_arch_vcpu_put() functions respectively. The assembly world-switch code addresses the guest/host register save areas in struct kvm_vcpu_arch through constants generated by asm-offsets.c, as sketched below.
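As background (this sketch is not part of the patch): the OFFSET() entries added to asm-offsets.c become plain #define constants in include/generated/asm-offsets.h at build time, which is what lets vcpu_switch.S address fields of struct kvm_vcpu_arch relative to the pointer passed in a0. The numeric offset value below is made up for illustration; the real value is computed at build time.

/* arch/riscv/kernel/asm-offsets.c (added by this patch) */
OFFSET(KVM_ARCH_GUEST_RA, kvm_vcpu_arch, guest_context.ra);
	/* i.e. offsetof(struct kvm_vcpu_arch, guest_context.ra) */

/* include/generated/asm-offsets.h (build artifact; 168 is only illustrative) */
#define KVM_ARCH_GUEST_RA 168 /* offsetof(struct kvm_vcpu_arch, guest_context.ra) */

/* arch/riscv/kvm/vcpu_switch.S (added by this patch); a0 holds &vcpu->arch */
	REG_L	ra, (KVM_ARCH_GUEST_RA)(a0)	/* restore the guest's saved ra */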
Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_host.h | 9 +- arch/riscv/kernel/asm-offsets.c | 76 ++++++++++++ arch/riscv/kvm/Makefile | 2 +- arch/riscv/kvm/vcpu.c | 33 ++++- arch/riscv/kvm/vcpu_switch.S | 193 ++++++++++++++++++++++++++++++ 5 files changed, 309 insertions(+), 4 deletions(-) create mode 100644 arch/riscv/kvm/vcpu_switch.S diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index aa89f1922da1..006785bd6474 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -113,6 +113,13 @@ struct kvm_vcpu_arch { /* ISA feature bits (similar to MISA) */ unsigned long isa; + /* SSCRATCH and STVEC of Host */ + unsigned long host_sscratch; + unsigned long host_stvec; + + /* CPU context of Host */ + struct kvm_cpu_context host_context; + /* CPU context of Guest VCPU */ struct kvm_cpu_context guest_context; @@ -151,7 +158,7 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run); int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, unsigned long scause, unsigned long stval); -static inline void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch) {} +void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch); int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq); int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq); diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c index 9f5628c38ac9..711656710190 100644 --- a/arch/riscv/kernel/asm-offsets.c +++ b/arch/riscv/kernel/asm-offsets.c @@ -7,7 +7,9 @@ #define GENERATING_ASM_OFFSETS #include +#include #include +#include #include #include @@ -109,6 +111,80 @@ void asm_offsets(void) OFFSET(PT_SBADADDR, pt_regs, sbadaddr); OFFSET(PT_SCAUSE, pt_regs, scause); + OFFSET(KVM_ARCH_GUEST_ZERO, kvm_vcpu_arch, guest_context.zero); + OFFSET(KVM_ARCH_GUEST_RA, kvm_vcpu_arch, guest_context.ra); + OFFSET(KVM_ARCH_GUEST_SP, kvm_vcpu_arch, guest_context.sp); + OFFSET(KVM_ARCH_GUEST_GP, kvm_vcpu_arch, guest_context.gp); + OFFSET(KVM_ARCH_GUEST_TP, kvm_vcpu_arch, guest_context.tp); + OFFSET(KVM_ARCH_GUEST_T0, kvm_vcpu_arch, guest_context.t0); + OFFSET(KVM_ARCH_GUEST_T1, kvm_vcpu_arch, guest_context.t1); + OFFSET(KVM_ARCH_GUEST_T2, kvm_vcpu_arch, guest_context.t2); + OFFSET(KVM_ARCH_GUEST_S0, kvm_vcpu_arch, guest_context.s0); + OFFSET(KVM_ARCH_GUEST_S1, kvm_vcpu_arch, guest_context.s1); + OFFSET(KVM_ARCH_GUEST_A0, kvm_vcpu_arch, guest_context.a0); + OFFSET(KVM_ARCH_GUEST_A1, kvm_vcpu_arch, guest_context.a1); + OFFSET(KVM_ARCH_GUEST_A2, kvm_vcpu_arch, guest_context.a2); + OFFSET(KVM_ARCH_GUEST_A3, kvm_vcpu_arch, guest_context.a3); + OFFSET(KVM_ARCH_GUEST_A4, kvm_vcpu_arch, guest_context.a4); + OFFSET(KVM_ARCH_GUEST_A5, kvm_vcpu_arch, guest_context.a5); + OFFSET(KVM_ARCH_GUEST_A6, kvm_vcpu_arch, guest_context.a6); + OFFSET(KVM_ARCH_GUEST_A7, kvm_vcpu_arch, guest_context.a7); + OFFSET(KVM_ARCH_GUEST_S2, kvm_vcpu_arch, guest_context.s2); + OFFSET(KVM_ARCH_GUEST_S3, kvm_vcpu_arch, guest_context.s3); + OFFSET(KVM_ARCH_GUEST_S4, kvm_vcpu_arch, guest_context.s4); + OFFSET(KVM_ARCH_GUEST_S5, kvm_vcpu_arch, guest_context.s5); + OFFSET(KVM_ARCH_GUEST_S6, kvm_vcpu_arch, guest_context.s6); + OFFSET(KVM_ARCH_GUEST_S7, kvm_vcpu_arch, guest_context.s7); + OFFSET(KVM_ARCH_GUEST_S8, kvm_vcpu_arch, guest_context.s8); + OFFSET(KVM_ARCH_GUEST_S9, kvm_vcpu_arch, guest_context.s9); + OFFSET(KVM_ARCH_GUEST_S10, kvm_vcpu_arch, guest_context.s10); + OFFSET(KVM_ARCH_GUEST_S11, kvm_vcpu_arch, guest_context.s11); 
+ OFFSET(KVM_ARCH_GUEST_T3, kvm_vcpu_arch, guest_context.t3); + OFFSET(KVM_ARCH_GUEST_T4, kvm_vcpu_arch, guest_context.t4); + OFFSET(KVM_ARCH_GUEST_T5, kvm_vcpu_arch, guest_context.t5); + OFFSET(KVM_ARCH_GUEST_T6, kvm_vcpu_arch, guest_context.t6); + OFFSET(KVM_ARCH_GUEST_SEPC, kvm_vcpu_arch, guest_context.sepc); + OFFSET(KVM_ARCH_GUEST_SSTATUS, kvm_vcpu_arch, guest_context.sstatus); + OFFSET(KVM_ARCH_GUEST_HSTATUS, kvm_vcpu_arch, guest_context.hstatus); + + OFFSET(KVM_ARCH_HOST_ZERO, kvm_vcpu_arch, host_context.zero); + OFFSET(KVM_ARCH_HOST_RA, kvm_vcpu_arch, host_context.ra); + OFFSET(KVM_ARCH_HOST_SP, kvm_vcpu_arch, host_context.sp); + OFFSET(KVM_ARCH_HOST_GP, kvm_vcpu_arch, host_context.gp); + OFFSET(KVM_ARCH_HOST_TP, kvm_vcpu_arch, host_context.tp); + OFFSET(KVM_ARCH_HOST_T0, kvm_vcpu_arch, host_context.t0); + OFFSET(KVM_ARCH_HOST_T1, kvm_vcpu_arch, host_context.t1); + OFFSET(KVM_ARCH_HOST_T2, kvm_vcpu_arch, host_context.t2); + OFFSET(KVM_ARCH_HOST_S0, kvm_vcpu_arch, host_context.s0); + OFFSET(KVM_ARCH_HOST_S1, kvm_vcpu_arch, host_context.s1); + OFFSET(KVM_ARCH_HOST_A0, kvm_vcpu_arch, host_context.a0); + OFFSET(KVM_ARCH_HOST_A1, kvm_vcpu_arch, host_context.a1); + OFFSET(KVM_ARCH_HOST_A2, kvm_vcpu_arch, host_context.a2); + OFFSET(KVM_ARCH_HOST_A3, kvm_vcpu_arch, host_context.a3); + OFFSET(KVM_ARCH_HOST_A4, kvm_vcpu_arch, host_context.a4); + OFFSET(KVM_ARCH_HOST_A5, kvm_vcpu_arch, host_context.a5); + OFFSET(KVM_ARCH_HOST_A6, kvm_vcpu_arch, host_context.a6); + OFFSET(KVM_ARCH_HOST_A7, kvm_vcpu_arch, host_context.a7); + OFFSET(KVM_ARCH_HOST_S2, kvm_vcpu_arch, host_context.s2); + OFFSET(KVM_ARCH_HOST_S3, kvm_vcpu_arch, host_context.s3); + OFFSET(KVM_ARCH_HOST_S4, kvm_vcpu_arch, host_context.s4); + OFFSET(KVM_ARCH_HOST_S5, kvm_vcpu_arch, host_context.s5); + OFFSET(KVM_ARCH_HOST_S6, kvm_vcpu_arch, host_context.s6); + OFFSET(KVM_ARCH_HOST_S7, kvm_vcpu_arch, host_context.s7); + OFFSET(KVM_ARCH_HOST_S8, kvm_vcpu_arch, host_context.s8); + OFFSET(KVM_ARCH_HOST_S9, kvm_vcpu_arch, host_context.s9); + OFFSET(KVM_ARCH_HOST_S10, kvm_vcpu_arch, host_context.s10); + OFFSET(KVM_ARCH_HOST_S11, kvm_vcpu_arch, host_context.s11); + OFFSET(KVM_ARCH_HOST_T3, kvm_vcpu_arch, host_context.t3); + OFFSET(KVM_ARCH_HOST_T4, kvm_vcpu_arch, host_context.t4); + OFFSET(KVM_ARCH_HOST_T5, kvm_vcpu_arch, host_context.t5); + OFFSET(KVM_ARCH_HOST_T6, kvm_vcpu_arch, host_context.t6); + OFFSET(KVM_ARCH_HOST_SEPC, kvm_vcpu_arch, host_context.sepc); + OFFSET(KVM_ARCH_HOST_SSTATUS, kvm_vcpu_arch, host_context.sstatus); + OFFSET(KVM_ARCH_HOST_HSTATUS, kvm_vcpu_arch, host_context.hstatus); + OFFSET(KVM_ARCH_HOST_SSCRATCH, kvm_vcpu_arch, host_sscratch); + OFFSET(KVM_ARCH_HOST_STVEC, kvm_vcpu_arch, host_stvec); + /* * THREAD_{F,X}* might be larger than a S-type offset can handle, but * these are used in performance-sensitive assembly so we can't resort diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 37b5a59d4f4f..845579273727 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -8,6 +8,6 @@ ccflags-y := -Ivirt/kvm -Iarch/riscv/kvm kvm-objs := $(common-objs-y) -kvm-objs += main.o vm.o mmu.o vcpu.o vcpu_exit.o +kvm-objs += main.o vm.o mmu.o vcpu.o vcpu_exit.o vcpu_switch.o obj-$(CONFIG_KVM) += kvm.o diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 37368eeb6c41..4ab9f803536e 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -546,14 +546,43 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu, void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) { - /* 
TODO: */ + struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; + + csr_write(CSR_HIDELEG, csr->hideleg); + csr_write(CSR_HEDELEG, csr->hedeleg); + csr_write(CSR_VSSTATUS, csr->vsstatus); + csr_write(CSR_VSIE, csr->vsie); + csr_write(CSR_VSTVEC, csr->vstvec); + csr_write(CSR_VSSCRATCH, csr->vsscratch); + csr_write(CSR_VSEPC, csr->vsepc); + csr_write(CSR_VSCAUSE, csr->vscause); + csr_write(CSR_VSTVAL, csr->vstval); + csr_write(CSR_VSIP, csr->vsip); + csr_write(CSR_VSATP, csr->vsatp); kvm_riscv_stage2_update_pgtbl(vcpu); + + vcpu->cpu = cpu; } void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) { - /* TODO: */ + struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; + + vcpu->cpu = -1; + + csr_write(CSR_HGATP, 0); + csr_write(CSR_HIDELEG, 0); + csr_write(CSR_HEDELEG, 0); + csr->vsstatus = csr_read(CSR_VSSTATUS); + csr->vsie = csr_read(CSR_VSIE); + csr->vstvec = csr_read(CSR_VSTVEC); + csr->vsscratch = csr_read(CSR_VSSCRATCH); + csr->vsepc = csr_read(CSR_VSEPC); + csr->vscause = csr_read(CSR_VSCAUSE); + csr->vstval = csr_read(CSR_VSTVAL); + csr->vsip = csr_read(CSR_VSIP); + csr->vsatp = csr_read(CSR_VSATP); } static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu) diff --git a/arch/riscv/kvm/vcpu_switch.S b/arch/riscv/kvm/vcpu_switch.S new file mode 100644 index 000000000000..c5b85605bf73 --- /dev/null +++ b/arch/riscv/kvm/vcpu_switch.S @@ -0,0 +1,193 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include + + .text + .altmacro + +ENTRY(__kvm_riscv_switch_to) + /* Save Host GPRs (except A0 and T0-T6) */ + REG_S ra, (KVM_ARCH_HOST_RA)(a0) + REG_S sp, (KVM_ARCH_HOST_SP)(a0) + REG_S gp, (KVM_ARCH_HOST_GP)(a0) + REG_S tp, (KVM_ARCH_HOST_TP)(a0) + REG_S s0, (KVM_ARCH_HOST_S0)(a0) + REG_S s1, (KVM_ARCH_HOST_S1)(a0) + REG_S a1, (KVM_ARCH_HOST_A1)(a0) + REG_S a2, (KVM_ARCH_HOST_A2)(a0) + REG_S a3, (KVM_ARCH_HOST_A3)(a0) + REG_S a4, (KVM_ARCH_HOST_A4)(a0) + REG_S a5, (KVM_ARCH_HOST_A5)(a0) + REG_S a6, (KVM_ARCH_HOST_A6)(a0) + REG_S a7, (KVM_ARCH_HOST_A7)(a0) + REG_S s2, (KVM_ARCH_HOST_S2)(a0) + REG_S s3, (KVM_ARCH_HOST_S3)(a0) + REG_S s4, (KVM_ARCH_HOST_S4)(a0) + REG_S s5, (KVM_ARCH_HOST_S5)(a0) + REG_S s6, (KVM_ARCH_HOST_S6)(a0) + REG_S s7, (KVM_ARCH_HOST_S7)(a0) + REG_S s8, (KVM_ARCH_HOST_S8)(a0) + REG_S s9, (KVM_ARCH_HOST_S9)(a0) + REG_S s10, (KVM_ARCH_HOST_S10)(a0) + REG_S s11, (KVM_ARCH_HOST_S11)(a0) + + /* Save Host SSTATUS, HSTATUS, SCRATCH and STVEC */ + csrr t0, CSR_SSTATUS + REG_S t0, (KVM_ARCH_HOST_SSTATUS)(a0) + csrr t1, CSR_HSTATUS + REG_S t1, (KVM_ARCH_HOST_HSTATUS)(a0) + csrr t2, CSR_SSCRATCH + REG_S t2, (KVM_ARCH_HOST_SSCRATCH)(a0) + csrr t3, CSR_STVEC + REG_S t3, (KVM_ARCH_HOST_STVEC)(a0) + + /* Change Host exception vector to return path */ + la t4, __kvm_switch_return + csrw CSR_STVEC, t4 + + /* Restore Guest HSTATUS, SSTATUS and SEPC */ + REG_L t4, (KVM_ARCH_GUEST_SEPC)(a0) + csrw CSR_SEPC, t4 + REG_L t5, (KVM_ARCH_GUEST_SSTATUS)(a0) + csrw CSR_SSTATUS, t5 + REG_L t6, (KVM_ARCH_GUEST_HSTATUS)(a0) + csrw CSR_HSTATUS, t6 + + /* Restore Guest GPRs (except A0) */ + REG_L ra, (KVM_ARCH_GUEST_RA)(a0) + REG_L sp, (KVM_ARCH_GUEST_SP)(a0) + REG_L gp, (KVM_ARCH_GUEST_GP)(a0) + REG_L tp, (KVM_ARCH_GUEST_TP)(a0) + REG_L t0, (KVM_ARCH_GUEST_T0)(a0) + REG_L t1, (KVM_ARCH_GUEST_T1)(a0) + REG_L t2, (KVM_ARCH_GUEST_T2)(a0) + REG_L s0, (KVM_ARCH_GUEST_S0)(a0) + REG_L s1, (KVM_ARCH_GUEST_S1)(a0) + REG_L a1, (KVM_ARCH_GUEST_A1)(a0) + REG_L a2, (KVM_ARCH_GUEST_A2)(a0) + 
REG_L a3, (KVM_ARCH_GUEST_A3)(a0) + REG_L a4, (KVM_ARCH_GUEST_A4)(a0) + REG_L a5, (KVM_ARCH_GUEST_A5)(a0) + REG_L a6, (KVM_ARCH_GUEST_A6)(a0) + REG_L a7, (KVM_ARCH_GUEST_A7)(a0) + REG_L s2, (KVM_ARCH_GUEST_S2)(a0) + REG_L s3, (KVM_ARCH_GUEST_S3)(a0) + REG_L s4, (KVM_ARCH_GUEST_S4)(a0) + REG_L s5, (KVM_ARCH_GUEST_S5)(a0) + REG_L s6, (KVM_ARCH_GUEST_S6)(a0) + REG_L s7, (KVM_ARCH_GUEST_S7)(a0) + REG_L s8, (KVM_ARCH_GUEST_S8)(a0) + REG_L s9, (KVM_ARCH_GUEST_S9)(a0) + REG_L s10, (KVM_ARCH_GUEST_S10)(a0) + REG_L s11, (KVM_ARCH_GUEST_S11)(a0) + REG_L t3, (KVM_ARCH_GUEST_T3)(a0) + REG_L t4, (KVM_ARCH_GUEST_T4)(a0) + REG_L t5, (KVM_ARCH_GUEST_T5)(a0) + REG_L t6, (KVM_ARCH_GUEST_T6)(a0) + + /* Save Host A0 in SSCRATCH */ + csrw CSR_SSCRATCH, a0 + + /* Restore Guest A0 */ + REG_L a0, (KVM_ARCH_GUEST_A0)(a0) + + /* Resume Guest */ + sret + + /* Back to Host */ + .align 2 +__kvm_switch_return: + /* Swap Guest A0 with SSCRATCH */ + csrrw a0, CSR_SSCRATCH, a0 + + /* Save Guest GPRs (except A0) */ + REG_S ra, (KVM_ARCH_GUEST_RA)(a0) + REG_S sp, (KVM_ARCH_GUEST_SP)(a0) + REG_S gp, (KVM_ARCH_GUEST_GP)(a0) + REG_S tp, (KVM_ARCH_GUEST_TP)(a0) + REG_S t0, (KVM_ARCH_GUEST_T0)(a0) + REG_S t1, (KVM_ARCH_GUEST_T1)(a0) + REG_S t2, (KVM_ARCH_GUEST_T2)(a0) + REG_S s0, (KVM_ARCH_GUEST_S0)(a0) + REG_S s1, (KVM_ARCH_GUEST_S1)(a0) + REG_S a1, (KVM_ARCH_GUEST_A1)(a0) + REG_S a2, (KVM_ARCH_GUEST_A2)(a0) + REG_S a3, (KVM_ARCH_GUEST_A3)(a0) + REG_S a4, (KVM_ARCH_GUEST_A4)(a0) + REG_S a5, (KVM_ARCH_GUEST_A5)(a0) + REG_S a6, (KVM_ARCH_GUEST_A6)(a0) + REG_S a7, (KVM_ARCH_GUEST_A7)(a0) + REG_S s2, (KVM_ARCH_GUEST_S2)(a0) + REG_S s3, (KVM_ARCH_GUEST_S3)(a0) + REG_S s4, (KVM_ARCH_GUEST_S4)(a0) + REG_S s5, (KVM_ARCH_GUEST_S5)(a0) + REG_S s6, (KVM_ARCH_GUEST_S6)(a0) + REG_S s7, (KVM_ARCH_GUEST_S7)(a0) + REG_S s8, (KVM_ARCH_GUEST_S8)(a0) + REG_S s9, (KVM_ARCH_GUEST_S9)(a0) + REG_S s10, (KVM_ARCH_GUEST_S10)(a0) + REG_S s11, (KVM_ARCH_GUEST_S11)(a0) + REG_S t3, (KVM_ARCH_GUEST_T3)(a0) + REG_S t4, (KVM_ARCH_GUEST_T4)(a0) + REG_S t5, (KVM_ARCH_GUEST_T5)(a0) + REG_S t6, (KVM_ARCH_GUEST_T6)(a0) + + /* Save Guest A0 */ + csrr t0, CSR_SSCRATCH + REG_S t0, (KVM_ARCH_GUEST_A0)(a0) + + /* Save Guest HSTATUS, SSTATUS, and SEPC */ + csrr t0, CSR_SEPC + REG_S t0, (KVM_ARCH_GUEST_SEPC)(a0) + csrr t1, CSR_SSTATUS + REG_S t1, (KVM_ARCH_GUEST_SSTATUS)(a0) + csrr t2, CSR_HSTATUS + REG_S t2, (KVM_ARCH_GUEST_HSTATUS)(a0) + + /* Restore Host SSTATUS, HSTATUS, SCRATCH and STVEC */ + REG_L t3, (KVM_ARCH_HOST_SSTATUS)(a0) + csrw CSR_SSTATUS, t3 + REG_L t4, (KVM_ARCH_HOST_HSTATUS)(a0) + csrw CSR_HSTATUS, t4 + REG_L t5, (KVM_ARCH_HOST_SSCRATCH)(a0) + csrw CSR_SSCRATCH, t5 + REG_L t6, (KVM_ARCH_HOST_STVEC)(a0) + csrw CSR_STVEC, t6 + + /* Restore Host GPRs (except A0 and T0-T6) */ + REG_L ra, (KVM_ARCH_HOST_RA)(a0) + REG_L sp, (KVM_ARCH_HOST_SP)(a0) + REG_L gp, (KVM_ARCH_HOST_GP)(a0) + REG_L tp, (KVM_ARCH_HOST_TP)(a0) + REG_L s0, (KVM_ARCH_HOST_S0)(a0) + REG_L s1, (KVM_ARCH_HOST_S1)(a0) + REG_L a1, (KVM_ARCH_HOST_A1)(a0) + REG_L a2, (KVM_ARCH_HOST_A2)(a0) + REG_L a3, (KVM_ARCH_HOST_A3)(a0) + REG_L a4, (KVM_ARCH_HOST_A4)(a0) + REG_L a5, (KVM_ARCH_HOST_A5)(a0) + REG_L a6, (KVM_ARCH_HOST_A6)(a0) + REG_L a7, (KVM_ARCH_HOST_A7)(a0) + REG_L s2, (KVM_ARCH_HOST_S2)(a0) + REG_L s3, (KVM_ARCH_HOST_S3)(a0) + REG_L s4, (KVM_ARCH_HOST_S4)(a0) + REG_L s5, (KVM_ARCH_HOST_S5)(a0) + REG_L s6, (KVM_ARCH_HOST_S6)(a0) + REG_L s7, (KVM_ARCH_HOST_S7)(a0) + REG_L s8, (KVM_ARCH_HOST_S8)(a0) + REG_L s9, (KVM_ARCH_HOST_S9)(a0) + REG_L s10, (KVM_ARCH_HOST_S10)(a0) + REG_L s11, 
(KVM_ARCH_HOST_S11)(a0) + + /* Return to C code */ + ret +ENDPROC(__kvm_riscv_switch_to)

From patchwork Mon Jul 29 11:57:11 2019
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 11063663
E=Sophos;i="5.64,322,1559491200"; d="scan'208";a="214553098" Received: from mail-by2nam01lp2054.outbound.protection.outlook.com (HELO NAM01-BY2-obe.outbound.protection.outlook.com) ([104.47.34.54]) by ob1.hgst.iphmx.com with ESMTP; 29 Jul 2019 19:57:47 +0800 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=LM7IdwnO0ctYNOrFfllsI/RqAEKLr7o+PhARxy4TgCLC1WykyBTLRs4uX1+BHmXMq/VU0tNvb9VhQKbLOvNQeBUNSH0/p2phBnMHEgx+Px5Kwx9RG/9Mj2jQ8fXKQR5maQSbF/ptN2VYtFn3IMaum37F/k6egrI5t+U5aia4Shjf6YvWul/m0RjG/vxWtDyKCTXBn22lFuDleivgQG9Av3/I3W+8sHc0XHW6ya9/L4+t0JcIe0fknm5Ri5CxVFsEkehSuqg+8J2poYjoJJTnnGbta5yBs4WV0hnw0A1W5hR5V4875mrWSdHA5huOYJAPjVSPd8QyDMHR8L8z+mgVLg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=e2FcxQ+LvCmMPq17s19+HkWsiQ1GmFeddPkcnGiLIYY=; b=POB3RxzQ+4rrAw711pBBq0cOLiiVKHTuzcAJLCbykxw4x8fkdrUaZAc3r4sk/M/1rwa3Tm6ykfdEFudjzSGvdtt9O2aHnbu22TTH3k/4GjuArkkNxaI4MtSao/uabA46qasgUYR66h+q15+CHQzVuw/4tAA4//xeAfjjZZ6xWbUP4J3FCJ7Jy66NmND/Pepn3cTxntGUlF6q6bkgXmBjorEFdjpn4u5vDj421w3DmIelbfmQdpoYa88FLnV8zdvydisFwWvumlL2a26icE0H6fvWnRtOqr8X4pQSB/ilKMvblf2UY3ZiqzDht2fnO/Luru5codSjAVlRn6/B7P64jQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1;spf=pass smtp.mailfrom=wdc.com;dmarc=pass action=none header.from=wdc.com;dkim=pass header.d=wdc.com;arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=e2FcxQ+LvCmMPq17s19+HkWsiQ1GmFeddPkcnGiLIYY=; b=shpgNNzzffGktpQmFtOTm3x0mi9NwpcTthjGued2Powm+J7W9mRVsVenGoSfMV+etnB7ZFJbl7cO/7bX2R+PE4pPZiQx2j6i+d90ZfqY2SBdYOw5donsuVS6LmN+Goh2jkL+e4BRr5dCWUdXWWxyNaegXgjUmge1S5BJo9qZo/M= Received: from MN2PR04MB6061.namprd04.prod.outlook.com (20.178.246.15) by MN2PR04MB5678.namprd04.prod.outlook.com (20.179.21.211) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2115.14; Mon, 29 Jul 2019 11:57:12 +0000 Received: from MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8]) by MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8%7]) with mapi id 15.20.2115.005; Mon, 29 Jul 2019 11:57:12 +0000 From: Anup Patel To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K Subject: [RFC PATCH 08/16] RISC-V: KVM: Handle MMIO exits for VCPU Thread-Topic: [RFC PATCH 08/16] RISC-V: KVM: Handle MMIO exits for VCPU Thread-Index: AQHVRgTBV1nnPakppkiiH6g+ziCH+A== Date: Mon, 29 Jul 2019 11:57:11 +0000 Message-ID: <20190729115544.17895-9-anup.patel@wdc.com> References: <20190729115544.17895-1-anup.patel@wdc.com> In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-clientproxiedby: PN1PR01CA0116.INDPRD01.PROD.OUTLOOK.COM (2603:1096:c00::32) To MN2PR04MB6061.namprd04.prod.outlook.com (2603:10b6:208:d8::15) authentication-results: spf=none (sender IP is ) smtp.mailfrom=Anup.Patel@wdc.com; x-ms-exchange-messagesentrepresentingtype: 1 x-mailer: git-send-email 2.17.1 x-originating-ip: [106.51.23.101] x-ms-publictraffictype: Email x-ms-office365-filtering-correlation-id: 4dc42f81-e0a8-4dad-1e37-08d7141be3dd x-ms-office365-filtering-ht: Tenant x-microsoft-antispam: BCL:0; PCL:0; 
We get stage2 page faults whenever the Guest/VM accesses a SW-emulated MMIO device or unmapped Guest RAM. This patch implements MMIO read/write emulation by extracting the MMIO details from the trapped load/store instruction and forwarding the MMIO read/write to user-space. The actual MMIO emulation happens in user-space; the KVM kernel module only takes care of register updates before resuming the trapped VCPU. The handling of stage2 page faults for unmapped Guest RAM will be implemented by a separate patch later.
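For context, a minimal sketch of the user-space side that this exit path assumes (not part of this patch): the VMM sees KVM_EXIT_MMIO, emulates the access using run->mmio.{is_write, phys_addr, len, data}, and re-enters the VCPU, at which point kvm_riscv_vcpu_mmio_return() below copies the read data into the trapped destination register. The mmio_device_read()/mmio_device_write() helpers are hypothetical placeholders for a device model.

/*
 * Hedged sketch of a VMM run loop handling KVM_EXIT_MMIO.  Only the MMIO
 * path is shown; the device-model helpers are made-up stubs.
 */
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdint.h>
#include <string.h>

static uint64_t mmio_device_read(uint64_t addr, uint32_t len)
{
	(void)addr; (void)len;
	return 0;			/* real device model goes here */
}

static void mmio_device_write(uint64_t addr, uint32_t len, uint64_t val)
{
	(void)addr; (void)len; (void)val;	/* real device model goes here */
}

static int handle_vcpu(int vcpu_fd, struct kvm_run *run)
{
	for (;;) {
		if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
			return -1;

		if (run->exit_reason != KVM_EXIT_MMIO)
			return 0;	/* other exit reasons elided */

		if (run->mmio.is_write) {
			uint64_t val = 0;

			/* Data was filled in by emulate_store() in the kernel */
			memcpy(&val, run->mmio.data, run->mmio.len);
			mmio_device_write(run->mmio.phys_addr, run->mmio.len, val);
		} else {
			uint64_t val = mmio_device_read(run->mmio.phys_addr,
							run->mmio.len);

			/* Picked up by kvm_riscv_vcpu_mmio_return() on re-entry */
			memcpy(run->mmio.data, &val, run->mmio.len);
		}
	}
}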
Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_host.h | 11 + arch/riscv/kvm/mmu.c | 7 + arch/riscv/kvm/vcpu_exit.c | 435 +++++++++++++++++++++++++++++- 3 files changed, 450 insertions(+), 3 deletions(-) diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 006785bd6474..82e568ae0260 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -54,6 +54,12 @@ struct kvm_arch { phys_addr_t pgd_phys; }; +struct kvm_mmio_decode { + unsigned long insn; + int len; + int shift; +}; + struct kvm_cpu_context { unsigned long zero; unsigned long ra; @@ -136,6 +142,9 @@ struct kvm_vcpu_arch { raw_spinlock_t irqs_lock; unsigned long irqs_pending; + /* MMIO instruction details */ + struct kvm_mmio_decode mmio_decode; + /* VCPU power-off state */ bool power_off; @@ -149,6 +158,8 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {} static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {} +int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva, + bool is_write); void kvm_riscv_stage2_flush_cache(struct kvm_vcpu *vcpu); int kvm_riscv_stage2_alloc_pgd(struct kvm *kvm); void kvm_riscv_stage2_free_pgd(struct kvm *kvm); diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index cead012a8399..963f3c373781 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -61,6 +61,13 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, return 0; } +int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva, + bool is_write) +{ + /* TODO: */ + return 0; +} + void kvm_riscv_stage2_flush_cache(struct kvm_vcpu *vcpu) { /* TODO: */ diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index e4d7c8f0807a..4dafefa59338 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -6,9 +6,370 @@ * Anup Patel */ +#include #include #include #include +#include + +#define INSN_MATCH_LB 0x3 +#define INSN_MASK_LB 0x707f +#define INSN_MATCH_LH 0x1003 +#define INSN_MASK_LH 0x707f +#define INSN_MATCH_LW 0x2003 +#define INSN_MASK_LW 0x707f +#define INSN_MATCH_LD 0x3003 +#define INSN_MASK_LD 0x707f +#define INSN_MATCH_LBU 0x4003 +#define INSN_MASK_LBU 0x707f +#define INSN_MATCH_LHU 0x5003 +#define INSN_MASK_LHU 0x707f +#define INSN_MATCH_LWU 0x6003 +#define INSN_MASK_LWU 0x707f +#define INSN_MATCH_SB 0x23 +#define INSN_MASK_SB 0x707f +#define INSN_MATCH_SH 0x1023 +#define INSN_MASK_SH 0x707f +#define INSN_MATCH_SW 0x2023 +#define INSN_MASK_SW 0x707f +#define INSN_MATCH_SD 0x3023 +#define INSN_MASK_SD 0x707f + +#define INSN_MATCH_C_LD 0x6000 +#define INSN_MASK_C_LD 0xe003 +#define INSN_MATCH_C_SD 0xe000 +#define INSN_MASK_C_SD 0xe003 +#define INSN_MATCH_C_LW 0x4000 +#define INSN_MASK_C_LW 0xe003 +#define INSN_MATCH_C_SW 0xc000 +#define INSN_MASK_C_SW 0xe003 +#define INSN_MATCH_C_LDSP 0x6002 +#define INSN_MASK_C_LDSP 0xe003 +#define INSN_MATCH_C_SDSP 0xe002 +#define INSN_MASK_C_SDSP 0xe003 +#define INSN_MATCH_C_LWSP 0x4002 +#define INSN_MASK_C_LWSP 0xe003 +#define INSN_MATCH_C_SWSP 0xc002 +#define INSN_MASK_C_SWSP 0xe003 + +#define INSN_LEN(insn) ((((insn) & 0x3) < 0x3) ? 
2 : 4) + +#ifdef CONFIG_64BIT +#define LOG_REGBYTES 3 +#else +#define LOG_REGBYTES 2 +#endif +#define REGBYTES (1 << LOG_REGBYTES) + +#define SH_RD 7 +#define SH_RS1 15 +#define SH_RS2 20 +#define SH_RS2C 2 + +#define RV_X(x, s, n) (((x) >> (s)) & ((1 << (n)) - 1)) +#define RVC_LW_IMM(x) ((RV_X(x, 6, 1) << 2) | \ + (RV_X(x, 10, 3) << 3) | \ + (RV_X(x, 5, 1) << 6)) +#define RVC_LD_IMM(x) ((RV_X(x, 10, 3) << 3) | \ + (RV_X(x, 5, 2) << 6)) +#define RVC_LWSP_IMM(x) ((RV_X(x, 4, 3) << 2) | \ + (RV_X(x, 12, 1) << 5) | \ + (RV_X(x, 2, 2) << 6)) +#define RVC_LDSP_IMM(x) ((RV_X(x, 5, 2) << 3) | \ + (RV_X(x, 12, 1) << 5) | \ + (RV_X(x, 2, 3) << 6)) +#define RVC_SWSP_IMM(x) ((RV_X(x, 9, 4) << 2) | \ + (RV_X(x, 7, 2) << 6)) +#define RVC_SDSP_IMM(x) ((RV_X(x, 10, 3) << 3) | \ + (RV_X(x, 7, 3) << 6)) +#define RVC_RS1S(insn) (8 + RV_X(insn, SH_RD, 3)) +#define RVC_RS2S(insn) (8 + RV_X(insn, SH_RS2C, 3)) +#define RVC_RS2(insn) RV_X(insn, SH_RS2C, 5) + +#define SHIFT_RIGHT(x, y) \ + ((y) < 0 ? ((x) << -(y)) : ((x) >> (y))) + +#define REG_MASK \ + ((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES)) + +#define REG_OFFSET(insn, pos) \ + (SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK) + +#define REG_PTR(insn, pos, regs) \ + (ulong *)((ulong)(regs) + REG_OFFSET(insn, pos)) + +#define GET_RM(insn) (((insn) >> 12) & 7) + +#define GET_RS1(insn, regs) (*REG_PTR(insn, SH_RS1, regs)) +#define GET_RS2(insn, regs) (*REG_PTR(insn, SH_RS2, regs)) +#define GET_RS1S(insn, regs) (*REG_PTR(RVC_RS1S(insn), 0, regs)) +#define GET_RS2S(insn, regs) (*REG_PTR(RVC_RS2S(insn), 0, regs)) +#define GET_RS2C(insn, regs) (*REG_PTR(insn, SH_RS2C, regs)) +#define GET_SP(regs) (*REG_PTR(2, 0, regs)) +#define SET_RD(insn, regs, val) (*REG_PTR(insn, SH_RD, regs) = (val)) +#define IMM_I(insn) ((s32)(insn) >> 20) +#define IMM_S(insn) (((s32)(insn) >> 25 << 5) | \ + (s32)(((insn) >> 7) & 0x1f)) +#define MASK_FUNCT3 0x7000 + +#define STR(x) XSTR(x) +#define XSTR(x) #x + +static ulong get_insn(struct kvm_vcpu *vcpu) +{ + ulong __sepc = vcpu->arch.guest_context.sepc; + ulong __hstatus, __sstatus, __vsstatus; +#ifdef CONFIG_RISCV_ISA_C + ulong rvc_mask = 3, tmp; +#endif + ulong flags, val; + + local_irq_save(flags); + + __vsstatus = csr_read(CSR_VSSTATUS); + __sstatus = csr_read(CSR_SSTATUS); + __hstatus = csr_read(CSR_HSTATUS); + + csr_write(CSR_VSSTATUS, __vsstatus | SR_MXR); + csr_write(CSR_SSTATUS, vcpu->arch.guest_context.sstatus | SR_MXR); + csr_write(CSR_HSTATUS, vcpu->arch.guest_context.hstatus | HSTATUS_SPRV); + +#ifndef CONFIG_RISCV_ISA_C + asm ("\n" +#ifdef CONFIG_64BIT + STR(LWU) " %[insn], (%[addr])\n" +#else + STR(LW) " %[insn], (%[addr])\n" +#endif + : [insn] "=&r" (val) : [addr] "r" (__sepc)); +#else + asm ("and %[tmp], %[addr], 2\n" + "bnez %[tmp], 1f\n" +#ifdef CONFIG_64BIT + STR(LWU) " %[insn], (%[addr])\n" +#else + STR(LW) " %[insn], (%[addr])\n" +#endif + "and %[tmp], %[insn], %[rvc_mask]\n" + "beq %[tmp], %[rvc_mask], 2f\n" + "sll %[insn], %[insn], %[xlen_minus_16]\n" + "srl %[insn], %[insn], %[xlen_minus_16]\n" + "j 2f\n" + "1:\n" + "lhu %[insn], (%[addr])\n" + "and %[tmp], %[insn], %[rvc_mask]\n" + "bne %[tmp], %[rvc_mask], 2f\n" + "lhu %[tmp], 2(%[addr])\n" + "sll %[tmp], %[tmp], 16\n" + "add %[insn], %[insn], %[tmp]\n" + "2:" + : [vsstatus] "+&r" (__vsstatus), [insn] "=&r" (val), + [tmp] "=&r" (tmp) + : [addr] "r" (__sepc), [rvc_mask] "r" (rvc_mask), + [xlen_minus_16] "i" (__riscv_xlen - 16)); +#endif + + csr_write(CSR_HSTATUS, __hstatus); + csr_write(CSR_SSTATUS, __sstatus); + csr_write(CSR_VSSTATUS, __vsstatus); + + 
local_irq_restore(flags); + + return val; +} + +static int emulate_load(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long fault_addr) +{ + int shift = 0, len = 0; + ulong insn = get_insn(vcpu); + + /* Decode length of MMIO and shift */ + if ((insn & INSN_MASK_LW) == INSN_MATCH_LW) { + len = 4; + shift = 8 * (sizeof(ulong) - len); + } else if ((insn & INSN_MASK_LB) == INSN_MATCH_LB) { + len = 1; + shift = 8 * (sizeof(ulong) - len); + } else if ((insn & INSN_MASK_LBU) == INSN_MATCH_LBU) { + len = 1; + shift = 8 * (sizeof(ulong) - len); +#ifdef CONFIG_64BIT + } else if ((insn & INSN_MASK_LD) == INSN_MATCH_LD) { + len = 8; + shift = 8 * (sizeof(ulong) - len); + } else if ((insn & INSN_MASK_LWU) == INSN_MATCH_LWU) { + len = 4; +#endif + } else if ((insn & INSN_MASK_LH) == INSN_MATCH_LH) { + len = 2; + shift = 8 * (sizeof(ulong) - len); + } else if ((insn & INSN_MASK_LHU) == INSN_MATCH_LHU) { + len = 2; +#ifdef CONFIG_RISCV_ISA_C +#ifdef CONFIG_64BIT + } else if ((insn & INSN_MASK_C_LD) == INSN_MATCH_C_LD) { + len = 8; + shift = 8 * (sizeof(ulong) - len); + insn = RVC_RS2S(insn) << SH_RD; + } else if ((insn & INSN_MASK_C_LDSP) == INSN_MATCH_C_LDSP && + ((insn >> SH_RD) & 0x1f)) { + len = 8; + shift = 8 * (sizeof(ulong) - len); +#endif + } else if ((insn & INSN_MASK_C_LW) == INSN_MATCH_C_LW) { + len = 4; + shift = 8 * (sizeof(ulong) - len); + insn = RVC_RS2S(insn) << SH_RD; + } else if ((insn & INSN_MASK_C_LWSP) == INSN_MATCH_C_LWSP && + ((insn >> SH_RD) & 0x1f)) { + len = 4; + shift = 8 * (sizeof(ulong) - len); +#endif + } else { + return -ENOTSUPP; + } + + /* Fault address should be aligned to length of MMIO */ + if (fault_addr & (len - 1)) + return -EIO; + + /* Save instruction decode info */ + vcpu->arch.mmio_decode.insn = insn; + vcpu->arch.mmio_decode.shift = shift; + vcpu->arch.mmio_decode.len = len; + + /* Exit to userspace for MMIO emulation */ + vcpu->stat.mmio_exit_user++; + run->exit_reason = KVM_EXIT_MMIO; + run->mmio.is_write = false; + run->mmio.phys_addr = fault_addr; + run->mmio.len = len; + + /* Move to next instruction */ + vcpu->arch.guest_context.sepc += INSN_LEN(insn); + + return 0; +} + +static int emulate_store(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long fault_addr) +{ + u8 data8; + u16 data16; + u32 data32; + u64 data64; + ulong data; + int len = 0; + ulong insn = get_insn(vcpu); + + data = GET_RS2(insn, &vcpu->arch.guest_context); + data8 = data16 = data32 = data64 = data; + + if ((insn & INSN_MASK_SW) == INSN_MATCH_SW) { + len = 4; + } else if ((insn & INSN_MASK_SB) == INSN_MATCH_SB) { + len = 1; +#ifdef CONFIG_64BIT + } else if ((insn & INSN_MASK_SD) == INSN_MATCH_SD) { + len = 8; +#endif + } else if ((insn & INSN_MASK_SH) == INSN_MATCH_SH) { + len = 2; +#ifdef CONFIG_RISCV_ISA_C +#ifdef CONFIG_64BIT + } else if ((insn & INSN_MASK_C_SD) == INSN_MATCH_C_SD) { + len = 8; + data64 = GET_RS2S(insn, &vcpu->arch.guest_context); + } else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP && + ((insn >> SH_RD) & 0x1f)) { + len = 8; + data64 = GET_RS2C(insn, &vcpu->arch.guest_context); +#endif + } else if ((insn & INSN_MASK_C_SW) == INSN_MATCH_C_SW) { + len = 4; + data32 = GET_RS2S(insn, &vcpu->arch.guest_context); + } else if ((insn & INSN_MASK_C_SWSP) == INSN_MATCH_C_SWSP && + ((insn >> SH_RD) & 0x1f)) { + len = 4; + data32 = GET_RS2C(insn, &vcpu->arch.guest_context); +#endif + } else { + return -ENOTSUPP; + } + + /* Fault address should be aligned to length of MMIO */ + if (fault_addr & (len - 1)) + return -EIO; + + /* Clear instruction decode info 
*/ + vcpu->arch.mmio_decode.insn = 0; + vcpu->arch.mmio_decode.shift = 0; + vcpu->arch.mmio_decode.len = 0; + + /* Copy data to kvm_run instance */ + switch (len) { + case 1: + *((u8 *)run->mmio.data) = data8; + break; + case 2: + *((u16 *)run->mmio.data) = data16; + break; + case 4: + *((u32 *)run->mmio.data) = data32; + break; + case 8: + *((u64 *)run->mmio.data) = data64; + break; + default: + return -ENOTSUPP; + }; + + /* Exit to userspace for MMIO emulation */ + vcpu->stat.mmio_exit_user++; + run->exit_reason = KVM_EXIT_MMIO; + run->mmio.is_write = true; + run->mmio.phys_addr = fault_addr; + run->mmio.len = len; + + /* Move to next instruction */ + vcpu->arch.guest_context.sepc += INSN_LEN(insn); + + return 0; +} + +static int stage2_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long scause, unsigned long stval) +{ + struct kvm_memory_slot *memslot; + unsigned long hva; + bool writable; + gfn_t gfn; + int ret; + + gfn = stval >> PAGE_SHIFT; + memslot = gfn_to_memslot(vcpu->kvm, gfn); + hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable); + + if (kvm_is_error_hva(hva) || + (scause == EXC_STORE_PAGE_FAULT && !writable)) { + switch (scause) { + case EXC_LOAD_PAGE_FAULT: + return emulate_load(vcpu, run, stval); + case EXC_STORE_PAGE_FAULT: + return emulate_store(vcpu, run, stval); + default: + return -ENOTSUPP; + }; + } + + ret = kvm_riscv_stage2_map(vcpu, stval, hva, + (scause == EXC_STORE_PAGE_FAULT) ? true : false); + if (ret < 0) + return ret; + + return 1; +} /** * kvm_riscv_vcpu_mmio_return -- Handle MMIO loads after user space emulation @@ -19,7 +380,44 @@ */ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run) { - /* TODO: */ + u8 data8; + u16 data16; + u32 data32; + u64 data64; + ulong insn; + int len, shift; + + if (run->mmio.is_write) + return 0; + + insn = vcpu->arch.mmio_decode.insn; + len = vcpu->arch.mmio_decode.len; + shift = vcpu->arch.mmio_decode.shift; + switch (len) { + case 1: + data8 = *((u8 *)run->mmio.data); + SET_RD(insn, &vcpu->arch.guest_context, + (ulong)data8 << shift >> shift); + break; + case 2: + data16 = *((u16 *)run->mmio.data); + SET_RD(insn, &vcpu->arch.guest_context, + (ulong)data16 << shift >> shift); + break; + case 4: + data32 = *((u32 *)run->mmio.data); + SET_RD(insn, &vcpu->arch.guest_context, + (ulong)data32 << shift >> shift); + break; + case 8: + data64 = *((u64 *)run->mmio.data); + SET_RD(insn, &vcpu->arch.guest_context, + (ulong)data64 << shift >> shift); + break; + default: + return -ENOTSUPP; + }; + return 0; } @@ -30,6 +428,37 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run) int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, unsigned long scause, unsigned long stval) { - /* TODO: */ - return 0; + int ret; + + /* If we got host interrupt then do nothing */ + if (scause & SCAUSE_IRQ_FLAG) + return 1; + + /* Handle guest traps */ + ret = -EFAULT; + run->exit_reason = KVM_EXIT_UNKNOWN; + switch (scause) { + case EXC_INST_PAGE_FAULT: + case EXC_LOAD_PAGE_FAULT: + case EXC_STORE_PAGE_FAULT: + if ((vcpu->arch.guest_context.hstatus & HSTATUS_SPV) && + (vcpu->arch.guest_context.hstatus & HSTATUS_STL)) + ret = stage2_page_fault(vcpu, run, scause, stval); + break; + default: + break; + }; + + /* Print details in-case of error */ + if (ret < 0) { + kvm_err("VCPU exit error %d\n", ret); + kvm_err("SEPC=0x%lx SSTATUS=0x%lx HSTATUS=0x%lx\n", + vcpu->arch.guest_context.sepc, + vcpu->arch.guest_context.sstatus, + vcpu->arch.guest_context.hstatus); + 
kvm_err("SCAUSE=0x%lx STVAL=0x%lx\n", + scause, stval); + } + + return ret; } From patchwork Mon Jul 29 11:57:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 11063665 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8BFCD14E5 for ; Mon, 29 Jul 2019 11:57:34 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7AF62200E5 for ; Mon, 29 Jul 2019 11:57:34 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 6F51D212DA; Mon, 29 Jul 2019 11:57:34 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 81719200E5 for ; Mon, 29 Jul 2019 11:57:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:In-Reply-To:References: Message-ID:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=wSW9P9OFbKPBZ458dpC4BkSNr6FmG2n3DZ7QzOoMswk=; b=M2Cao6IK3AAUf1 uTOh4W5Frlwj9e7R6ujz7zLaIavZLKTiBR7H8IRYWHDl028CQjcpgWtyl73WTpe4a6ashNSUR+Xl4 SStRqcgIJAY+JZAdM/wdGFqouAwW3X1QERknAXYWILVIlD/wlbL/eCj6DrlD8Yryhgda8VuEkD/bb TgjTG9H63dRVFOjrokWcAswxaKxMHJnCEhTBbtQ5z4/Vds79yHBZQathipXuV4cQNJO3UO3BzwHx6 lFzequJaFRvzI0jjh/NzhtY9vIQ+goeL8MQ5NimtKMMEIPxYShx3h0i55M2INzUs2v5HOvbZA8ICg 9hBNmlxrbJQflJkhoaew==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hs4HR-0002p9-D2; Mon, 29 Jul 2019 11:57:29 +0000 Received: from esa2.hgst.iphmx.com ([68.232.143.124]) by bombadil.infradead.org with esmtps (Exim 4.92 #3 (Red Hat Linux)) id 1hs4HN-0002jS-EI for linux-riscv@lists.infradead.org; Mon, 29 Jul 2019 11:57:26 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1564401485; x=1595937485; h=from:to:cc:subject:date:message-id:references: in-reply-to:content-transfer-encoding:mime-version; bh=GsHF4gHHIbU3ilO1vh5sF7BYtbgWJM7ACKryGsSPY9I=; b=mm+MDu5yKHaEM4p3ug5C2nMf304s+qVbaz5/ZkH0HxNWEi53dMLdknPl xPXFkqb5u3GhrRw0Zy2+/5zXFUfPSihoWyoei+ye1Awmo3Oj/5NOIKVnL T9Ho5ND+/zv9p/iD132Xv3LQpbJk39au9tT/VUUOr28IsTvwSeP3JBqXn gEVEn8CA+P1OXV4NzTvg46Pfg/W9NvZSIsT1NPG4g2HoiMdmqUAwT7T6U 51Tjtig01PbAT4ausxUi4l7I9flVb3GTCEAm/oXQhpKHqqU5wYF5F+J2q LBjbNjcPmXoS45bCfbcB4ToO4mSN/BN9ERne5FiqRxTwRiOe4w1WEINGD Q==; IronPort-SDR: F6ImFoCDhuGj8zTlhJfMRYbcmd+4jksiH829yljufbK8yx63ZGQmvilM78ue8fhN0tKODGXxXL WbEDTORd47btTC3y56vkg/SHgr+XVijPx1t2+baUS7hOBQiEQiLMiZp+9w9zC6UUc5TUj8dq7Z cSR7m4CFhw7Ja3+ysyKmIkeaviZhaHjvJkr77UGerQzP/BZWBl6HDeU566Ga/dclp+khYMisfn V/nvttyuf5hyld2g13fso/+NnHcuf2Qba0lRI+pc3wDzvFzj+O5DutCGA1Zv9+lViWSRzosX58 7iE= X-IronPort-AV: 
E=Sophos;i="5.64,322,1559491200"; d="scan'208";a="214553109" Received: from mail-sn1nam04lp2054.outbound.protection.outlook.com (HELO NAM04-SN1-obe.outbound.protection.outlook.com) ([104.47.44.54]) by ob1.hgst.iphmx.com with ESMTP; 29 Jul 2019 19:57:54 +0800 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=odiEZFkfbE62b8uPeGwAfqzz8bDC1mjc6WjvV0/5N7wPgqMWmWp7968CV2Qfw7I2JuPajDxMQM4cAAajNkUuXib3oJuSMTlmmQtrULMWcTIS6Q4Ket/H5kXCuBudSagsdtXo6AjmxuO2rhMBBAwj+G5h2o20Knqe0RasNRAiViCxG4YBHiAIHiycW4PLgNfX+H4vTwz0OATMs4ovkVwjwD8om75793eSMuIa/iBylPdaiiaa8tSOT0fwhJDw7AAiS3YwJonI6KHSJKc1PreKiyXgNAfMy24RTTt/IZblViu27TqUdjykxB7642dZDAMxxNpdpljxAC2jImWxBjj16g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=a2IZcUTSXw05WdZG7vyD2MEZ25YP0LbtIOSpJsTLdtg=; b=AYT1CLN8cjAn509hRFNn5LH9XLYyrZJ8Xw8yR7Rf2HcfdKOqxMJyn8xGuvJl+3WtErj9a3lrA8UnHH+SRawIgjSek1tsQB2YwVArgoORhq6ZZd4Qx6sMQMBy6I4w37SFcBG2O94TKDdxqHJyGO3d6GLK3fKR65Itt5PHadbFMWFAfG8sgsorUsYP/0AdXExe3E4LWC/Q6f1f5FZodFtNnGAf/XRKgk+G7a3bOfsqSWKXnywSPPz7uz8y6Uh+UHEUnhoOfpfHIbRovBdAMEjPmWGUF4lHQu3Q4awzv/Tw24JPKXdFGEmgr2DPAEwDMGgM95E8Nwt+jao9Bb0KbqbI0A== ARC-Authentication-Results: i=1; mx.microsoft.com 1;spf=pass smtp.mailfrom=wdc.com;dmarc=pass action=none header.from=wdc.com;dkim=pass header.d=wdc.com;arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=a2IZcUTSXw05WdZG7vyD2MEZ25YP0LbtIOSpJsTLdtg=; b=RF3IBKY8wVs9lTF78eZVPbyEbaaAka759BjvEREaC2lQ5TCrBFCj1k9MU/h9KKrwZHMEPjeziW9D0C44N4+bkYXVlI17WNbppBRbpIN8EzSquAFgOb/eQKZMmexsMnlG0kN3r46b+g68i6DfoxNv89bikHlwaw3E3M0DPlreJJs= Received: from MN2PR04MB6061.namprd04.prod.outlook.com (20.178.246.15) by MN2PR04MB5952.namprd04.prod.outlook.com (20.179.21.143) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2115.15; Mon, 29 Jul 2019 11:57:17 +0000 Received: from MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8]) by MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8%7]) with mapi id 15.20.2115.005; Mon, 29 Jul 2019 11:57:17 +0000 From: Anup Patel To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K Subject: [RFC PATCH 09/16] RISC-V: KVM: Handle WFI exits for VCPU Thread-Topic: [RFC PATCH 09/16] RISC-V: KVM: Handle WFI exits for VCPU Thread-Index: AQHVRgTELJpWYbc670q25LJ6UKFygQ== Date: Mon, 29 Jul 2019 11:57:17 +0000 Message-ID: <20190729115544.17895-10-anup.patel@wdc.com> References: <20190729115544.17895-1-anup.patel@wdc.com> In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-clientproxiedby: PN1PR01CA0116.INDPRD01.PROD.OUTLOOK.COM (2603:1096:c00::32) To MN2PR04MB6061.namprd04.prod.outlook.com (2603:10b6:208:d8::15) authentication-results: spf=none (sender IP is ) smtp.mailfrom=Anup.Patel@wdc.com; x-ms-exchange-messagesentrepresentingtype: 1 x-mailer: git-send-email 2.17.1 x-originating-ip: [106.51.23.101] x-ms-publictraffictype: Email x-ms-office365-filtering-correlation-id: 9f7472b8-62fe-49a5-75aa-08d7141be738 x-ms-office365-filtering-ht: Tenant x-microsoft-antispam: BCL:0; PCL:0; 
We get an illegal instruction trap whenever the Guest/VM executes the WFI instruction. This patch handles the WFI trap by blocking the trapped VCPU using the kvm_vcpu_block() API. The blocked VCPU is automatically resumed whenever a VCPU interrupt is injected from user-space or by in-kernel IRQCHIP emulation.
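As a quick sanity check on the decode constants introduced below (illustration only, not part of the patch): the canonical WFI encoding is 0x10500073, which satisfies the INSN_MASK_WFI/INSN_MATCH_WFI test, and its opcode bits [6:2] are 28, which is why slot 28 of the illegal-instruction dispatch table points at system_opcode_insn().

#include <assert.h>
#include <stdio.h>

#define INSN_MASK_WFI	0xffffff00
#define INSN_MATCH_WFI	0x10500000

int main(void)
{
	/* wfi == 0001000 00101 00000 000 00000 1110011 */
	unsigned long wfi = 0x10500073;

	/* The check used by system_opcode_insn() */
	assert((wfi & INSN_MASK_WFI) == INSN_MATCH_WFI);

	/* Opcode bits [6:2] index illegal_insn_table[]: SYSTEM is slot 28 */
	assert(((wfi & 0x7c) >> 2) == 28);

	printf("WFI decode constants are consistent\n");
	return 0;
}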
Signed-off-by: Anup Patel --- arch/riscv/kvm/vcpu_exit.c | 86 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 86 insertions(+) diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index 4dafefa59338..2d09640c98b2 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -12,6 +12,9 @@ #include #include +#define INSN_MASK_WFI 0xffffff00 +#define INSN_MATCH_WFI 0x10500000 + #define INSN_MATCH_LB 0x3 #define INSN_MASK_LB 0x707f #define INSN_MATCH_LH 0x1003 @@ -178,6 +181,85 @@ static ulong get_insn(struct kvm_vcpu *vcpu) return val; } +typedef int (*illegal_insn_func)(struct kvm_vcpu *vcpu, + struct kvm_run *run, + ulong insn); + +static int truly_illegal_insn(struct kvm_vcpu *vcpu, + struct kvm_run *run, + ulong insn) +{ + /* TODO: Redirect trap to Guest VCPU */ + return -ENOTSUPP; +} + +static int system_opcode_insn(struct kvm_vcpu *vcpu, + struct kvm_run *run, + ulong insn) +{ + if ((insn & INSN_MASK_WFI) == INSN_MATCH_WFI) { + vcpu->stat.wfi_exit_stat++; + if (!kvm_riscv_vcpu_has_interrupt(vcpu)) { + kvm_vcpu_block(vcpu); + kvm_clear_request(KVM_REQ_UNHALT, vcpu); + } + vcpu->arch.guest_context.sepc += INSN_LEN(insn); + return 1; + } + + return truly_illegal_insn(vcpu, run, insn); +} + +static illegal_insn_func illegal_insn_table[32] = { + truly_illegal_insn, /* 0 */ + truly_illegal_insn, /* 1 */ + truly_illegal_insn, /* 2 */ + truly_illegal_insn, /* 3 */ + truly_illegal_insn, /* 4 */ + truly_illegal_insn, /* 5 */ + truly_illegal_insn, /* 6 */ + truly_illegal_insn, /* 7 */ + truly_illegal_insn, /* 8 */ + truly_illegal_insn, /* 9 */ + truly_illegal_insn, /* 10 */ + truly_illegal_insn, /* 11 */ + truly_illegal_insn, /* 12 */ + truly_illegal_insn, /* 13 */ + truly_illegal_insn, /* 14 */ + truly_illegal_insn, /* 15 */ + truly_illegal_insn, /* 16 */ + truly_illegal_insn, /* 17 */ + truly_illegal_insn, /* 18 */ + truly_illegal_insn, /* 19 */ + truly_illegal_insn, /* 20 */ + truly_illegal_insn, /* 21 */ + truly_illegal_insn, /* 22 */ + truly_illegal_insn, /* 23 */ + truly_illegal_insn, /* 24 */ + truly_illegal_insn, /* 25 */ + truly_illegal_insn, /* 26 */ + truly_illegal_insn, /* 27 */ + system_opcode_insn, /* 28 */ + truly_illegal_insn, /* 29 */ + truly_illegal_insn, /* 30 */ + truly_illegal_insn /* 31 */ +}; + +static int illegal_inst_fault(struct kvm_vcpu *vcpu, struct kvm_run *run, + unsigned long stval) +{ + ulong insn = stval; + + if (unlikely((insn & 3) != 3)) { + if (insn == 0) + insn = get_insn(vcpu); + if ((insn & 3) != 3) + return truly_illegal_insn(vcpu, run, insn); + } + + return illegal_insn_table[(insn & 0x7c) >> 2](vcpu, run, insn); +} + static int emulate_load(struct kvm_vcpu *vcpu, struct kvm_run *run, unsigned long fault_addr) { @@ -438,6 +520,10 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, ret = -EFAULT; run->exit_reason = KVM_EXIT_UNKNOWN; switch (scause) { + case EXC_INST_ILLEGAL: + if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) + ret = illegal_inst_fault(vcpu, run, stval); + break; case EXC_INST_PAGE_FAULT: case EXC_LOAD_PAGE_FAULT: case EXC_STORE_PAGE_FAULT: From patchwork Mon Jul 29 11:57:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 11063667 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D2C0B174A for ; Mon, 29 Jul 2019 11:57:34 +0000 (UTC) Received: from 
From: Anup Patel
To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K
Subject: [RFC PATCH 10/16] RISC-V: KVM: Implement VMID allocator
Date: Mon, 29 Jul 2019 11:57:23 +0000
Message-ID: <20190729115544.17895-11-anup.patel@wdc.com>
In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com>
We implement a simple VMID allocator for Guests/VMs which: 1. Detects the number of VMID bits at boot-time 2. Uses an atomic counter to track the VMID version and increments it whenever we run out of VMIDs 3. Flushes Guest TLBs on all host CPUs whenever we run out of VMIDs 4.
Force updates HW Stage2 VMID for each Guest VCPU whenever VMID changes using VCPU request KVM_REQ_UPDATE_PGTBL Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_host.h | 21 +++++ arch/riscv/kvm/Makefile | 3 +- arch/riscv/kvm/main.c | 4 + arch/riscv/kvm/tlb.S | 42 ++++++++++ arch/riscv/kvm/vcpu.c | 6 ++ arch/riscv/kvm/vm.c | 6 ++ arch/riscv/kvm/vmid.c | 130 ++++++++++++++++++++++++++++++ 7 files changed, 211 insertions(+), 1 deletion(-) create mode 100644 arch/riscv/kvm/tlb.S create mode 100644 arch/riscv/kvm/vmid.c diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 82e568ae0260..dcc31f9ca13d 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -28,6 +28,7 @@ KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP) #define KVM_REQ_IRQ_PENDING KVM_ARCH_REQ(1) #define KVM_REQ_VCPU_RESET KVM_ARCH_REQ(2) +#define KVM_REQ_UPDATE_PGTBL KVM_ARCH_REQ(3) struct kvm_vm_stat { ulong remote_tlb_flush; @@ -48,7 +49,15 @@ struct kvm_vcpu_stat { struct kvm_arch_memory_slot { }; +struct kvm_vmid { + unsigned long vmid_version; + unsigned long vmid; +}; + struct kvm_arch { + /* stage2 vmid */ + struct kvm_vmid vmid; + /* stage2 page table */ pgd_t *pgd; phys_addr_t pgd_phys; @@ -158,6 +167,12 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {} static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {} +extern void __kvm_riscv_hfence_gvma_vmid_gpa(unsigned long vmid, + unsigned long gpa); +extern void __kvm_riscv_hfence_gvma_vmid(unsigned long vmid); +extern void __kvm_riscv_hfence_gvma_gpa(unsigned long gpa); +extern void __kvm_riscv_hfence_gvma_all(void); + int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva, bool is_write); void kvm_riscv_stage2_flush_cache(struct kvm_vcpu *vcpu); @@ -165,6 +180,12 @@ int kvm_riscv_stage2_alloc_pgd(struct kvm *kvm); void kvm_riscv_stage2_free_pgd(struct kvm *kvm); void kvm_riscv_stage2_update_pgtbl(struct kvm_vcpu *vcpu); +void kvm_riscv_stage2_vmid_detect(void); +unsigned long kvm_riscv_stage2_vmid_bits(void); +int kvm_riscv_stage2_vmid_init(struct kvm *kvm); +bool kvm_riscv_stage2_vmid_ver_changed(struct kvm_vmid *vmid); +void kvm_riscv_stage2_vmid_update(struct kvm_vcpu *vcpu); + int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run); int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, unsigned long scause, unsigned long stval); diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 845579273727..c0f57f26c13d 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -8,6 +8,7 @@ ccflags-y := -Ivirt/kvm -Iarch/riscv/kvm kvm-objs := $(common-objs-y) -kvm-objs += main.o vm.o mmu.o vcpu.o vcpu_exit.o vcpu_switch.o +kvm-objs += main.o vm.o vmid.o tlb.o mmu.o +kvm-objs += vcpu.o vcpu_exit.o vcpu_switch.o obj-$(CONFIG_KVM) += kvm.o diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c index 8cac0571a264..c029686100e4 100644 --- a/arch/riscv/kvm/main.c +++ b/arch/riscv/kvm/main.c @@ -44,8 +44,12 @@ int kvm_arch_init(void *opaque) return -ENODEV; } + kvm_riscv_stage2_vmid_detect(); + kvm_info("hypervisor extension available\n"); + kvm_info("host has %ld VMID bits\n", kvm_riscv_stage2_vmid_bits()); + return 0; } diff --git a/arch/riscv/kvm/tlb.S b/arch/riscv/kvm/tlb.S new file mode 100644 index 000000000000..13740d8020f5 --- /dev/null +++ b/arch/riscv/kvm/tlb.S @@ -0,0 +1,42 @@ +/* 
SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * + * Authors: + * Anup Patel + */ + +#include +#include + + .text + .altmacro + + /* + * Instruction encoding of hfence.gvma is: + * 0110001 rs2(5) rs1(5) 000 00000 1110011 + */ + +ENTRY(__kvm_riscv_hfence_gvma_vmid_gpa) + /* hfence.gvma a1, a0 */ + .word 0x62a60073 + ret +ENDPROC(__kvm_riscv_hfence_gvma_vmid_gpa) + +ENTRY(__kvm_riscv_hfence_gvma_vmid) + /* hfence.gvma zero, a0 */ + .word 0x62a00073 + ret +ENDPROC(__kvm_riscv_hfence_gvma_vmid) + +ENTRY(__kvm_riscv_hfence_gvma_gpa) + /* hfence.gvma a0 */ + .word 0x62050073 + ret +ENDPROC(__kvm_riscv_hfence_gvma_gpa) + +ENTRY(__kvm_riscv_hfence_gvma_all) + /* hfence.gvma */ + .word 0x62000073 + ret +ENDPROC(__kvm_riscv_hfence_gvma_all) diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 4ab9f803536e..f3b0cadc1973 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -607,6 +607,9 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu) if (kvm_check_request(KVM_REQ_VCPU_RESET, vcpu)) kvm_riscv_reset_vcpu(vcpu); + if (kvm_check_request(KVM_REQ_UPDATE_PGTBL, vcpu)) + kvm_riscv_stage2_update_pgtbl(vcpu); + /* * Clear IRQ_PENDING requests that were made to guarantee * that a VCPU sees new virtual interrupts. @@ -643,6 +646,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) /* Check conditions before entering the guest */ cond_resched(); + kvm_riscv_stage2_vmid_update(vcpu); + kvm_riscv_check_vcpu_requests(vcpu); preempt_disable(); @@ -673,6 +678,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) smp_store_mb(vcpu->mode, IN_GUEST_MODE); if (ret <= 0 || + kvm_riscv_stage2_vmid_ver_changed(&vcpu->kvm->arch.vmid) || kvm_request_pending(vcpu)) { vcpu->mode = OUTSIDE_GUEST_MODE; local_irq_enable(); diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c index 66904def2f93..4bc97ebc4b6e 100644 --- a/arch/riscv/kvm/vm.c +++ b/arch/riscv/kvm/vm.c @@ -26,6 +26,12 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) if (r) return r; + r = kvm_riscv_stage2_vmid_init(kvm); + if (r) { + kvm_riscv_stage2_free_pgd(kvm); + return r; + } + return 0; } diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c new file mode 100644 index 000000000000..a2b026fad1bd --- /dev/null +++ b/arch/riscv/kvm/vmid.c @@ -0,0 +1,130 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. 
+ * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include +#include +#include +#include + +static atomic_long_t vmid_version = ATOMIC_LONG_INIT(1); +static unsigned long vmid_next; +static unsigned long vmid_bits; +static DEFINE_SPINLOCK(vmid_lock); + +void kvm_riscv_stage2_vmid_detect(void) +{ + unsigned long old; + + /* Figure-out number of VMID bits in HW */ + old = csr_read(CSR_HGATP); + csr_write(CSR_HGATP, old | HGATP_VMID_MASK); + vmid_bits = csr_read(CSR_HGATP); + vmid_bits = (vmid_bits & HGATP_VMID_MASK) >> HGATP_VMID_SHIFT; + vmid_bits = fls_long(vmid_bits); + csr_write(CSR_HGATP, old); + + /* We polluted local TLB so flush all guest TLB */ + __kvm_riscv_hfence_gvma_all(); + + /* We don't use VMID bits if they are not sufficient */ + if ((1UL << vmid_bits) < num_possible_cpus()) + vmid_bits = 0; +} + +unsigned long kvm_riscv_stage2_vmid_bits(void) +{ + return vmid_bits; +} + +int kvm_riscv_stage2_vmid_init(struct kvm *kvm) +{ + /* Mark the initial VMID and VMID version invalid */ + kvm->arch.vmid.vmid_version = 0; + kvm->arch.vmid.vmid = 0; + + return 0; +} + +static void local_guest_tlb_flush(void *info) +{ + __kvm_riscv_hfence_gvma_all(); +} + +static void force_exit_and_guest_tlb_flush(const cpumask_t *mask) +{ + preempt_disable(); + smp_call_function_many(mask, local_guest_tlb_flush, NULL, true); + preempt_enable(); +} + +bool kvm_riscv_stage2_vmid_ver_changed(struct kvm_vmid *vmid) +{ + ulong cur_vmid_version; + + if (!vmid_bits) + return false; + + cur_vmid_version = atomic_long_read(&vmid_version); + + /* Ensure atomic read to VMID version is completed */ + smp_rmb(); + + return unlikely(READ_ONCE(vmid->vmid_version) != cur_vmid_version); +} + +void kvm_riscv_stage2_vmid_update(struct kvm_vcpu *vcpu) +{ + int i; + struct kvm_vcpu *v; + struct kvm_vmid *vmid = &vcpu->kvm->arch.vmid; + + if (!kvm_riscv_stage2_vmid_ver_changed(vmid)) + return; + + spin_lock(&vmid_lock); + + /* + * We need to re-check the vmid_version here to ensure that if + * another vcpu already allocated a valid vmid for this vm. + */ + if (!kvm_riscv_stage2_vmid_ver_changed(vmid)) { + spin_unlock(&vmid_lock); + return; + } + + /* First user of a new VMID version? */ + if (unlikely(vmid_next == 0)) { + atomic_long_inc(&vmid_version); + vmid_next = 1; + + /* + * On SMP we know no other CPUs can use this CPU's or + * each other's VMID after forced exit returns since the + * vmid_lock blocks them from re-entry to the guest. 
+ */ + force_exit_and_guest_tlb_flush(cpu_all_mask); + } + + vmid->vmid = vmid_next; + vmid_next++; + vmid_next &= (1 << vmid_bits) - 1; + + /* Ensure VMID next update is completed */ + smp_wmb(); + + WRITE_ONCE(vmid->vmid_version, atomic_long_read(&vmid_version)); + + spin_unlock(&vmid_lock); + + /* Request stage2 page table update for all VCPUs */ + kvm_for_each_vcpu(i, v, vcpu->kvm) + kvm_make_request(KVM_REQ_UPDATE_PGTBL, v); +} From patchwork Mon Jul 29 11:57:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 11063671 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 753D213A4 for ; Mon, 29 Jul 2019 11:57:41 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6134D200E5 for ; Mon, 29 Jul 2019 11:57:41 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 5451428405; Mon, 29 Jul 2019 11:57:41 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 19C2220408 for ; Mon, 29 Jul 2019 11:57:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:In-Reply-To:References: Message-ID:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=L9U/h6kEYF4bjC5vf+7f83GbsCas1BMnpyz0cyZgzxs=; b=fWbrrXElJjXETz KKgZPjBBA3ISV3Mk2JuGKGcoUSk8Gfr0ZYSJotFJJ5AJJkE7naZJxUAdwa9u5C/aOP60QRX7l3Rrk m9wW/f9uHmmBVqL339WPp5gz91pJkBgHbm0B03LyE8/IwyAG/gCJIbOydHuH1+4Lr6zDtoZyCtuai LhCySU4VuYCLXqSVYQKM8Fdfd90xV3BwMqMmuYWWVJiz6PD7qvT2qvqDefehByfRyk1ONFd/FsYza 2Y1MT6DFw94KbKutqSCt77qRTSCUJeaJGp+lYah+MsxLnXK50vLdW7cdcPTa92wf6mie7/qtWnXur ckgdIrPqSa/H7idIiPqA==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hs4HY-0002yp-Ss; Mon, 29 Jul 2019 11:57:36 +0000 Received: from esa2.hgst.iphmx.com ([68.232.143.124]) by bombadil.infradead.org with esmtps (Exim 4.92 #3 (Red Hat Linux)) id 1hs4HU-0002vU-JB for linux-riscv@lists.infradead.org; Mon, 29 Jul 2019 11:57:34 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1564401496; x=1595937496; h=from:to:cc:subject:date:message-id:references: in-reply-to:content-transfer-encoding:mime-version; bh=9TtH9E8Bk3v3JNyHLxqJzJnl52zb+9YRDc0iPmjrjew=; b=SKzUfMHhIdCFapfg4RyoKb6h4qORIkUYf418W4a0P1YggEVzMneyNVFI m51Dgh6aBR8ynDatP3kC5a5FYQ44ZuKTHwxoAbe3KpQyeJrgw9kaXWyfo +0Yy9cW59c+fDL8ni1kbrpXMlQkWx+B50VAivJ4m9Bh9nloiZCH2jep/+ PoQ0XtUvp22kn7+fM0podi7IxZeE02BNNwpXnlpERVvifAeKQntEmJMbB SuALP0dss69R8zPeWu3qI03+dVKt4dq4R3zHLC8Qm88/XnuxjA0qm3zGV 
y733PVsdQBocTmW/d2xPZ33fJ2dOQmTpOB1Y6bvsHAj9scnOnEqavRq7t Q==; IronPort-SDR: qE5oVM3M3p5SMqCPYnLy/jgGdvuA9I+1OxXWR86/lQbIAl01wwbmtMSbi8QeEza8GPmksbc/Ac //JHb9LM6JI5TJGGBlz6Te9BfQuXgXPJP9pKB04mZ6H4xp0fGoRYD1iEXyHKAwyWg8l0HNssBq PnwcQB652gv+FVXVHpcIBzLelq6i2UMwIaz+jiA9fQG/krzRkfSZAlqTgC0n5+XdvbuvYM+Zx9 3Zl2rTGskzZl+cng2AExJ1TtTgme0/paQho2+h4C4NIIPfhjv+/VmsUSOOKKqy+oUrz++Q2Vsl Ku4= X-IronPort-AV: E=Sophos;i="5.64,322,1559491200"; d="scan'208";a="214553150" Received: from mail-by2nam01lp2051.outbound.protection.outlook.com (HELO NAM01-BY2-obe.outbound.protection.outlook.com) ([104.47.34.51]) by ob1.hgst.iphmx.com with ESMTP; 29 Jul 2019 19:58:14 +0800 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=eX4ll9ZO/nX3lNwnt+A8xS+gSZJQsPDVY/ckVl5s2twZTsvLsWGwFC8C35N00/RJf4yFrR7mNoxe/rAaP59AHCQp4pjuhuMycPHCXQr6Mwl9MOK7Os9udUcv47+RvECgDIipKYF/MxF5IJp1ffT6GcqqUMJvzS0gfHxjjASE6hOZaP7xnkZO+dzDsBGRF3x+iLeZkVpBxdYNNak6qhkJHZ0s3F64sRuuZInaO0b4AovCzk4ZOsoLuR5lJSN7mGZciDLUTVSjtg8bKTqn/FGbxDwpIOanOt0Nqn43cwoVM/2QuE0PDEMyqrxW09rg1OuFk97cVLCgiP1EKCA4c1d8sA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=yBRBhW5YKHj3jAlMQDx+Ooj/mkK5nIb8WtMGBuEk1rQ=; b=HvryuH6P2ZlP0ZoUzKQWixPzUp8D9nqLM7EIbJyZfwc/aFC9pLE3E6/DERfjbY9sAHWRTsISGaw0dVmR8r/bURETX97jc4bUclTNjmpcrvXV5amti2FCHJa+2dgR4V6Jr/V2JNGMTZg3w8K5naR0AyHZj6+t8ozr3Fo7O33fcWunJtelG09oRN4mrZcK03R66/a3QfHPE09Fif5RMMWCwb/mTWPCibsjlq8TIf5zgFmkkMdgrnGZLGZfec7HKyefGYfWCCuWhsLNIQeLe+63JBk633zNiBRrQX0oEkCrZ01TJYinmXmFAyP8FwMTTlHpOBj2fsYMpDNY7zMqqkmbbA== ARC-Authentication-Results: i=1; mx.microsoft.com 1;spf=pass smtp.mailfrom=wdc.com;dmarc=pass action=none header.from=wdc.com;dkim=pass header.d=wdc.com;arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=yBRBhW5YKHj3jAlMQDx+Ooj/mkK5nIb8WtMGBuEk1rQ=; b=WCWoIW+lrR4dFJfFiN1mkfDb2YJM7N64Dz/vjiQmcwiFf2pzocOn9tlJl6/4V4AH2nlDpBvObt4JWThpTAZvbsftVLMZRLpmdqA8v3npYCX2d+5zcTDFZyEAAL07V5jle0AIRyA2CAbvaVwGMDcvZBxl4wYvGLqFKXQwozN41Is= Received: from MN2PR04MB6061.namprd04.prod.outlook.com (20.178.246.15) by MN2PR04MB5678.namprd04.prod.outlook.com (20.179.21.211) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2115.14; Mon, 29 Jul 2019 11:57:29 +0000 Received: from MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8]) by MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8%7]) with mapi id 15.20.2115.005; Mon, 29 Jul 2019 11:57:29 +0000 From: Anup Patel To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K Subject: [RFC PATCH 11/16] RISC-V: KVM: Implement stage2 page table programming Thread-Topic: [RFC PATCH 11/16] RISC-V: KVM: Implement stage2 page table programming Thread-Index: AQHVRgTMjvWB0tfM1UmiHgwTo4JEWA== Date: Mon, 29 Jul 2019 11:57:29 +0000 Message-ID: <20190729115544.17895-12-anup.patel@wdc.com> References: <20190729115544.17895-1-anup.patel@wdc.com> In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-clientproxiedby: PN1PR01CA0116.INDPRD01.PROD.OUTLOOK.COM (2603:1096:c00::32) To MN2PR04MB6061.namprd04.prod.outlook.com (2603:10b6:208:d8::15) authentication-results: spf=none (sender IP is ) 
smtp.mailfrom=Anup.Patel@wdc.com; x-ms-exchange-messagesentrepresentingtype: 1 x-mailer: git-send-email 2.17.1 x-originating-ip: [106.51.23.101] x-ms-publictraffictype: Email x-ms-office365-filtering-correlation-id: 0d2a246b-32ec-4369-2bbc-08d7141bee80 x-ms-office365-filtering-ht: Tenant x-microsoft-antispam: BCL:0; PCL:0; RULEID:(2390118)(7020095)(4652040)(8989299)(4534185)(7168020)(4627221)(201703031133081)(201702281549075)(8990200)(5600148)(711020)(4605104)(1401327)(4618075)(2017052603328)(7193020); SRVR:MN2PR04MB5678; x-ms-traffictypediagnostic: MN2PR04MB5678: x-microsoft-antispam-prvs: wdcipoutbound: EOP-TRUE x-ms-oob-tlc-oobclassifiers: OLM:3044; x-forefront-prvs: 01136D2D90 x-forefront-antispam-report: SFV:NSPM; SFS:(10019020)(4636009)(376002)(39860400002)(136003)(366004)(396003)(346002)(199004)(189003)(7416002)(52116002)(6436002)(6486002)(7736002)(476003)(2616005)(2906002)(5660300002)(66066001)(4326008)(446003)(53946003)(68736007)(11346002)(81156014)(81166006)(14454004)(53936002)(26005)(186003)(78486014)(99286004)(36756003)(44832011)(486006)(305945005)(8676002)(54906003)(110136005)(25786009)(8936002)(478600001)(76176011)(102836004)(71200400001)(30864003)(6512007)(1076003)(66446008)(64756008)(66946007)(256004)(55236004)(316002)(9456002)(86362001)(66476007)(50226002)(66556008)(6506007)(386003)(71190400001)(14444005)(6116002)(3846002); DIR:OUT; SFP:1102; SCL:1; SRVR:MN2PR04MB5678; H:MN2PR04MB6061.namprd04.prod.outlook.com; FPR:; SPF:None; LANG:en; PTR:InfoNoRecords; A:1; MX:1; x-ms-exchange-senderadcheck: 1 x-microsoft-antispam-message-info: 6NKxQx/kfG+WuOj+P1eG/GjJxW0Z0MiLEwCBAz/10R2PCeOGImirkcX5Q8aS2EVxR0z47jdLG+tuxw4wqkObnMfo+Souyv9WZw/UflFquWviJZgwcZezs+TfoIvvljNN6mXwKiIJlKvY41kCpxrwF4Id11m0DfcU3jxKyE610zF1Pzy0UUqIiY/Uk5HCOKEHzrmgzcb+kQ/cChlPBP/ylg6rvT5lGF5X2tMAT5CzdXEwz59KbmhCSWLPxRa/lNUXrw+y6WoCxyQ42ZCY/Nu/j39vqw6yIg+j+tP7h7bvPyzc+1zGFjysFSAS4HN9YtwPNbVuAgzbyJ6UThkH4QoVpr0w/2V2ZfSJiWLzql3PpgnCadMuE5dACYLSojuJkpyBYe2QgV5BnCIslnz9VsWcxOoDgFJnZvzVNa8NJ7XTnhs= MIME-Version: 1.0 X-OriginatorOrg: wdc.com X-MS-Exchange-CrossTenant-Network-Message-Id: 0d2a246b-32ec-4369-2bbc-08d7141bee80 X-MS-Exchange-CrossTenant-originalarrivaltime: 29 Jul 2019 11:57:29.7022 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86 X-MS-Exchange-CrossTenant-mailboxtype: HOSTED X-MS-Exchange-CrossTenant-userprincipalname: Anup.Patel@wdc.com X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR04MB5678 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190729_045732_707244_514F3241 X-CRM114-Status: GOOD ( 18.61 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Damien Le Moal , Anup Patel , "kvm@vger.kernel.org" , Anup Patel , Daniel Lezcano , "linux-kernel@vger.kernel.org" , Christoph Hellwig , Atish Patra , Alistair Francis , Thomas Gleixner , "linux-riscv@lists.infradead.org" Sender: "linux-riscv" Errors-To: linux-riscv-bounces+patchwork-linux-riscv=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP This patch implements all required functions for programming the stage2 page table for each Guest/VM. At high-level, the flow of stage2 related functions is similar from KVM ARM/ARM64 implementation but the stage2 page table format is quite different for KVM RISC-V. 
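For context, the stage2 mapping helpers added below (stage2_set_pgd/pmd/pte and stage2_map_page) all follow the same pattern: walk the guest physical address down the page-table levels, allocate any missing intermediate tables from the pre-filled page cache, and finally install the leaf entry at the requested size. The following self-contained sketch models that walk; it is an illustration only, not the kernel code -- the 3-level/39-bit geometry, type names and flag bits are simplified assumptions.

/*
 * Toy user-space model of the walk performed by stage2_set_pte():
 * split the guest physical address into three 9-bit indices,
 * allocate any missing intermediate tables, then write the leaf.
 * All names, flag bits and the 3-level/39-bit geometry are
 * illustrative assumptions, not the kernel definitions.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef uint64_t pte_t;

#define PTRS_PER_TBL	512ULL		/* 9 index bits per level */
#define TBL_BYTES	(PTRS_PER_TBL * sizeof(pte_t))
#define PAGE_SHIFT	12
#define PTE_VALID	(1ULL << 0)
#define PTE_LEAF	(1ULL << 1)	/* stands in for the R/W/X bits */

static pte_t *alloc_table(void)
{
	/* 4K-aligned so the low PTE bits stay free for flags */
	pte_t *tbl = aligned_alloc(4096, TBL_BYTES);

	if (!tbl)
		exit(1);
	memset(tbl, 0, TBL_BYTES);
	return tbl;
}

static unsigned int idx(uint64_t gpa, int level)
{
	/* level 2 indexes bits [38:30], level 0 indexes bits [20:12] */
	return (gpa >> (PAGE_SHIFT + 9 * level)) & (PTRS_PER_TBL - 1);
}

static pte_t *next_table(pte_t *entry)
{
	/* Allocate the child table on first use, like the page cache does */
	if (!(*entry & PTE_VALID))
		*entry = (uint64_t)(uintptr_t)alloc_table() | PTE_VALID;

	return (pte_t *)(uintptr_t)(*entry & ~0xfffULL);
}

/* Map one 4K guest page (gpa) to a host page frame number (hpfn). */
static void set_stage2_pte(pte_t *root, uint64_t gpa, uint64_t hpfn)
{
	pte_t *tbl = root;
	int level;

	for (level = 2; level > 0; level--)
		tbl = next_table(&tbl[idx(gpa, level)]);

	tbl[idx(gpa, 0)] = (hpfn << PAGE_SHIFT) | PTE_VALID | PTE_LEAF;
}

int main(void)
{
	pte_t *root = alloc_table();

	set_stage2_pte(root, 0x80001000ULL, 0x12345ULL);
	printf("leaf installed for GPA 0x80001000\n");
	return 0;
}

The helpers in the patch differ mainly in that they take the leaf size into account (a 4K page, a 2M PMD leaf or a 1G PGD leaf via stage2_map_page) and flush the remote guest TLBs whenever a leaf entry is written.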
Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_host.h | 10 + arch/riscv/include/asm/pgtable-bits.h | 1 + arch/riscv/kvm/mmu.c | 636 +++++++++++++++++++++++++- 3 files changed, 637 insertions(+), 10 deletions(-) diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index dcc31f9ca13d..354d179c43cf 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -69,6 +69,13 @@ struct kvm_mmio_decode { int shift; }; +#define KVM_MMU_PAGE_CACHE_NR_OBJS 32 + +struct kvm_mmu_page_cache { + int nobjs; + void *objects[KVM_MMU_PAGE_CACHE_NR_OBJS]; +}; + struct kvm_cpu_context { unsigned long zero; unsigned long ra; @@ -154,6 +161,9 @@ struct kvm_vcpu_arch { /* MMIO instruction details */ struct kvm_mmio_decode mmio_decode; + /* Cache pages needed to program page tables with spinlock held */ + struct kvm_mmu_page_cache mmu_page_cache; + /* VCPU power-off state */ bool power_off; diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h index bbaeb5d35842..be49d62fcc2b 100644 --- a/arch/riscv/include/asm/pgtable-bits.h +++ b/arch/riscv/include/asm/pgtable-bits.h @@ -26,6 +26,7 @@ #define _PAGE_SPECIAL _PAGE_SOFT #define _PAGE_TABLE _PAGE_PRESENT +#define _PAGE_LEAF (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC) /* * _PAGE_PROT_NONE is set on not-present pages (and ignored by the hardware) to diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index 963f3c373781..9561c5e85f75 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -18,6 +18,432 @@ #include #include +#ifdef CONFIG_64BIT +#define stage2_have_pmd true +#define stage2_gpa_size ((phys_addr_t)(1ULL << 39)) +#define stage2_cache_min_pages 2 +#else +#define pmd_index(x) 0 +#define pfn_pmd(x, y) ({ pmd_t __x = { 0 }; __x; }) +#define stage2_have_pmd false +#define stage2_gpa_size ((phys_addr_t)(1ULL << 32)) +#define stage2_cache_min_pages 1 +#endif + +static int stage2_cache_topup(struct kvm_mmu_page_cache *pcache, + int min, int max) +{ + void *page; + + BUG_ON(max > KVM_MMU_PAGE_CACHE_NR_OBJS); + if (pcache->nobjs >= min) + return 0; + while (pcache->nobjs < max) { + page = (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO); + if (!page) + return -ENOMEM; + pcache->objects[pcache->nobjs++] = page; + } + + return 0; +} + +static void stage2_cache_flush(struct kvm_mmu_page_cache *pcache) +{ + while (pcache && pcache->nobjs) + free_page((unsigned long)pcache->objects[--pcache->nobjs]); +} + +static void *stage2_cache_alloc(struct kvm_mmu_page_cache *pcache) +{ + void *p; + + if (!pcache) + return NULL; + + BUG_ON(!pcache->nobjs); + p = pcache->objects[--pcache->nobjs]; + + return p; +} + +struct local_guest_tlb_info { + struct kvm_vmid *vmid; + gpa_t addr; +}; + +static void local_guest_tlb_flush_vmid_gpa(void *info) +{ + struct local_guest_tlb_info *infop = info; + + __kvm_riscv_hfence_gvma_vmid_gpa(READ_ONCE(infop->vmid->vmid_version), + infop->addr); +} + +static void stage2_remote_tlb_flush(struct kvm *kvm, gpa_t addr) +{ + struct local_guest_tlb_info info; + struct kvm_vmid *vmid = &kvm->arch.vmid; + + /* TODO: This should be SBI call */ + info.vmid = vmid; + info.addr = addr; + preempt_disable(); + smp_call_function_many(cpu_all_mask, local_guest_tlb_flush_vmid_gpa, + &info, true); + preempt_enable(); +} + +static int stage2_set_pgd(struct kvm *kvm, gpa_t addr, const pgd_t *new_pgd) +{ + pgd_t *pgdp = &kvm->arch.pgd[pgd_index(addr)]; + + *pgdp = *new_pgd; + if (pgd_val(*pgdp) & _PAGE_LEAF) + stage2_remote_tlb_flush(kvm, addr); + + return 
0; +} + +static int stage2_set_pmd(struct kvm *kvm, struct kvm_mmu_page_cache *pcache, + gpa_t addr, const pmd_t *new_pmd) +{ + int rc; + pmd_t *pmdp; + pgd_t new_pgd; + pgd_t *pgdp = &kvm->arch.pgd[pgd_index(addr)]; + + if (!pgd_val(*pgdp)) { + pmdp = stage2_cache_alloc(pcache); + if (!pmdp) + return -ENOMEM; + new_pgd = pfn_pgd(PFN_DOWN(__pa(pmdp)), __pgprot(_PAGE_TABLE)); + rc = stage2_set_pgd(kvm, addr, &new_pgd); + if (rc) + return rc; + } + + if (pgd_val(*pgdp) & _PAGE_LEAF) + return -EEXIST; + + pmdp = (void *)pgd_page_vaddr(*pgdp); + pmdp = &pmdp[pmd_index(addr)]; + + *pmdp = *new_pmd; + if (pmd_val(*pmdp) & _PAGE_LEAF) + stage2_remote_tlb_flush(kvm, addr); + + return 0; +} + +static int stage2_set_pte(struct kvm *kvm, + struct kvm_mmu_page_cache *pcache, + gpa_t addr, const pte_t *new_pte) +{ + int rc; + pte_t *ptep; + pmd_t new_pmd; + pmd_t *pmdp; + pgd_t new_pgd; + pgd_t *pgdp = &kvm->arch.pgd[pgd_index(addr)]; + + if (!pgd_val(*pgdp)) { + pmdp = stage2_cache_alloc(pcache); + if (!pmdp) + return -ENOMEM; + new_pgd = pfn_pgd(PFN_DOWN(__pa(pmdp)), __pgprot(_PAGE_TABLE)); + rc = stage2_set_pgd(kvm, addr, &new_pgd); + if (rc) + return rc; + } + + if (pgd_val(*pgdp) & _PAGE_LEAF) + return -EEXIST; + + if (stage2_have_pmd) { + pmdp = (void *)pgd_page_vaddr(*pgdp); + pmdp = &pmdp[pmd_index(addr)]; + if (!pmd_present(*pmdp)) { + ptep = stage2_cache_alloc(pcache); + if (!ptep) + return -ENOMEM; + new_pmd = pfn_pmd(PFN_DOWN(__pa(ptep)), + __pgprot(_PAGE_TABLE)); + rc = stage2_set_pmd(kvm, pcache, addr, &new_pmd); + if (rc) + return rc; + } + + if (pmd_val(*pmdp) & _PAGE_LEAF) + return -EEXIST; + + ptep = (void *)pmd_page_vaddr(*pmdp); + } else { + ptep = (void *)pgd_page_vaddr(*pgdp); + } + + ptep = &ptep[pte_index(addr)]; + + *ptep = *new_pte; + if (pte_val(*ptep) & _PAGE_LEAF) + stage2_remote_tlb_flush(kvm, addr); + + return 0; +} + +static int stage2_map_page(struct kvm *kvm, + struct kvm_mmu_page_cache *pcache, + gpa_t gpa, phys_addr_t hpa, + unsigned long page_size, pgprot_t prot) +{ + pte_t new_pte; + pmd_t new_pmd; + pgd_t new_pgd; + + if (page_size == PAGE_SIZE) { + new_pte = pfn_pte(PFN_DOWN(hpa), prot); + return stage2_set_pte(kvm, pcache, gpa, &new_pte); + } + + if (stage2_have_pmd && page_size == PMD_SIZE) { + new_pmd = pfn_pmd(PFN_DOWN(hpa), prot); + return stage2_set_pmd(kvm, pcache, gpa, &new_pmd); + } + + if (page_size == PGDIR_SIZE) { + new_pgd = pfn_pgd(PFN_DOWN(hpa), prot); + return stage2_set_pgd(kvm, gpa, &new_pgd); + } + + return -EINVAL; +} + +enum stage2_op { + STAGE2_OP_NOP = 0, /* Nothing */ + STAGE2_OP_CLEAR, /* Clear/Unmap */ + STAGE2_OP_WP, /* Write-protect */ +}; + +static void stage2_op_pte(struct kvm *kvm, gpa_t addr, pte_t *ptep, + enum stage2_op op) +{ + BUG_ON(addr & (PAGE_SIZE - 1)); + + if (!pte_present(*ptep)) + return; + + if (op == STAGE2_OP_CLEAR) + set_pte(ptep, __pte(0)); + else if (op == STAGE2_OP_WP) + set_pte(ptep, __pte(pte_val(*ptep) & ~_PAGE_WRITE)); + stage2_remote_tlb_flush(kvm, addr); +} + +static void stage2_op_pmd(struct kvm *kvm, gpa_t addr, pmd_t *pmdp, + enum stage2_op op) +{ + int i; + pte_t *ptep; + + BUG_ON(addr & (PMD_SIZE - 1)); + + if (!pmd_present(*pmdp)) + return; + + if (pmd_val(*pmdp) & _PAGE_LEAF) + ptep = NULL; + else + ptep = (pte_t *)pmd_page_vaddr(*pmdp); + + if (op == STAGE2_OP_CLEAR) + set_pmd(pmdp, __pmd(0)); + + if (ptep) { + for (i = 0; i < PTRS_PER_PTE; i++) + stage2_op_pte(kvm, addr + i * PAGE_SIZE, &ptep[i], op); + if (op == STAGE2_OP_CLEAR) + put_page(virt_to_page(ptep)); + } else { + if (op == STAGE2_OP_WP) 
+ set_pmd(pmdp, __pmd(pmd_val(*pmdp) & ~_PAGE_WRITE)); + stage2_remote_tlb_flush(kvm, addr); + } +} + +static void stage2_op_pgd(struct kvm *kvm, gpa_t addr, pgd_t *pgdp, + enum stage2_op op) +{ + int i; + pte_t *ptep; + pmd_t *pmdp; + + BUG_ON(addr & (PGDIR_SIZE - 1)); + + if (!pgd_val(*pgdp)) + return; + + ptep = NULL; + pmdp = NULL; + if (!(pgd_val(*pgdp) & _PAGE_LEAF)) { + if (stage2_have_pmd) + pmdp = (pmd_t *)pgd_page_vaddr(*pgdp); + else + ptep = (pte_t *)pgd_page_vaddr(*pgdp); + } + + if (op == STAGE2_OP_CLEAR) + set_pgd(pgdp, __pgd(0)); + + if (pmdp) { + for (i = 0; i < PTRS_PER_PMD; i++) + stage2_op_pmd(kvm, addr + i * PMD_SIZE, &pmdp[i], op); + if (op == STAGE2_OP_CLEAR) + put_page(virt_to_page(pmdp)); + } else if (ptep) { + for (i = 0; i < PTRS_PER_PTE; i++) + stage2_op_pte(kvm, addr + i * PAGE_SIZE, &ptep[i], op); + if (op == STAGE2_OP_CLEAR) + put_page(virt_to_page(ptep)); + } else { + if (op == STAGE2_OP_WP) + set_pgd(pgdp, __pgd(pgd_val(*pgdp) & ~_PAGE_WRITE)); + stage2_remote_tlb_flush(kvm, addr); + } +} + +static void stage2_unmap_range(struct kvm *kvm, gpa_t start, gpa_t size) +{ + pmd_t *pmdp; + pte_t *ptep; + pgd_t *pgdp; + gpa_t addr = start, end = start + size; + + while (addr < end) { + pgdp = &kvm->arch.pgd[pgd_index(addr)]; + if (!pgd_val(*pgdp)) { + addr += PGDIR_SIZE; + continue; + } else if (!(addr & (PGDIR_SIZE - 1)) && + ((end - addr) >= PGDIR_SIZE)) { + stage2_op_pgd(kvm, addr, pgdp, STAGE2_OP_CLEAR); + addr += PGDIR_SIZE; + continue; + } + + if (stage2_have_pmd) { + pmdp = (pmd_t *)pgd_page_vaddr(*pgdp); + if (!pmd_present(*pmdp)) { + addr += PMD_SIZE; + continue; + } else if (!(addr & (PMD_SIZE - 1)) && + ((end - addr) >= PMD_SIZE)) { + stage2_op_pmd(kvm, addr, pmdp, + STAGE2_OP_CLEAR); + addr += PMD_SIZE; + continue; + } + ptep = (pte_t *)pmd_page_vaddr(*pmdp); + } else { + ptep = (pte_t *)pgd_page_vaddr(*pgdp); + } + + stage2_op_pte(kvm, addr, ptep, STAGE2_OP_CLEAR); + addr += PAGE_SIZE; + } +} + +static void stage2_wp_range(struct kvm *kvm, gpa_t start, gpa_t end) +{ + pmd_t *pmdp; + pte_t *ptep; + pgd_t *pgdp; + gpa_t addr = start; + + while (addr < end) { + pgdp = &kvm->arch.pgd[pgd_index(addr)]; + if (!pgd_val(*pgdp)) { + addr += PGDIR_SIZE; + continue; + } else if (!(addr & (PGDIR_SIZE - 1)) && + ((end - addr) >= PGDIR_SIZE)) { + stage2_op_pgd(kvm, addr, pgdp, STAGE2_OP_WP); + addr += PGDIR_SIZE; + continue; + } + + if (stage2_have_pmd) { + pmdp = (pmd_t *)pgd_page_vaddr(*pgdp); + if (!pmd_present(*pmdp)) { + addr += PMD_SIZE; + continue; + } else if (!(addr & (PMD_SIZE - 1)) && + ((end - addr) >= PMD_SIZE)) { + stage2_op_pmd(kvm, addr, pmdp, STAGE2_OP_WP); + addr += PMD_SIZE; + continue; + } + ptep = (pte_t *)pmd_page_vaddr(*pmdp); + } else { + ptep = (pte_t *)pgd_page_vaddr(*pgdp); + } + + stage2_op_pte(kvm, addr, ptep, STAGE2_OP_WP); + addr += PAGE_SIZE; + } +} + +void stage2_wp_memory_region(struct kvm *kvm, int slot) +{ + struct kvm_memslots *slots = kvm_memslots(kvm); + struct kvm_memory_slot *memslot = id_to_memslot(slots, slot); + phys_addr_t start = memslot->base_gfn << PAGE_SHIFT; + phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT; + + spin_lock(&kvm->mmu_lock); + stage2_wp_range(kvm, start, end); + spin_unlock(&kvm->mmu_lock); + kvm_flush_remote_tlbs(kvm); +} + +int stage2_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, + unsigned long size, bool writable) +{ + pte_t pte; + int ret = 0; + unsigned long pfn; + phys_addr_t addr, end; + struct kvm_mmu_page_cache pcache = { 0, }; + + end = (gpa + size + PAGE_SIZE - 
1) & PAGE_MASK; + pfn = __phys_to_pfn(hpa); + + for (addr = gpa; addr < end; addr += PAGE_SIZE) { + pte = pfn_pte(pfn, PAGE_KERNEL); + + if (!writable) + pte = pte_wrprotect(pte); + + ret = stage2_cache_topup(&pcache, + stage2_cache_min_pages, + KVM_MMU_PAGE_CACHE_NR_OBJS); + if (ret) + goto out; + + spin_lock(&kvm->mmu_lock); + ret = stage2_set_pte(kvm, &pcache, addr, &pte); + spin_unlock(&kvm->mmu_lock); + if (ret) + goto out; + + pfn++; + } + +out: + stage2_cache_flush(&pcache); + return ret; + +} + void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free, struct kvm_memory_slot *dont) { @@ -35,7 +461,7 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) void kvm_arch_flush_shadow_all(struct kvm *kvm) { - /* TODO: */ + kvm_riscv_stage2_free_pgd(kvm); } void kvm_arch_flush_shadow_memslot(struct kvm *kvm, @@ -49,7 +475,13 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, const struct kvm_memory_slot *new, enum kvm_mr_change change) { - /* TODO: */ + /* + * At this point memslot has been committed and there is an + * allocated dirty_bitmap[], dirty pages will be be tracked while the + * memory slot is write protected. + */ + if (change != KVM_MR_DELETE && mem->flags & KVM_MEM_LOG_DIRTY_PAGES) + stage2_wp_memory_region(kvm, mem->slot); } int kvm_arch_prepare_memory_region(struct kvm *kvm, @@ -57,34 +489,218 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, const struct kvm_userspace_memory_region *mem, enum kvm_mr_change change) { - /* TODO: */ - return 0; + hva_t hva = mem->userspace_addr; + hva_t reg_end = hva + mem->memory_size; + bool writable = !(mem->flags & KVM_MEM_READONLY); + int ret = 0; + + if (change != KVM_MR_CREATE && change != KVM_MR_MOVE && + change != KVM_MR_FLAGS_ONLY) + return 0; + + /* + * Prevent userspace from creating a memory region outside of the GPA + * space addressable by the KVM guest GPA space. + */ + if ((memslot->base_gfn + memslot->npages) >= + (stage2_gpa_size >> PAGE_SHIFT)) + return -EFAULT; + + down_read(¤t->mm->mmap_sem); + + /* + * A memory region could potentially cover multiple VMAs, and + * any holes between them, so iterate over all of them to find + * out if we can map any of them right now. + * + * +--------------------------------------------+ + * +---------------+----------------+ +----------------+ + * | : VMA 1 | VMA 2 | | VMA 3 : | + * +---------------+----------------+ +----------------+ + * | memory region | + * +--------------------------------------------+ + */ + do { + struct vm_area_struct *vma = find_vma(current->mm, hva); + hva_t vm_start, vm_end; + + if (!vma || vma->vm_start >= reg_end) + break; + + /* + * Mapping a read-only VMA is only allowed if the + * memory region is configured as read-only. 
+ */ + if (writable && !(vma->vm_flags & VM_WRITE)) { + ret = -EPERM; + break; + } + + /* Take the intersection of this VMA with the memory region */ + vm_start = max(hva, vma->vm_start); + vm_end = min(reg_end, vma->vm_end); + + if (vma->vm_flags & VM_PFNMAP) { + gpa_t gpa = mem->guest_phys_addr + + (vm_start - mem->userspace_addr); + phys_addr_t pa; + + pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT; + pa += vm_start - vma->vm_start; + + /* IO region dirty page logging not allowed */ + if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES) { + ret = -EINVAL; + goto out; + } + + ret = stage2_ioremap(kvm, gpa, pa, + vm_end - vm_start, writable); + if (ret) + break; + } + hva = vm_end; + } while (hva < reg_end); + + if (change == KVM_MR_FLAGS_ONLY) + goto out; + + spin_lock(&kvm->mmu_lock); + if (ret) + stage2_unmap_range(kvm, mem->guest_phys_addr, + mem->memory_size); + spin_unlock(&kvm->mmu_lock); + +out: + up_read(¤t->mm->mmap_sem); + return ret; } int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva, bool is_write) { - /* TODO: */ - return 0; + int ret; + short lsb; + kvm_pfn_t hfn; + bool writeable; + gfn_t gfn = gpa >> PAGE_SHIFT; + struct vm_area_struct *vma; + struct kvm *kvm = vcpu->kvm; + struct kvm_mmu_page_cache *pcache = &vcpu->arch.mmu_page_cache; + unsigned long vma_pagesize; + + down_read(¤t->mm->mmap_sem); + + vma = find_vma_intersection(current->mm, hva, hva + 1); + if (unlikely(!vma)) { + kvm_err("Failed to find VMA for hva 0x%lx\n", hva); + up_read(¤t->mm->mmap_sem); + return -EFAULT; + } + + vma_pagesize = vma_kernel_pagesize(vma); + + up_read(¤t->mm->mmap_sem); + + if (vma_pagesize != PGDIR_SIZE && + vma_pagesize != PMD_SIZE && + vma_pagesize != PAGE_SIZE) { + kvm_err("Invalid VMA page size 0x%lx\n", vma_pagesize); + return -EFAULT; + } + + /* We need minimum second+third level pages */ + ret = stage2_cache_topup(pcache, stage2_cache_min_pages, + KVM_MMU_PAGE_CACHE_NR_OBJS); + if (ret) { + kvm_err("Failed to topup stage2 cache\n"); + return ret; + } + + hfn = gfn_to_pfn_prot(kvm, gfn, is_write, &writeable); + if (hfn == KVM_PFN_ERR_HWPOISON) { + if (is_vm_hugetlb_page(vma)) + lsb = huge_page_shift(hstate_vma(vma)); + else + lsb = PAGE_SHIFT; + + send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva, + lsb, current); + return 0; + } + if (is_error_noslot_pfn(hfn)) + return -EFAULT; + if (!writeable && is_write) + return -EPERM; + + spin_lock(&kvm->mmu_lock); + + if (writeable) { + kvm_set_pfn_dirty(hfn); + ret = stage2_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, + vma_pagesize, PAGE_WRITE_EXEC); + } else { + ret = stage2_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, + vma_pagesize, PAGE_READ_EXEC); + } + + if (ret) + kvm_err("Failed to map in stage2\n"); + + spin_unlock(&kvm->mmu_lock); + kvm_set_pfn_accessed(hfn); + kvm_release_pfn_clean(hfn); + return ret; } void kvm_riscv_stage2_flush_cache(struct kvm_vcpu *vcpu) { - /* TODO: */ + stage2_cache_flush(&vcpu->arch.mmu_page_cache); } int kvm_riscv_stage2_alloc_pgd(struct kvm *kvm) { - /* TODO: */ + if (kvm->arch.pgd != NULL) { + kvm_err("kvm_arch already initialized?\n"); + return -EINVAL; + } + + kvm->arch.pgd = alloc_pages_exact(PAGE_SIZE, GFP_KERNEL | __GFP_ZERO); + if (!kvm->arch.pgd) + return -ENOMEM; + kvm->arch.pgd_phys = virt_to_phys(kvm->arch.pgd); + return 0; } void kvm_riscv_stage2_free_pgd(struct kvm *kvm) { - /* TODO: */ + void *pgd = NULL; + + spin_lock(&kvm->mmu_lock); + if (kvm->arch.pgd) { + stage2_unmap_range(kvm, 0UL, stage2_gpa_size); + pgd = READ_ONCE(kvm->arch.pgd); + kvm->arch.pgd = 
NULL; + kvm->arch.pgd_phys = 0; + } + spin_unlock(&kvm->mmu_lock); + + /* Free the HW pgd, one page at a time */ + if (pgd) + free_pages_exact(pgd, PAGE_SIZE); } void kvm_riscv_stage2_update_pgtbl(struct kvm_vcpu *vcpu) { - /* TODO: */ + unsigned long hgatp = HGATP_MODE; + struct kvm_arch *k = &vcpu->kvm->arch; + + hgatp |= (k->vmid.vmid << HGATP_VMID_SHIFT) & HGATP_VMID_MASK; + hgatp |= (k->pgd_phys >> PAGE_SHIFT) & HGATP_PPN; + + csr_write(CSR_HGATP, hgatp); + + if (!kvm_riscv_stage2_vmid_bits()) + __kvm_riscv_hfence_gvma_all(); } From patchwork Mon Jul 29 11:57:35 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 11063675 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DFFAF14E5 for ; Mon, 29 Jul 2019 11:57:50 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id CA2A320069 for ; Mon, 29 Jul 2019 11:57:50 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id BB4AE20408; Mon, 29 Jul 2019 11:57:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 0DD5020069 for ; Mon, 29 Jul 2019 11:57:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:In-Reply-To:References: Message-ID:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=NxSzQf+6WWuVSzGeJxKSeMJ7IktP0iC7OoCC/xaTj/U=; b=gOyLkz6q7/Dtqt qKUZGcA9e64zcCe1NrsNnftL3VU1KnzU7fu5IRKH9AcKLUOCApa5EfzJkMj9ATGm8MHd1fK3dyeiM dmAQRlIoYW2eTdmESyjKw+3LDVXsms5ivDDkvJUbPow2v3jePTj0sjYcBsZqOMVV2kSDxU5TZ33/j tAK/KkxuiBR7uO/u3xyzBUzf3w1CYIOgEP65KWEiiN4r9kfOMdMtsftlP680PVsunG1gTDJJFQqiE x190eI/NmPAsA/T/0xNPCu7FdruKLksbxdOCZ+8L3HmSzEY79P71LTpZ4pkkht9pkROwwo9AO4jga KRP4gw14+331Y6QKHglw==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hs4Hi-000342-MU; Mon, 29 Jul 2019 11:57:46 +0000 Received: from esa5.hgst.iphmx.com ([216.71.153.144]) by bombadil.infradead.org with esmtps (Exim 4.92 #3 (Red Hat Linux)) id 1hs4He-00032k-Tf for linux-riscv@lists.infradead.org; Mon, 29 Jul 2019 11:57:44 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1564401463; x=1595937463; h=from:to:cc:subject:date:message-id:references: in-reply-to:content-transfer-encoding:mime-version; bh=2uj9RJOL/IUN81lh4o3H+8jvMRuhXSv/h40mQKac/yo=; b=k7K6THTCoFN1gBZ7o4YAJYH+RMKuorRvmmLuBoAMPMiWWDhXqw/l7Uf1 Kt+zufg+vFDrs19KDnTOuFfTs3oQCSBcv1kK6YPrHhZ4q31st0pFAXqfQ lUHof90G42wJXDuMmAnKArBuBvKcQjfSzuNh25v27tC1qrD4Bn8P6YGHv VUFtuq/o1clnx0ZfonopZFV001MrG7TKooEQSzrFIGDZ0rsBBKr/cjr8H 
Oeip5h/je7y/JI6JSGZaGGlesZx0zrAnR2/B5UQAaF/futwoBU6yevVbS 7Z7iy903rB6TAynhOfC+EPvG4j+7OeXEKDQwy7QiKtFVXzdWmb993OKg0 A==; IronPort-SDR: KiZims6sAF7seS/lH5FSpWJeaE1hST4HLfaB+QMn541TinlZT5PsnWlqEXT6xoHcWxigliDNBQ Htj5ZtPjk+ZDvxDmlMqpWbollRL4shR7pK0K8eO07dU7l7uCb4hXLlOx8FbpPJHVwV3zf+MeVb TnNf57iiFYh9F7de4bbTj0a0F8ry/yeLFKuJ8Jqfdi1ugYgWThGC7OXqjcNzRVeXr7g3y9z0IV yHQ84EndvZ0UIShz0SuJslzPQA/tFioYPhlYBfv225FOaPyA5wPhJt+KGV4ec4KUWGV7N/qiFw TEk= X-IronPort-AV: E=Sophos;i="5.64,322,1559491200"; d="scan'208";a="115403245" Received: from mail-by2nam01lp2057.outbound.protection.outlook.com (HELO NAM01-BY2-obe.outbound.protection.outlook.com) ([104.47.34.57]) by ob1.hgst.iphmx.com with ESMTP; 29 Jul 2019 19:57:39 +0800 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=hW97hUzcYpyWuISnp+FDyIWu1gxo0iCZTnLqaYlqK4Dxr5D9vFAQDxVB0MISIH59RripnVeBnJxDuFzlxeoQqxI11pcEnVz3wk8TBdZke/zbCZTUj/lcdPzjne//Sjq+FQMULj1gJYSvvDGXpb+vtzqW/4fiAISh1k0VnSXvubdniMNwloN4rom+ezp40c0hzRkbFhB2yZlP30qcLwTzGIU9xGA8ChvzZUegCgbhJtXzvzKdm9u+xSd+iUcQj3VDV+AkX2yqIUFh0ojVpKhmIChEcu40ZKp5slzj8ssUDH6vG582V8ewkzslcZubZ+E1wyG84mJmR2Obpey1ADEBhA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=WSTZ7sOCJRD95mpCZGWNzrYtrX4nMmywkXcQy9lnkng=; b=lpsiPe8Dxkb5iWZLNw4vEM9r+qvCZmMpmTslZ8MRpPWUlsnsOeqZZ2JXwpsgkIF76lbPWh7PwEGzt4NcFR48+mgwKqyZH/R7oM+BPNxivPpNbUs2E+fS2YNDQlWM2nJ/TQ+7GLZ/YjM1Mr2ADytl1SAj4C/Ici43F+r08rpTq5GlW0+PDd8Fdfk2ChEurs/pdmmy9qiVomAV9TF4V1lRXIz9m9vkckrgjvZMcO0Tgz7hkpuwUSsvTxEUdyGk34OsVdcEM0WAy7ZVoObSAqNxd9rdaEBuZjVQ809ASSxmbyG7YGe0BBbn0uaNTJIkUSmHOG19QnxZNijSUOrC1ANLUQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1;spf=pass smtp.mailfrom=wdc.com;dmarc=pass action=none header.from=wdc.com;dkim=pass header.d=wdc.com;arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=WSTZ7sOCJRD95mpCZGWNzrYtrX4nMmywkXcQy9lnkng=; b=e50BB/6eevZai87IBSSbwNA2b5pwCXVTyC/U1xyF261+cj+pO1MnAkxx5tYsBIfEyv5S+9DiCjzeq8k3YdMrzFNIxWkhHKeV4Nd/Oh/HS7FYK84KA/strZ5tNtzJQ9FNU/KMyIJWayIBFY1NC8E5R6dXLhrmN65mIoRlbnjj2VE= Received: from MN2PR04MB6061.namprd04.prod.outlook.com (20.178.246.15) by MN2PR04MB5678.namprd04.prod.outlook.com (20.179.21.211) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2115.14; Mon, 29 Jul 2019 11:57:36 +0000 Received: from MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8]) by MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8%7]) with mapi id 15.20.2115.005; Mon, 29 Jul 2019 11:57:35 +0000 From: Anup Patel To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K Subject: [RFC PATCH 12/16] RISC-V: KVM: Implement MMU notifiers Thread-Topic: [RFC PATCH 12/16] RISC-V: KVM: Implement MMU notifiers Thread-Index: AQHVRgTPe6HLXwSg9UaPtRtTuoF9hQ== Date: Mon, 29 Jul 2019 11:57:35 +0000 Message-ID: <20190729115544.17895-13-anup.patel@wdc.com> References: <20190729115544.17895-1-anup.patel@wdc.com> In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-clientproxiedby: PN1PR01CA0116.INDPRD01.PROD.OUTLOOK.COM (2603:1096:c00::32) To MN2PR04MB6061.namprd04.prod.outlook.com (2603:10b6:208:d8::15) authentication-results: spf=none (sender IP is ) 
smtp.mailfrom=Anup.Patel@wdc.com; x-ms-exchange-messagesentrepresentingtype: 1 x-mailer: git-send-email 2.17.1 x-originating-ip: [106.51.23.101] x-ms-publictraffictype: Email x-ms-office365-filtering-correlation-id: 44b6e46f-b9cc-4e8f-79a6-08d7141bf225 x-ms-office365-filtering-ht: Tenant x-microsoft-antispam: BCL:0; PCL:0; RULEID:(2390118)(7020095)(4652040)(8989299)(4534185)(7168020)(4627221)(201703031133081)(201702281549075)(8990200)(5600148)(711020)(4605104)(1401327)(4618075)(2017052603328)(7193020); SRVR:MN2PR04MB5678; x-ms-traffictypediagnostic: MN2PR04MB5678: x-microsoft-antispam-prvs: wdcipoutbound: EOP-TRUE x-ms-oob-tlc-oobclassifiers: OLM:785; x-forefront-prvs: 01136D2D90 x-forefront-antispam-report: SFV:NSPM; SFS:(10019020)(4636009)(376002)(39860400002)(136003)(366004)(396003)(346002)(199004)(189003)(7416002)(52116002)(6436002)(6486002)(7736002)(476003)(2616005)(2906002)(5660300002)(66066001)(4326008)(446003)(68736007)(11346002)(81156014)(81166006)(14454004)(53936002)(26005)(186003)(78486014)(99286004)(36756003)(44832011)(486006)(305945005)(8676002)(54906003)(110136005)(25786009)(8936002)(478600001)(76176011)(102836004)(71200400001)(6512007)(1076003)(66446008)(64756008)(66946007)(256004)(55236004)(316002)(9456002)(86362001)(66476007)(50226002)(66556008)(6506007)(386003)(71190400001)(14444005)(6116002)(3846002); DIR:OUT; SFP:1102; SCL:1; SRVR:MN2PR04MB5678; H:MN2PR04MB6061.namprd04.prod.outlook.com; FPR:; SPF:None; LANG:en; PTR:InfoNoRecords; A:1; MX:1; x-ms-exchange-senderadcheck: 1 x-microsoft-antispam-message-info: +/BO+uTT9Y2JSNmJ9XRKbiaQlotzF0aPC+7knAsA2+c/u5/EnDQA0+amR+Fhabjd/nnqOwiR7/uFR/EWKD8Uy2D7vf5bgLlfNU/Ew2PoeWW+JO7uob8Xn7Qj6N+ZRjDmfjFYcpieHpCjnAZhbBIeXZGsg9IDXtJLvKjtXpVK29EjURhD7IuSt52lke2dLJdq0TwjI7QNULM6wJEmE3LB1QIBOz0bT/2PpA8OzHCOsKoZK54Oa9yL643Vur/RxBk+LEF8dkkbzMrLmP/DWdKXVMQnp0QRpk7WvLo/010lna9+Ggu6HvxP87G2WdMeCilzNUQ+9EI2YWtOQxpS5xIfG3OE0mc5MkZgA+derwPnCWpEjKUaxAwgkT0un3TZC3lwZ7CX9FSE5YrXpXvA9a9yqRR4x/hPMxYl/COUBKwnVSI= MIME-Version: 1.0 X-OriginatorOrg: wdc.com X-MS-Exchange-CrossTenant-Network-Message-Id: 44b6e46f-b9cc-4e8f-79a6-08d7141bf225 X-MS-Exchange-CrossTenant-originalarrivaltime: 29 Jul 2019 11:57:35.8137 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86 X-MS-Exchange-CrossTenant-mailboxtype: HOSTED X-MS-Exchange-CrossTenant-userprincipalname: Anup.Patel@wdc.com X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR04MB5678 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190729_045743_127717_AFAB8C4E X-CRM114-Status: GOOD ( 16.59 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Damien Le Moal , Anup Patel , "kvm@vger.kernel.org" , Anup Patel , Daniel Lezcano , "linux-kernel@vger.kernel.org" , Christoph Hellwig , Atish Patra , Alistair Francis , Thomas Gleixner , "linux-riscv@lists.infradead.org" Sender: "linux-riscv" Errors-To: linux-riscv-bounces+patchwork-linux-riscv=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP This patch implements MMU notifiers for KVM RISC-V so that Guest physical address space is in-sync with Host physical address space. This will allow swapping, page migration, etc to work transparently with KVM RISC-V. 
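The core of the MMU notifier support is the handle_hva_to_gpa() helper added below: the notifier callbacks receive host-virtual (HVA) ranges, and the helper clips each range against every memslot and forwards the overlapping guest-physical (GPA) range to an operation such as unmap, set-pte or age. The following self-contained sketch models that clipping; the memslot layout, names and addresses are made up purely for illustration.

/*
 * Toy model of the hva -> gpa fan-out done by handle_hva_to_gpa():
 * clip the notifier's host-virtual range against every memslot and
 * hand each overlapping guest-physical range to a callback.
 * Self-contained illustration only; layout and names are invented.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

struct memslot {
	uint64_t userspace_addr;	/* HVA where the slot is mapped */
	uint64_t guest_phys_addr;	/* GPA the slot backs           */
	uint64_t npages;		/* size in 4K pages             */
};

typedef void (*range_handler)(uint64_t gpa, uint64_t size);

static void for_each_overlap(const struct memslot *slots, int nslots,
			     uint64_t start, uint64_t end,
			     range_handler handler)
{
	int i;

	for (i = 0; i < nslots; i++) {
		uint64_t hva_lo = slots[i].userspace_addr;
		uint64_t hva_hi = hva_lo + (slots[i].npages << PAGE_SHIFT);
		uint64_t lo = start > hva_lo ? start : hva_lo;
		uint64_t hi = end < hva_hi ? end : hva_hi;

		if (lo >= hi)
			continue;	/* no overlap with this slot */

		/* Same offset into the slot on the GPA side. */
		handler(slots[i].guest_phys_addr + (lo - hva_lo), hi - lo);
	}
}

static void unmap_gpa_range(uint64_t gpa, uint64_t size)
{
	printf("unmap GPA 0x%llx size 0x%llx\n",
	       (unsigned long long)gpa, (unsigned long long)size);
}

int main(void)
{
	struct memslot slots[] = {
		{ 0x7f0000000000ULL, 0x80000000ULL, 0x1000 },
	};

	/* e.g. an invalidation covering two host pages of the slot */
	for_each_overlap(slots, 1, 0x7f0000002000ULL, 0x7f0000004000ULL,
			 unmap_gpa_range);
	return 0;
}

In the patch itself the forwarded operations land in the stage2 helpers from the previous patch (stage2_unmap_range(), stage2_set_pte() and the test-and-clear-young walkers), and kvm_riscv_stage2_map() samples kvm->mmu_notifier_seq before the fault and re-checks it under mmu_lock via mmu_notifier_retry(), so a fault racing with an invalidation bails out and the access is replayed.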
Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_host.h | 7 ++ arch/riscv/kvm/Kconfig | 1 + arch/riscv/kvm/mmu.c | 200 +++++++++++++++++++++++++++++- 3 files changed, 207 insertions(+), 1 deletion(-) diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 354d179c43cf..58f61ce28461 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -177,6 +177,13 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {} static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {} +#define KVM_ARCH_WANT_MMU_NOTIFIER +int kvm_unmap_hva_range(struct kvm *kvm, + unsigned long start, unsigned long end); +int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte); +int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end); +int kvm_test_age_hva(struct kvm *kvm, unsigned long hva); + extern void __kvm_riscv_hfence_gvma_vmid_gpa(unsigned long vmid, unsigned long gpa); extern void __kvm_riscv_hfence_gvma_vmid(unsigned long vmid); diff --git a/arch/riscv/kvm/Kconfig b/arch/riscv/kvm/Kconfig index 35fd30d0e432..002e14ee37f6 100644 --- a/arch/riscv/kvm/Kconfig +++ b/arch/riscv/kvm/Kconfig @@ -20,6 +20,7 @@ if VIRTUALIZATION config KVM tristate "Kernel-based Virtual Machine (KVM) support" depends on OF + select MMU_NOTIFIER select PREEMPT_NOTIFIERS select ANON_INODES select KVM_MMIO diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index 9561c5e85f75..5c992d4b4317 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -67,6 +67,66 @@ static void *stage2_cache_alloc(struct kvm_mmu_page_cache *pcache) return p; } +static int stage2_pgdp_test_and_clear_young(pgd_t *pgd) +{ + return ptep_test_and_clear_young(NULL, 0, (pte_t *)pgd); +} + +static int stage2_pmdp_test_and_clear_young(pmd_t *pmd) +{ + return ptep_test_and_clear_young(NULL, 0, (pte_t *)pmd); +} + +static int stage2_ptep_test_and_clear_young(pte_t *pte) +{ + return ptep_test_and_clear_young(NULL, 0, pte); +} + +static bool stage2_get_leaf_entry(struct kvm *kvm, gpa_t addr, + pgd_t **pgdpp, pmd_t **pmdpp, pte_t **ptepp) +{ + pgd_t *pgdp; + pmd_t *pmdp; + pte_t *ptep; + + *pgdpp = NULL; + *pmdpp = NULL; + *ptepp = NULL; + + pgdp = &kvm->arch.pgd[pgd_index(addr)]; + if (!pgd_val(*pgdp)) + return false; + if (pgd_val(*pgdp) & _PAGE_LEAF) { + *pgdpp = pgdp; + return true; + } + + if (stage2_have_pmd) { + pmdp = (void *)pgd_page_vaddr(*pgdp); + pmdp = &pmdp[pmd_index(addr)]; + if (!pmd_present(*pmdp)) + return false; + if (pmd_val(*pmdp) & _PAGE_LEAF) { + *pmdpp = pmdp; + return true; + } + + ptep = (void *)pmd_page_vaddr(*pmdp); + } else { + ptep = (void *)pgd_page_vaddr(*pgdp); + } + + ptep = &ptep[pte_index(addr)]; + if (!pte_present(*ptep)) + return false; + if (pte_val(*ptep) & _PAGE_LEAF) { + *ptepp = ptep; + return true; + } + + return false; +} + struct local_guest_tlb_info { struct kvm_vmid *vmid; gpa_t addr; @@ -444,6 +504,38 @@ int stage2_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, } +static int handle_hva_to_gpa(struct kvm *kvm, + unsigned long start, + unsigned long end, + int (*handler)(struct kvm *kvm, + gpa_t gpa, u64 size, + void *data), + void *data) +{ + struct kvm_memslots *slots; + struct kvm_memory_slot *memslot; + int ret = 0; + + slots = kvm_memslots(kvm); + + /* we only care about the pages that the guest sees */ + kvm_for_each_memslot(memslot, slots) { + unsigned long hva_start, hva_end; + gfn_t gpa; + + hva_start = 
max(start, memslot->userspace_addr); + hva_end = min(end, memslot->userspace_addr + + (memslot->npages << PAGE_SHIFT)); + if (hva_start >= hva_end) + continue; + + gpa = hva_to_gfn_memslot(hva_start, memslot) << PAGE_SHIFT; + ret |= handler(kvm, gpa, (u64)(hva_end - hva_start), data); + } + + return ret; +} + void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free, struct kvm_memory_slot *dont) { @@ -576,6 +668,106 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, return ret; } +static int kvm_unmap_hva_handler(struct kvm *kvm, + gpa_t gpa, u64 size, void *data) +{ + stage2_unmap_range(kvm, gpa, size); + return 0; +} + +int kvm_unmap_hva_range(struct kvm *kvm, + unsigned long start, unsigned long end) +{ + if (!kvm->arch.pgd) + return 0; + + handle_hva_to_gpa(kvm, start, end, + &kvm_unmap_hva_handler, NULL); + return 0; +} + +static int kvm_set_spte_handler(struct kvm *kvm, + gpa_t gpa, u64 size, void *data) +{ + pte_t *pte = (pte_t *)data; + + WARN_ON(size != PAGE_SIZE); + stage2_set_pte(kvm, NULL, gpa, pte); + + return 0; +} + +int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte) +{ + unsigned long end = hva + PAGE_SIZE; + kvm_pfn_t pfn = pte_pfn(pte); + pte_t stage2_pte; + + if (!kvm->arch.pgd) + return 0; + + stage2_pte = pfn_pte(pfn, PAGE_WRITE_EXEC); + handle_hva_to_gpa(kvm, hva, end, + &kvm_set_spte_handler, &stage2_pte); + + return 0; +} + +static int kvm_age_hva_handler(struct kvm *kvm, + gpa_t gpa, u64 size, void *data) +{ + pgd_t *pgd; + pmd_t *pmd; + pte_t *pte; + + WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PGDIR_SIZE); + if (!stage2_get_leaf_entry(kvm, gpa, &pgd, &pmd, &pte)) + return 0; + + if (pgd) + return stage2_pgdp_test_and_clear_young(pgd); + else if (pmd) + return stage2_pmdp_test_and_clear_young(pmd); + else + return stage2_ptep_test_and_clear_young(pte); +} + +int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end) +{ + if (!kvm->arch.pgd) + return 0; + + return handle_hva_to_gpa(kvm, start, end, kvm_age_hva_handler, NULL); +} + +static int kvm_test_age_hva_handler(struct kvm *kvm, + gpa_t gpa, u64 size, void *data) +{ + pgd_t *pgd; + pmd_t *pmd; + pte_t *pte; + + WARN_ON(size != PAGE_SIZE && size != PMD_SIZE); + if (!stage2_get_leaf_entry(kvm, gpa, &pgd, &pmd, &pte)) + return 0; + + if (pgd) + return pte_young(*((pte_t *)pgd)); + else if (pmd) + return pte_young(*((pte_t *)pmd)); + else + return pte_young(*pte); +} + +int kvm_test_age_hva(struct kvm *kvm, unsigned long hva) +{ + if (!kvm->arch.pgd) + return 0; + + return handle_hva_to_gpa(kvm, hva, hva, + kvm_test_age_hva_handler, NULL); +} + int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva, bool is_write) { @@ -587,7 +779,7 @@ int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva, struct vm_area_struct *vma; struct kvm *kvm = vcpu->kvm; struct kvm_mmu_page_cache *pcache = &vcpu->arch.mmu_page_cache; - unsigned long vma_pagesize; + unsigned long vma_pagesize, mmu_seq; down_read(¤t->mm->mmap_sem); @@ -617,6 +809,8 @@ int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva, return ret; } + mmu_seq = kvm->mmu_notifier_seq; + hfn = gfn_to_pfn_prot(kvm, gfn, is_write, &writeable); if (hfn == KVM_PFN_ERR_HWPOISON) { if (is_vm_hugetlb_page(vma)) @@ -635,6 +829,9 @@ int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva, spin_lock(&kvm->mmu_lock); + if (mmu_notifier_retry(kvm, mmu_seq)) + goto out_unlock; + if (writeable) { kvm_set_pfn_dirty(hfn); ret = 
stage2_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, @@ -647,6 +844,7 @@ int kvm_riscv_stage2_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva, if (ret) kvm_err("Failed to map in stage2\n"); +out_unlock: spin_unlock(&kvm->mmu_lock); kvm_set_pfn_accessed(hfn); kvm_release_pfn_clean(hfn);

From patchwork Mon Jul 29 11:57:42 2019
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 11063677
From: Anup Patel
To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K
Cc: Damien Le Moal , Anup Patel , "kvm@vger.kernel.org" , Anup Patel , Daniel Lezcano , "linux-kernel@vger.kernel.org" , Christoph Hellwig , Atish Patra , Alistair Francis , Thomas Gleixner , "linux-riscv@lists.infradead.org"
Subject: [RFC PATCH 13/16] RISC-V: KVM: Add timer functionality
Date: Mon, 29 Jul 2019 11:57:42 +0000
Message-ID: <20190729115544.17895-14-anup.patel@wdc.com>
References: <20190729115544.17895-1-anup.patel@wdc.com>
In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com>

From: Atish Patra

The RISC-V hypervisor specification doesn't have any virtual timer feature. Due to this, the guest VCPU timer will be programmed via SBI calls. The host will use a separate hrtimer event for each guest VCPU to provide timer functionality. We inject a virtual timer interrupt to the guest VCPU whenever the guest VCPU hrtimer event expires. The following features are not supported yet and will be added in future: 1. A time offset to adjust guest time from host time 2.
A saved next event in guest vcpu for vm migration Signed-off-by: Atish Patra Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_host.h | 4 + arch/riscv/include/asm/kvm_vcpu_timer.h | 32 +++++++ arch/riscv/kvm/Makefile | 2 +- arch/riscv/kvm/vcpu.c | 6 ++ arch/riscv/kvm/vcpu_timer.c | 106 ++++++++++++++++++++++++ drivers/clocksource/timer-riscv.c | 6 ++ include/clocksource/timer-riscv.h | 14 ++++ 7 files changed, 169 insertions(+), 1 deletion(-) create mode 100644 arch/riscv/include/asm/kvm_vcpu_timer.h create mode 100644 arch/riscv/kvm/vcpu_timer.c create mode 100644 include/clocksource/timer-riscv.h diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 58f61ce28461..193a7ff0eb31 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -12,6 +12,7 @@ #include #include #include +#include #ifdef CONFIG_64BIT #define KVM_MAX_VCPUS (1U << 16) @@ -158,6 +159,9 @@ struct kvm_vcpu_arch { raw_spinlock_t irqs_lock; unsigned long irqs_pending; + /* VCPU Timer */ + struct kvm_vcpu_timer timer; + /* MMIO instruction details */ struct kvm_mmio_decode mmio_decode; diff --git a/arch/riscv/include/asm/kvm_vcpu_timer.h b/arch/riscv/include/asm/kvm_vcpu_timer.h new file mode 100644 index 000000000000..df67ea86988e --- /dev/null +++ b/arch/riscv/include/asm/kvm_vcpu_timer.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * + * Authors: + * Atish Patra + */ + +#ifndef __KVM_VCPU_RISCV_TIMER_H +#define __KVM_VCPU_RISCV_TIMER_H + +#include + +#define VCPU_TIMER_PROGRAM_THRESHOLD_NS 1000 + +struct kvm_vcpu_timer { + bool init_done; + /* Check if the timer is programmed */ + bool is_set; + struct hrtimer hrt; + /* Mult & Shift values to get nanosec from cycles */ + u32 mult; + u32 shift; +}; + +int kvm_riscv_vcpu_timer_init(struct kvm_vcpu *vcpu); +int kvm_riscv_vcpu_timer_deinit(struct kvm_vcpu *vcpu); +int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu); +int kvm_riscv_vcpu_timer_next_event(struct kvm_vcpu *vcpu, + unsigned long ncycles); + +#endif diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index c0f57f26c13d..3e0c7558320d 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -9,6 +9,6 @@ ccflags-y := -Ivirt/kvm -Iarch/riscv/kvm kvm-objs := $(common-objs-y) kvm-objs += main.o vm.o vmid.o tlb.o mmu.o -kvm-objs += vcpu.o vcpu_exit.o vcpu_switch.o +kvm-objs += vcpu.o vcpu_exit.o vcpu_switch.o vcpu_timer.o obj-$(CONFIG_KVM) += kvm.o diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index f3b0cadc1973..ed1f06b17953 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -52,6 +52,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) memcpy(cntx, reset_cntx, sizeof(*cntx)); + kvm_riscv_vcpu_timer_reset(vcpu); + raw_spin_lock_irqsave(&vcpu->arch.irqs_lock, f); vcpu->arch.irqs_pending = 0; raw_spin_unlock_irqrestore(&vcpu->arch.irqs_lock, f); @@ -125,6 +127,9 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu) csr->hideleg |= SIE_STIE; csr->hideleg |= SIE_SEIE; + /* Setup VCPU timer */ + kvm_riscv_vcpu_timer_init(vcpu); + /* Reset VCPU */ kvm_riscv_reset_vcpu(vcpu); @@ -133,6 +138,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu) void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) { + kvm_riscv_vcpu_timer_deinit(vcpu); kvm_riscv_stage2_flush_cache(vcpu); kmem_cache_free(kvm_vcpu_cache, vcpu); } diff --git a/arch/riscv/kvm/vcpu_timer.c b/arch/riscv/kvm/vcpu_timer.c new file mode 100644 index 
000000000000..a45ca06e1aa6 --- /dev/null +++ b/arch/riscv/kvm/vcpu_timer.c @@ -0,0 +1,106 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * + * Authors: + * Atish Patra + */ + +#include +#include +#include +#include +#include +#include + +static enum hrtimer_restart kvm_riscv_vcpu_hrtimer_expired(struct hrtimer *h) +{ + struct kvm_vcpu_timer *t = container_of(h, struct kvm_vcpu_timer, hrt); + struct kvm_vcpu *vcpu = container_of(t, struct kvm_vcpu, arch.timer); + + t->is_set = false; + kvm_riscv_vcpu_set_interrupt(vcpu, IRQ_S_TIMER); + + return HRTIMER_NORESTART; +} + +static u64 kvm_riscv_delta_cycles2ns(u64 cycles, struct kvm_vcpu_timer *t) +{ + unsigned long flags; + u64 cycles_now, cycles_delta, delta_ns; + + local_irq_save(flags); + cycles_now = get_cycles64(); + if (cycles_now < cycles) + cycles_delta = cycles - cycles_now; + else + cycles_delta = 0; + delta_ns = (cycles_delta * t->mult) >> t->shift; + local_irq_restore(flags); + + return delta_ns; +} + +static int kvm_riscv_vcpu_timer_cancel(struct kvm_vcpu_timer *t) +{ + if (!t->init_done || !t->is_set) + return -EINVAL; + + hrtimer_cancel(&t->hrt); + t->is_set = false; + + return 0; +} + +int kvm_riscv_vcpu_timer_next_event(struct kvm_vcpu *vcpu, + unsigned long ncycles) +{ + struct kvm_vcpu_timer *t = &vcpu->arch.timer; + u64 delta_ns = kvm_riscv_delta_cycles2ns(ncycles, t); + + if (!t->init_done) + return -EINVAL; + + kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_S_TIMER); + + if (delta_ns > VCPU_TIMER_PROGRAM_THRESHOLD_NS) { + hrtimer_start(&t->hrt, ktime_add_ns(ktime_get(), delta_ns), + HRTIMER_MODE_ABS); + t->is_set = true; + } else + kvm_riscv_vcpu_set_interrupt(vcpu, IRQ_S_TIMER); + + return 0; +} + +int kvm_riscv_vcpu_timer_init(struct kvm_vcpu *vcpu) +{ + struct kvm_vcpu_timer *t = &vcpu->arch.timer; + + if (t->init_done) + return -EINVAL; + + hrtimer_init(&t->hrt, CLOCK_MONOTONIC, HRTIMER_MODE_ABS); + t->hrt.function = kvm_riscv_vcpu_hrtimer_expired; + t->init_done = true; + t->is_set = false; + + riscv_cs_get_mult_shift(&t->mult, &t->shift); + + return 0; +} + +int kvm_riscv_vcpu_timer_deinit(struct kvm_vcpu *vcpu) +{ + int ret; + + ret = kvm_riscv_vcpu_timer_cancel(&vcpu->arch.timer); + vcpu->arch.timer.init_done = false; + + return ret; +} + +int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu) +{ + return kvm_riscv_vcpu_timer_cancel(&vcpu->arch.timer); +} diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c index 09e031176bc6..749b25876cad 100644 --- a/drivers/clocksource/timer-riscv.c +++ b/drivers/clocksource/timer-riscv.c @@ -80,6 +80,12 @@ static int riscv_timer_dying_cpu(unsigned int cpu) return 0; } +void riscv_cs_get_mult_shift(u32 *mult, u32 *shift) +{ + *mult = riscv_clocksource.mult; + *shift = riscv_clocksource.shift; +} + /* called directly from the low-level interrupt handler */ void riscv_timer_interrupt(void) { diff --git a/include/clocksource/timer-riscv.h b/include/clocksource/timer-riscv.h new file mode 100644 index 000000000000..ecb9f70e2f98 --- /dev/null +++ b/include/clocksource/timer-riscv.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. 
+ * + * Authors: + * Atish Patra + */ + +#ifndef __KVM_TIMER_RISCV_H +#define __KVM_TIMER_RISCV_H + +void riscv_cs_get_mult_shift(u32 *mult, u32 *shift); + +#endif

From patchwork Mon Jul 29 11:57:48 2019
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 11063681
From: Anup Patel
To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K
Cc: Damien Le Moal , Anup Patel , "kvm@vger.kernel.org" , Anup Patel , Daniel Lezcano , "linux-kernel@vger.kernel.org" , Christoph Hellwig , Atish Patra , Alistair Francis , Thomas Gleixner , "linux-riscv@lists.infradead.org"
Subject: [RFC PATCH 14/16] RISC-V: KVM: FP lazy save/restore
Date: Mon, 29 Jul 2019 11:57:48 +0000
Message-ID: <20190729115544.17895-15-anup.patel@wdc.com>
References: <20190729115544.17895-1-anup.patel@wdc.com>
In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com>

From: Atish Patra

This patch adds floating point (F and D extension) context save/restore for guest VCPUs. The FP context is saved and restored lazily only when kernel enter/exits the in-kernel run loop and not during the KVM world switch. This way FP save/restore has minimal impact on KVM performance.
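The lazy policy is easy to get wrong, so here is a small stand-alone C model of the guest-side rule described above. It is an illustration only, not the kernel code, and every name in it is invented for the sketch: guest FP registers are dumped to memory on vcpu_put() only when the guest left sstatus.FS in the DIRTY state, and they are reloaded on vcpu_load() whenever FP is not OFF. (In the patch itself the host FP context is additionally saved and restored unconditionally, since host sstatus can be modified outside KVM's control.)

/*
 * Stand-alone model of the lazy guest FP policy described above -- an
 * illustration, not the kernel code; all names are invented for this sketch.
 */
#include <stdio.h>

enum fs_state { FS_OFF, FS_INITIAL, FS_CLEAN, FS_DIRTY };

struct vcpu_model {
	enum fs_state guest_fs;	/* models the guest view of sstatus.FS */
	int fp_saves;		/* number of full FP register dumps */
	int fp_restores;	/* number of full FP register reloads */
};

static void guest_fp_save(struct vcpu_model *v)
{
	if (v->guest_fs == FS_DIRTY) {	/* only dirty state is written back */
		v->fp_saves++;
		v->guest_fs = FS_CLEAN;
	}
}

static void guest_fp_restore(struct vcpu_model *v)
{
	if (v->guest_fs != FS_OFF) {	/* OFF means there is nothing to reload */
		v->fp_restores++;
		v->guest_fs = FS_CLEAN;
	}
}

static void vcpu_load(struct vcpu_model *v) { guest_fp_restore(v); }
static void vcpu_put(struct vcpu_model *v)  { guest_fp_save(v); }

int main(void)
{
	struct vcpu_model v = { .guest_fs = FS_INITIAL };
	int i;

	/* Ten scheduling round trips where the guest never touches FP. */
	for (i = 0; i < 10; i++) {
		vcpu_load(&v);
		vcpu_put(&v);
	}

	/* One run where it does; real hardware would set FS to DIRTY. */
	vcpu_load(&v);
	v.guest_fs = FS_DIRTY;
	vcpu_put(&v);

	printf("restores=%d saves=%d\n", v.fp_restores, v.fp_saves);
	return 0;
}

Running the model prints restores=11 saves=1: the ten FP-idle round trips never dump the registers, which is exactly the save traffic the lazy scheme is meant to avoid.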
Signed-off-by: Atish Patra Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_host.h | 5 + arch/riscv/kernel/asm-offsets.c | 72 +++++++++++++ arch/riscv/kvm/vcpu.c | 75 +++++++++++++ arch/riscv/kvm/vcpu_switch.S | 174 ++++++++++++++++++++++++++++++ 4 files changed, 326 insertions(+) diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 193a7ff0eb31..1bb4befa89da 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -113,6 +113,7 @@ struct kvm_cpu_context { unsigned long sepc; unsigned long sstatus; unsigned long hstatus; + union __riscv_fp_state fp; }; struct kvm_vcpu_csr { @@ -212,6 +213,10 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, unsigned long scause, unsigned long stval); void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch); +void __kvm_riscv_vcpu_fp_f_save(struct kvm_cpu_context *context); +void __kvm_riscv_vcpu_fp_f_restore(struct kvm_cpu_context *context); +void __kvm_riscv_vcpu_fp_d_save(struct kvm_cpu_context *context); +void __kvm_riscv_vcpu_fp_d_restore(struct kvm_cpu_context *context); int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq); int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq); diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c index 711656710190..9980069a1acf 100644 --- a/arch/riscv/kernel/asm-offsets.c +++ b/arch/riscv/kernel/asm-offsets.c @@ -185,6 +185,78 @@ void asm_offsets(void) OFFSET(KVM_ARCH_HOST_SSCRATCH, kvm_vcpu_arch, host_sscratch); OFFSET(KVM_ARCH_HOST_STVEC, kvm_vcpu_arch, host_stvec); + /* F extension */ + + OFFSET(KVM_ARCH_FP_F_F0, kvm_cpu_context, fp.f.f[0]); + OFFSET(KVM_ARCH_FP_F_F1, kvm_cpu_context, fp.f.f[1]); + OFFSET(KVM_ARCH_FP_F_F2, kvm_cpu_context, fp.f.f[2]); + OFFSET(KVM_ARCH_FP_F_F3, kvm_cpu_context, fp.f.f[3]); + OFFSET(KVM_ARCH_FP_F_F4, kvm_cpu_context, fp.f.f[4]); + OFFSET(KVM_ARCH_FP_F_F5, kvm_cpu_context, fp.f.f[5]); + OFFSET(KVM_ARCH_FP_F_F6, kvm_cpu_context, fp.f.f[6]); + OFFSET(KVM_ARCH_FP_F_F7, kvm_cpu_context, fp.f.f[7]); + OFFSET(KVM_ARCH_FP_F_F8, kvm_cpu_context, fp.f.f[8]); + OFFSET(KVM_ARCH_FP_F_F9, kvm_cpu_context, fp.f.f[9]); + OFFSET(KVM_ARCH_FP_F_F10, kvm_cpu_context, fp.f.f[10]); + OFFSET(KVM_ARCH_FP_F_F11, kvm_cpu_context, fp.f.f[11]); + OFFSET(KVM_ARCH_FP_F_F12, kvm_cpu_context, fp.f.f[12]); + OFFSET(KVM_ARCH_FP_F_F13, kvm_cpu_context, fp.f.f[13]); + OFFSET(KVM_ARCH_FP_F_F14, kvm_cpu_context, fp.f.f[14]); + OFFSET(KVM_ARCH_FP_F_F15, kvm_cpu_context, fp.f.f[15]); + OFFSET(KVM_ARCH_FP_F_F16, kvm_cpu_context, fp.f.f[16]); + OFFSET(KVM_ARCH_FP_F_F17, kvm_cpu_context, fp.f.f[17]); + OFFSET(KVM_ARCH_FP_F_F18, kvm_cpu_context, fp.f.f[18]); + OFFSET(KVM_ARCH_FP_F_F19, kvm_cpu_context, fp.f.f[19]); + OFFSET(KVM_ARCH_FP_F_F20, kvm_cpu_context, fp.f.f[20]); + OFFSET(KVM_ARCH_FP_F_F21, kvm_cpu_context, fp.f.f[21]); + OFFSET(KVM_ARCH_FP_F_F22, kvm_cpu_context, fp.f.f[22]); + OFFSET(KVM_ARCH_FP_F_F23, kvm_cpu_context, fp.f.f[23]); + OFFSET(KVM_ARCH_FP_F_F24, kvm_cpu_context, fp.f.f[24]); + OFFSET(KVM_ARCH_FP_F_F25, kvm_cpu_context, fp.f.f[25]); + OFFSET(KVM_ARCH_FP_F_F26, kvm_cpu_context, fp.f.f[26]); + OFFSET(KVM_ARCH_FP_F_F27, kvm_cpu_context, fp.f.f[27]); + OFFSET(KVM_ARCH_FP_F_F28, kvm_cpu_context, fp.f.f[28]); + OFFSET(KVM_ARCH_FP_F_F29, kvm_cpu_context, fp.f.f[29]); + OFFSET(KVM_ARCH_FP_F_F30, kvm_cpu_context, fp.f.f[30]); + OFFSET(KVM_ARCH_FP_F_F31, kvm_cpu_context, fp.f.f[31]); + OFFSET(KVM_ARCH_FP_F_FCSR, kvm_cpu_context, fp.f.fcsr); + + 
/* D extension */ + + OFFSET(KVM_ARCH_FP_D_F0, kvm_cpu_context, fp.d.f[0]); + OFFSET(KVM_ARCH_FP_D_F1, kvm_cpu_context, fp.d.f[1]); + OFFSET(KVM_ARCH_FP_D_F2, kvm_cpu_context, fp.d.f[2]); + OFFSET(KVM_ARCH_FP_D_F3, kvm_cpu_context, fp.d.f[3]); + OFFSET(KVM_ARCH_FP_D_F4, kvm_cpu_context, fp.d.f[4]); + OFFSET(KVM_ARCH_FP_D_F5, kvm_cpu_context, fp.d.f[5]); + OFFSET(KVM_ARCH_FP_D_F6, kvm_cpu_context, fp.d.f[6]); + OFFSET(KVM_ARCH_FP_D_F7, kvm_cpu_context, fp.d.f[7]); + OFFSET(KVM_ARCH_FP_D_F8, kvm_cpu_context, fp.d.f[8]); + OFFSET(KVM_ARCH_FP_D_F9, kvm_cpu_context, fp.d.f[9]); + OFFSET(KVM_ARCH_FP_D_F10, kvm_cpu_context, fp.d.f[10]); + OFFSET(KVM_ARCH_FP_D_F11, kvm_cpu_context, fp.d.f[11]); + OFFSET(KVM_ARCH_FP_D_F12, kvm_cpu_context, fp.d.f[12]); + OFFSET(KVM_ARCH_FP_D_F13, kvm_cpu_context, fp.d.f[13]); + OFFSET(KVM_ARCH_FP_D_F14, kvm_cpu_context, fp.d.f[14]); + OFFSET(KVM_ARCH_FP_D_F15, kvm_cpu_context, fp.d.f[15]); + OFFSET(KVM_ARCH_FP_D_F16, kvm_cpu_context, fp.d.f[16]); + OFFSET(KVM_ARCH_FP_D_F17, kvm_cpu_context, fp.d.f[17]); + OFFSET(KVM_ARCH_FP_D_F18, kvm_cpu_context, fp.d.f[18]); + OFFSET(KVM_ARCH_FP_D_F19, kvm_cpu_context, fp.d.f[19]); + OFFSET(KVM_ARCH_FP_D_F20, kvm_cpu_context, fp.d.f[20]); + OFFSET(KVM_ARCH_FP_D_F21, kvm_cpu_context, fp.d.f[21]); + OFFSET(KVM_ARCH_FP_D_F22, kvm_cpu_context, fp.d.f[22]); + OFFSET(KVM_ARCH_FP_D_F23, kvm_cpu_context, fp.d.f[23]); + OFFSET(KVM_ARCH_FP_D_F24, kvm_cpu_context, fp.d.f[24]); + OFFSET(KVM_ARCH_FP_D_F25, kvm_cpu_context, fp.d.f[25]); + OFFSET(KVM_ARCH_FP_D_F26, kvm_cpu_context, fp.d.f[26]); + OFFSET(KVM_ARCH_FP_D_F27, kvm_cpu_context, fp.d.f[27]); + OFFSET(KVM_ARCH_FP_D_F28, kvm_cpu_context, fp.d.f[28]); + OFFSET(KVM_ARCH_FP_D_F29, kvm_cpu_context, fp.d.f[29]); + OFFSET(KVM_ARCH_FP_D_F30, kvm_cpu_context, fp.d.f[30]); + OFFSET(KVM_ARCH_FP_D_F31, kvm_cpu_context, fp.d.f[31]); + OFFSET(KVM_ARCH_FP_D_FCSR, kvm_cpu_context, fp.d.fcsr); + /* * THREAD_{F,X}* might be larger than a S-type offset can handle, but * these are used in performance-sensitive assembly so we can't resort diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index ed1f06b17953..82719ada3baa 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -31,6 +31,72 @@ struct kvm_stats_debugfs_item debugfs_entries[] = { { NULL } }; +#ifdef CONFIG_FPU +static void kvm_riscv_vcpu_fp_reset(struct kvm_vcpu *vcpu) +{ + unsigned long isa = vcpu->arch.isa; + struct kvm_cpu_context *cntx = &vcpu->arch.guest_context; + + cntx->sstatus &= ~SR_FS; + if ((riscv_isa_extension_available(F) && (isa & RISCV_ISA_EXT_F)) || + (riscv_isa_extension_available(D) && (isa & RISCV_ISA_EXT_D))) + cntx->sstatus |= SR_FS_INITIAL; + else + cntx->sstatus |= SR_FS_OFF; +} + +static void kvm_riscv_vcpu_fp_clean(struct kvm_cpu_context *cntx) +{ + cntx->sstatus &= ~SR_FS; + cntx->sstatus |= SR_FS_CLEAN; +} + +static void kvm_riscv_vcpu_guest_fp_save(struct kvm_cpu_context *cntx) +{ + if ((cntx->sstatus & SR_FS) == SR_FS_DIRTY) { + if (riscv_isa_extension_available(D)) + __kvm_riscv_vcpu_fp_d_save(cntx); + else if (riscv_isa_extension_available(F)) + __kvm_riscv_vcpu_fp_f_save(cntx); + kvm_riscv_vcpu_fp_clean(cntx); + } +} + +static void kvm_riscv_vcpu_guest_fp_restore(struct kvm_cpu_context *cntx) +{ + if ((cntx->sstatus & SR_FS) != SR_FS_OFF) { + if (riscv_isa_extension_available(D)) + __kvm_riscv_vcpu_fp_d_restore(cntx); + else if (riscv_isa_extension_available(F)) + __kvm_riscv_vcpu_fp_f_restore(cntx); + kvm_riscv_vcpu_fp_clean(cntx); + } +} + +static void kvm_riscv_vcpu_host_fp_save(struct 
kvm_cpu_context *cntx) +{ + /* No need to check host sstatus as it can be modified outside */ + if (riscv_isa_extension_available(D)) + __kvm_riscv_vcpu_fp_d_save(cntx); + else if (riscv_isa_extension_available(F)) + __kvm_riscv_vcpu_fp_f_save(cntx); +} + +static void kvm_riscv_vcpu_host_fp_restore(struct kvm_cpu_context *cntx) +{ + if (riscv_isa_extension_available(D)) + __kvm_riscv_vcpu_fp_d_restore(cntx); + else if (riscv_isa_extension_available(F)) + __kvm_riscv_vcpu_fp_f_restore(cntx); +} +#else +static void kvm_riscv_vcpu_fp_reset(struct kvm_vcpu *vcpu) {} +static void kvm_riscv_vcpu_guest_fp_save(struct kvm_cpu_context *cntx) {} +static void kvm_riscv_vcpu_guest_fp_restore(struct kvm_cpu_context *cntx) {} +static void kvm_riscv_vcpu_host_fp_save(struct kvm_cpu_context *cntx) {} +static void kvm_riscv_vcpu_host_fp_restore(struct kvm_cpu_context *cntx) {} +#endif + #define KVM_RISCV_ISA_ALLOWED (RISCV_ISA_EXT_A | \ RISCV_ISA_EXT_C | \ RISCV_ISA_EXT_D | \ @@ -52,6 +118,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) memcpy(cntx, reset_cntx, sizeof(*cntx)); + kvm_riscv_vcpu_fp_reset(vcpu); + kvm_riscv_vcpu_timer_reset(vcpu); raw_spin_lock_irqsave(&vcpu->arch.irqs_lock, f); @@ -247,6 +315,7 @@ static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu, vcpu->arch.isa = reg_val; vcpu->arch.isa &= riscv_isa; vcpu->arch.isa &= KVM_RISCV_ISA_ALLOWED; + kvm_riscv_vcpu_fp_reset(vcpu); } else { return -ENOTSUPP; } @@ -566,6 +635,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) csr_write(CSR_VSIP, csr->vsip); csr_write(CSR_VSATP, csr->vsatp); + kvm_riscv_vcpu_host_fp_save(&vcpu->arch.host_context); + kvm_riscv_vcpu_guest_fp_restore(&vcpu->arch.guest_context); + kvm_riscv_stage2_update_pgtbl(vcpu); vcpu->cpu = cpu; @@ -577,6 +649,9 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) vcpu->cpu = -1; + kvm_riscv_vcpu_guest_fp_save(&vcpu->arch.guest_context); + kvm_riscv_vcpu_host_fp_restore(&vcpu->arch.host_context); + csr_write(CSR_HGATP, 0); csr_write(CSR_HIDELEG, 0); csr_write(CSR_HEDELEG, 0); diff --git a/arch/riscv/kvm/vcpu_switch.S b/arch/riscv/kvm/vcpu_switch.S index c5b85605bf73..4ad337ea34c2 100644 --- a/arch/riscv/kvm/vcpu_switch.S +++ b/arch/riscv/kvm/vcpu_switch.S @@ -191,3 +191,177 @@ __kvm_switch_return: /* Return to C code */ ret ENDPROC(__kvm_riscv_switch_to) + +#ifdef CONFIG_FPU + .align 3 + .global __kvm_riscv_vcpu_fp_f_save +__kvm_riscv_vcpu_fp_f_save: + csrr t2, CSR_SSTATUS + li t1, SR_FS + csrs CSR_SSTATUS, t1 + frcsr t0 + fsw f0, KVM_ARCH_FP_F_F0(a0) + fsw f1, KVM_ARCH_FP_F_F1(a0) + fsw f2, KVM_ARCH_FP_F_F2(a0) + fsw f3, KVM_ARCH_FP_F_F3(a0) + fsw f4, KVM_ARCH_FP_F_F4(a0) + fsw f5, KVM_ARCH_FP_F_F5(a0) + fsw f6, KVM_ARCH_FP_F_F6(a0) + fsw f7, KVM_ARCH_FP_F_F7(a0) + fsw f8, KVM_ARCH_FP_F_F8(a0) + fsw f9, KVM_ARCH_FP_F_F9(a0) + fsw f10, KVM_ARCH_FP_F_F10(a0) + fsw f11, KVM_ARCH_FP_F_F11(a0) + fsw f12, KVM_ARCH_FP_F_F12(a0) + fsw f13, KVM_ARCH_FP_F_F13(a0) + fsw f14, KVM_ARCH_FP_F_F14(a0) + fsw f15, KVM_ARCH_FP_F_F15(a0) + fsw f16, KVM_ARCH_FP_F_F16(a0) + fsw f17, KVM_ARCH_FP_F_F17(a0) + fsw f18, KVM_ARCH_FP_F_F18(a0) + fsw f19, KVM_ARCH_FP_F_F19(a0) + fsw f20, KVM_ARCH_FP_F_F20(a0) + fsw f21, KVM_ARCH_FP_F_F21(a0) + fsw f22, KVM_ARCH_FP_F_F22(a0) + fsw f23, KVM_ARCH_FP_F_F23(a0) + fsw f24, KVM_ARCH_FP_F_F24(a0) + fsw f25, KVM_ARCH_FP_F_F25(a0) + fsw f26, KVM_ARCH_FP_F_F26(a0) + fsw f27, KVM_ARCH_FP_F_F27(a0) + fsw f28, KVM_ARCH_FP_F_F28(a0) + fsw f29, KVM_ARCH_FP_F_F29(a0) + fsw f30, KVM_ARCH_FP_F_F30(a0) + fsw f31, KVM_ARCH_FP_F_F31(a0) + sw t0, 
KVM_ARCH_FP_F_FCSR(a0) + csrw CSR_SSTATUS, t2 + ret + + .align 3 + .global __kvm_riscv_vcpu_fp_d_save +__kvm_riscv_vcpu_fp_d_save: + csrr t2, CSR_SSTATUS + li t1, SR_FS + csrs CSR_SSTATUS, t1 + frcsr t0 + fsd f0, KVM_ARCH_FP_D_F0(a0) + fsd f1, KVM_ARCH_FP_D_F1(a0) + fsd f2, KVM_ARCH_FP_D_F2(a0) + fsd f3, KVM_ARCH_FP_D_F3(a0) + fsd f4, KVM_ARCH_FP_D_F4(a0) + fsd f5, KVM_ARCH_FP_D_F5(a0) + fsd f6, KVM_ARCH_FP_D_F6(a0) + fsd f7, KVM_ARCH_FP_D_F7(a0) + fsd f8, KVM_ARCH_FP_D_F8(a0) + fsd f9, KVM_ARCH_FP_D_F9(a0) + fsd f10, KVM_ARCH_FP_D_F10(a0) + fsd f11, KVM_ARCH_FP_D_F11(a0) + fsd f12, KVM_ARCH_FP_D_F12(a0) + fsd f13, KVM_ARCH_FP_D_F13(a0) + fsd f14, KVM_ARCH_FP_D_F14(a0) + fsd f15, KVM_ARCH_FP_D_F15(a0) + fsd f16, KVM_ARCH_FP_D_F16(a0) + fsd f17, KVM_ARCH_FP_D_F17(a0) + fsd f18, KVM_ARCH_FP_D_F18(a0) + fsd f19, KVM_ARCH_FP_D_F19(a0) + fsd f20, KVM_ARCH_FP_D_F20(a0) + fsd f21, KVM_ARCH_FP_D_F21(a0) + fsd f22, KVM_ARCH_FP_D_F22(a0) + fsd f23, KVM_ARCH_FP_D_F23(a0) + fsd f24, KVM_ARCH_FP_D_F24(a0) + fsd f25, KVM_ARCH_FP_D_F25(a0) + fsd f26, KVM_ARCH_FP_D_F26(a0) + fsd f27, KVM_ARCH_FP_D_F27(a0) + fsd f28, KVM_ARCH_FP_D_F28(a0) + fsd f29, KVM_ARCH_FP_D_F29(a0) + fsd f30, KVM_ARCH_FP_D_F30(a0) + fsd f31, KVM_ARCH_FP_D_F31(a0) + sw t0, KVM_ARCH_FP_D_FCSR(a0) + csrw CSR_SSTATUS, t2 + ret + + .align 3 + .global __kvm_riscv_vcpu_fp_f_restore +__kvm_riscv_vcpu_fp_f_restore: + csrr t2, CSR_SSTATUS + li t1, SR_FS + lw t0, KVM_ARCH_FP_F_FCSR(a0) + csrs CSR_SSTATUS, t1 + flw f0, KVM_ARCH_FP_F_F0(a0) + flw f1, KVM_ARCH_FP_F_F1(a0) + flw f2, KVM_ARCH_FP_F_F2(a0) + flw f3, KVM_ARCH_FP_F_F3(a0) + flw f4, KVM_ARCH_FP_F_F4(a0) + flw f5, KVM_ARCH_FP_F_F5(a0) + flw f6, KVM_ARCH_FP_F_F6(a0) + flw f7, KVM_ARCH_FP_F_F7(a0) + flw f8, KVM_ARCH_FP_F_F8(a0) + flw f9, KVM_ARCH_FP_F_F9(a0) + flw f10, KVM_ARCH_FP_F_F10(a0) + flw f11, KVM_ARCH_FP_F_F11(a0) + flw f12, KVM_ARCH_FP_F_F12(a0) + flw f13, KVM_ARCH_FP_F_F13(a0) + flw f14, KVM_ARCH_FP_F_F14(a0) + flw f15, KVM_ARCH_FP_F_F15(a0) + flw f16, KVM_ARCH_FP_F_F16(a0) + flw f17, KVM_ARCH_FP_F_F17(a0) + flw f18, KVM_ARCH_FP_F_F18(a0) + flw f19, KVM_ARCH_FP_F_F19(a0) + flw f20, KVM_ARCH_FP_F_F20(a0) + flw f21, KVM_ARCH_FP_F_F21(a0) + flw f22, KVM_ARCH_FP_F_F22(a0) + flw f23, KVM_ARCH_FP_F_F23(a0) + flw f24, KVM_ARCH_FP_F_F24(a0) + flw f25, KVM_ARCH_FP_F_F25(a0) + flw f26, KVM_ARCH_FP_F_F26(a0) + flw f27, KVM_ARCH_FP_F_F27(a0) + flw f28, KVM_ARCH_FP_F_F28(a0) + flw f29, KVM_ARCH_FP_F_F29(a0) + flw f30, KVM_ARCH_FP_F_F30(a0) + flw f31, KVM_ARCH_FP_F_F31(a0) + fscsr t0 + csrw CSR_SSTATUS, t2 + ret + + .align 3 + .global __kvm_riscv_vcpu_fp_d_restore +__kvm_riscv_vcpu_fp_d_restore: + csrr t2, CSR_SSTATUS + li t1, SR_FS + lw t0, KVM_ARCH_FP_D_FCSR(a0) + csrs CSR_SSTATUS, t1 + fld f0, KVM_ARCH_FP_D_F0(a0) + fld f1, KVM_ARCH_FP_D_F1(a0) + fld f2, KVM_ARCH_FP_D_F2(a0) + fld f3, KVM_ARCH_FP_D_F3(a0) + fld f4, KVM_ARCH_FP_D_F4(a0) + fld f5, KVM_ARCH_FP_D_F5(a0) + fld f6, KVM_ARCH_FP_D_F6(a0) + fld f7, KVM_ARCH_FP_D_F7(a0) + fld f8, KVM_ARCH_FP_D_F8(a0) + fld f9, KVM_ARCH_FP_D_F9(a0) + fld f10, KVM_ARCH_FP_D_F10(a0) + fld f11, KVM_ARCH_FP_D_F11(a0) + fld f12, KVM_ARCH_FP_D_F12(a0) + fld f13, KVM_ARCH_FP_D_F13(a0) + fld f14, KVM_ARCH_FP_D_F14(a0) + fld f15, KVM_ARCH_FP_D_F15(a0) + fld f16, KVM_ARCH_FP_D_F16(a0) + fld f17, KVM_ARCH_FP_D_F17(a0) + fld f18, KVM_ARCH_FP_D_F18(a0) + fld f19, KVM_ARCH_FP_D_F19(a0) + fld f20, KVM_ARCH_FP_D_F20(a0) + fld f21, KVM_ARCH_FP_D_F21(a0) + fld f22, KVM_ARCH_FP_D_F22(a0) + fld f23, KVM_ARCH_FP_D_F23(a0) + fld f24, KVM_ARCH_FP_D_F24(a0) + fld f25, 
KVM_ARCH_FP_D_F25(a0) + fld f26, KVM_ARCH_FP_D_F26(a0) + fld f27, KVM_ARCH_FP_D_F27(a0) + fld f28, KVM_ARCH_FP_D_F28(a0) + fld f29, KVM_ARCH_FP_D_F29(a0) + fld f30, KVM_ARCH_FP_D_F30(a0) + fld f31, KVM_ARCH_FP_D_F31(a0) + fscsr t0 + csrw CSR_SSTATUS, t2 + ret +#endif

From patchwork Mon Jul 29 11:57:54 2019
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 11063683
From: Anup Patel
To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K
Cc: Damien Le Moal , Anup Patel , "kvm@vger.kernel.org" , Anup Patel , Daniel Lezcano , "linux-kernel@vger.kernel.org" , Christoph Hellwig , Atish Patra , Alistair Francis , Thomas Gleixner , "linux-riscv@lists.infradead.org"
Subject: [RFC PATCH 15/16] RISC-V: KVM: Add SBI v0.1 support
Date: Mon, 29 Jul 2019 11:57:54 +0000
Message-ID: <20190729115544.17895-16-anup.patel@wdc.com>
References: <20190729115544.17895-1-anup.patel@wdc.com>
In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com>

From: Atish Patra

The KVM host kernel running in HS-mode needs to handle SBI calls coming from guest kernel running in VS-mode. This patch adds SBI v0.1 support in KVM RISC-V. All the SBI calls are implemented correctly except remote tlb flushes. For remote TLB flushes, we are doing full TLB flush and this will be optimized in future.
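Before the diff, a compact stand-alone sketch of the trap-and-dispatch flow described above. This is an illustration built on a simplified guest register context, not the kernel implementation; the call numbers follow the SBI v0.1 legacy specification. The call number arrives in a7, arguments in a0/a1, the result goes back in a0, and sepc must be advanced past the 4-byte ecall so the guest resumes after the call instead of re-trapping on it.

/*
 * Stand-alone sketch of SBI v0.1 dispatch -- illustration only, with a
 * simplified guest context; not the kernel code.
 */
#include <stdio.h>

#define SBI_SET_TIMER		0	/* SBI v0.1 legacy call numbers */
#define SBI_CONSOLE_PUTCHAR	1
#define SBI_CONSOLE_GETCHAR	2
#define SBI_SHUTDOWN		8

struct guest_ctx {
	unsigned long a0, a1, a7;	/* argument/result registers */
	unsigned long sepc;		/* guest PC of the trapping ecall */
};

/* Returns 1 to keep running the guest, 0 to exit to user space. */
static int handle_sbi_ecall(struct guest_ctx *cp)
{
	int ret = 1;

	switch (cp->a7) {
	case SBI_SET_TIMER:
		printf("program next guest timer event at cycle %lu\n", cp->a0);
		break;
	case SBI_CONSOLE_PUTCHAR:
	case SBI_CONSOLE_GETCHAR:
		cp->a0 = (unsigned long)-1;	/* not implemented in this sketch */
		break;
	case SBI_SHUTDOWN:
		ret = 0;			/* hand a system event to user space */
		break;
	default:
		cp->a0 = (unsigned long)-1;	/* unknown call */
		break;
	}

	if (ret >= 0)
		cp->sepc += 4;			/* skip the ecall instruction */

	return ret;
}

int main(void)
{
	struct guest_ctx cp = { .a0 = 123456, .a7 = SBI_SET_TIMER, .sepc = 0x80200000UL };

	handle_sbi_ecall(&cp);
	printf("guest resumes at 0x%lx\n", cp.sepc);
	return 0;
}

As in the patch, the console putchar/getchar calls simply return an error from the in-kernel handler (they are marked "Not implemented" there), and a shutdown call is turned into an exit back to user space.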
Signed-off-by: Atish Patra Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_host.h | 2 + arch/riscv/kvm/Makefile | 2 +- arch/riscv/kvm/vcpu_exit.c | 3 + arch/riscv/kvm/vcpu_sbi.c | 118 ++++++++++++++++++++++++++++++ 4 files changed, 124 insertions(+), 1 deletion(-) create mode 100644 arch/riscv/kvm/vcpu_sbi.c diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 1bb4befa89da..22a62ffc09f5 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -227,4 +227,6 @@ void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu); void kvm_riscv_halt_guest(struct kvm *kvm); void kvm_riscv_resume_guest(struct kvm *kvm); +int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu); + #endif /* __RISCV_KVM_HOST_H__ */ diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 3e0c7558320d..b56dc1650d2c 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -9,6 +9,6 @@ ccflags-y := -Ivirt/kvm -Iarch/riscv/kvm kvm-objs := $(common-objs-y) kvm-objs += main.o vm.o vmid.o tlb.o mmu.o -kvm-objs += vcpu.o vcpu_exit.o vcpu_switch.o vcpu_timer.o +kvm-objs += vcpu.o vcpu_exit.o vcpu_switch.o vcpu_timer.o vcpu_sbi.o obj-$(CONFIG_KVM) += kvm.o diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index 2d09640c98b2..003e43facdfc 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -531,6 +531,9 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, (vcpu->arch.guest_context.hstatus & HSTATUS_STL)) ret = stage2_page_fault(vcpu, run, scause, stval); break; + case EXC_SUPERVISOR_SYSCALL: + if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) + ret = kvm_riscv_vcpu_sbi_ecall(vcpu); default: break; }; diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c new file mode 100644 index 000000000000..8dfdbf744378 --- /dev/null +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -0,0 +1,118 @@ +// SPDX-License-Identifier: GPL-2.0 +/** + * Copyright (c) 2019 Western Digital Corporation or its affiliates. 
+ * + * Authors: + * Atish Patra + */ + +#include +#include +#include +#include +#include + +#define SBI_VERSION_MAJOR 0 +#define SBI_VERSION_MINOR 1 + +static unsigned long kvm_sbi_unpriv_load(const unsigned long *addr, + struct kvm_vcpu *vcpu) +{ + unsigned long flags, val; + unsigned long __hstatus, __sstatus; + + local_irq_save(flags); + __hstatus = csr_read(CSR_HSTATUS); + __sstatus = csr_read(CSR_SSTATUS); + csr_write(CSR_HSTATUS, vcpu->arch.guest_context.hstatus | HSTATUS_SPRV); + csr_write(CSR_SSTATUS, vcpu->arch.guest_context.sstatus); + val = *addr; + csr_write(CSR_HSTATUS, __hstatus); + csr_write(CSR_SSTATUS, __sstatus); + local_irq_restore(flags); + + return val; +} + +static void kvm_sbi_system_shutdown(struct kvm_vcpu *vcpu, u32 type) +{ + int i; + struct kvm_vcpu *tmp; + + kvm_for_each_vcpu(i, tmp, vcpu->kvm) + tmp->arch.power_off = true; + kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP); + + memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event)); + vcpu->run->system_event.type = type; + vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT; +} + +int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu) +{ + int ret = 1; + u64 next_cycle; + int vcpuid; + struct kvm_vcpu *remote_vcpu; + ulong dhart_mask; + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + + if (!cp) + return -EINVAL; + switch (cp->a7) { + case SBI_SET_TIMER: +#if __riscv_xlen == 32 + next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0; +#else + next_cycle = (u64)cp->a0; +#endif + kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle); + break; + case SBI_CONSOLE_PUTCHAR: + /* Not implemented */ + cp->a0 = -ENOTSUPP; + break; + case SBI_CONSOLE_GETCHAR: + /* Not implemented */ + cp->a0 = -ENOTSUPP; + break; + case SBI_CLEAR_IPI: + kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_S_SOFT); + break; + case SBI_SEND_IPI: + dhart_mask = kvm_sbi_unpriv_load((unsigned long *)cp->a0, vcpu); + for_each_set_bit(vcpuid, &dhart_mask, BITS_PER_LONG) { + remote_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, vcpuid); + kvm_riscv_vcpu_set_interrupt(remote_vcpu, IRQ_S_SOFT); + } + break; + case SBI_SHUTDOWN: + kvm_sbi_system_shutdown(vcpu, KVM_SYSTEM_EVENT_SHUTDOWN); + ret = 0; + break; + case SBI_REMOTE_FENCE_I: + sbi_remote_fence_i(NULL); + break; + + /*TODO:There should be a way to call remote hfence.bvma. + * Preferred method is now a SBI call. Until then, just flush + * all tlbs. 
+ */ + case SBI_REMOTE_SFENCE_VMA: + /*TODO: Parse vma range.*/ + sbi_remote_sfence_vma(NULL, 0, 0); + break; + case SBI_REMOTE_SFENCE_VMA_ASID: + /*TODO: Parse vma range for given ASID */ + sbi_remote_sfence_vma(NULL, 0, 0); + break; + default: + cp->a0 = ENOTSUPP; + break; + }; + + if (ret >= 0) + cp->sepc += 4; + + return ret; +} From patchwork Mon Jul 29 11:58:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 11063689 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3B5E714E5 for ; Mon, 29 Jul 2019 11:58:12 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 282A720069 for ; Mon, 29 Jul 2019 11:58:12 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 17C6820408; Mon, 29 Jul 2019 11:58:12 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED,UPPERCASE_50_75 autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 3B3B220069 for ; Mon, 29 Jul 2019 11:58:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:In-Reply-To:References: Message-ID:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=MfJcspAmznPSMx9hePMzyufgI5+urU2ukPcsN8xVCCw=; b=Q+5AH1l9nezFA5 TzhtZvI1NABdjGcbx2FHQK93Bt05s5XilSu7i7rKKDZazDNrC0IYdc2ETg5fJNp4Wxe/o14wZaf5K goJPy+mEkl3EcFAzl9FVXfxWlQirtsDjX+KVQe6OkM27v3YJAjnl5zKH5EUXCzJE5IfefZ+tdnwDi gCIlsUDqbw0OA52PyduYHkEXytZJQLeMoKq6cSBvvLMMRT6MqRB3csvsKC6e8QNlvSwGR4NlEUAPg 9L5QoYEiX6OaJ2Vwahxyfsb9J8PZyvSNcfmvUercfTLtl4VUVLUiJgdCZ5+PQH7QFzro+LWTeCPWE bVDlGjwywW9gLP2/O7cw==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hs4I3-0003Kq-8C; Mon, 29 Jul 2019 11:58:07 +0000 Received: from esa1.hgst.iphmx.com ([68.232.141.245]) by bombadil.infradead.org with esmtps (Exim 4.92 #3 (Red Hat Linux)) id 1hs4Hz-0003J5-M5 for linux-riscv@lists.infradead.org; Mon, 29 Jul 2019 11:58:05 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1564401483; x=1595937483; h=from:to:cc:subject:date:message-id:references: in-reply-to:content-transfer-encoding:mime-version; bh=3sVQD24PwdMT+/uCMiIrGy+yEC76O3ceO712hzdkhZM=; b=eNcx97fCDcZrkdF4xQcGwiZ3EdXyLFu0GZ0d61+XTXOw5hFnSSw35N61 /4OSaWE0kBL45Y6G+BSoGAuUM9iJWHZcJ4vewn3RI1tV+KBT0wBcmTwIh FDTezgJZeG+Jetm/MDKURzcV1I2Sb6pLv2jO5/oPzn4tXqnUPikzpKzBs MtfGjSc4vV1AJ5m1Tag0A7SPJzeETxIoZxmR/0kg5CNU7H+KiwWRCnikz VFEhBbYyTr5HrQTpq8vLHKTcb+AVeJhlR0No2aFeBijLhHYIo0RoEcOwa Lz6mhShOo4K2N/DGZhjtzJCN8750vXlX12Ihkym/P+GB8gistNyjZEWM5 w==; IronPort-SDR: 
a3ZJ3Ifk9fHCuDwwHh3guDYF9S3/uGewddLc2oQR1e2eJ12T/f0EJhPhpgAAyQtUZY8/spGO+M GeZgape7WKBQ8kySv1F85soh+vSwn10LAA0Fy9zgQL2DBdD1Ur9Mf4FPZYqyw2UerFO5s1cS8Y Hv/jPTLnjxITdXfm5GDuSyByq9npxdLSIvvL5RRe0TMxAASCniD0lLTkgNKmXvYEmF6Xk/Nq6t I1iMCsp3tdE1LERWT2oBbITlFqMIIU7/AaGTToehwbXWLlx/jr19P+06cZZaJMmrcuKGQ9eVeh KvE= X-IronPort-AV: E=Sophos;i="5.64,322,1559491200"; d="scan'208";a="220843445" Received: from mail-bn3nam04lp2055.outbound.protection.outlook.com (HELO NAM04-BN3-obe.outbound.protection.outlook.com) ([104.47.46.55]) by ob1.hgst.iphmx.com with ESMTP; 29 Jul 2019 19:58:01 +0800 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=UjAOfYkMw4vgQ3ISP0fv74UGR3XrV/YEdAOk85zFowbmxg86Zhq3aIo25f+iv5mcxJE3WQDIaQzhzuSTyxLvJ1Yd0wkJrkJbCfXvQ/M0FZlAju/B6SuFqWb90CdEhI4AH9leGwANccvm2y+19g1abEpAiX851+m/eF+nIglNYiEXUmrKPwhrxgzdTpSJixTJPVqswZXQlT+w44KweowiFUUQKjt0j+VMDDhNU2S1LzH0Lm4UPvB8ZBj9rmjFzE8Vo8scxAb50Xhc6pSvv/4EmrtlkytpT0o8rW6UjCNORRjpnYkZ9WLJeenLPPZIYCGzJk6B5tiX763MibiEa0qsoQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=ot1gvoL1ac5Bhj8HPN2EC5ZSgrmJBFnsvONQ2G1Vovk=; b=U7MZaMgHIVK/EC9Xm+LANohxDYJIpavvUmiuK+UPYIE+ttzf0G5Pz9fTXrx+ckG0Ozmm6kDJM3zuPXDB7vclgJbLFGLHMugsqVUPScnHjzzq2sVHYfo++FwzGWe58OTspBVM/r/PJDbDO7VfnqAe99LiOSWa5HGztZLA/kSIp7VqtglzsvS++x0oQkC9SKNr5ojeGdZGSQ55xp8fgI2qdOAUTD3rzOaxOp0fd8NVqtltFlhB0By/tg3PgmJNH3+kM0qquYuLBKt4E2dh+x04mUE2RESQeDxtN4uPYgTWt+4GUpltlAmogGrQ2eqsdGoN5dA9SffdVR9tIZL+/hFA7g== ARC-Authentication-Results: i=1; mx.microsoft.com 1;spf=pass smtp.mailfrom=wdc.com;dmarc=pass action=none header.from=wdc.com;dkim=pass header.d=wdc.com;arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=ot1gvoL1ac5Bhj8HPN2EC5ZSgrmJBFnsvONQ2G1Vovk=; b=K10vqZHjEGsGRpmmGuBokuJ6iYPthe67ws9ffibPbu1QlXn2FKoZyvbbViEjD/G4OBXZs9vuEK6sT/nsLaQuacsbImQOl5HpTdhAPzUL17anC/IqnXEh72mgUk67SSPNVUTuUjQME+9DfOSxTitm0ieno2Umdjxz5FcF/b2HMAg= Received: from MN2PR04MB6061.namprd04.prod.outlook.com (20.178.246.15) by MN2PR04MB6208.namprd04.prod.outlook.com (20.178.248.211) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2115.13; Mon, 29 Jul 2019 11:58:00 +0000 Received: from MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8]) by MN2PR04MB6061.namprd04.prod.outlook.com ([fe80::a815:e61a:b4aa:60c8%7]) with mapi id 15.20.2115.005; Mon, 29 Jul 2019 11:58:00 +0000 From: Anup Patel To: Palmer Dabbelt , Paul Walmsley , Paolo Bonzini , Radim K Subject: [RFC PATCH 16/16] RISC-V: Enable VIRTIO drivers in RV64 and RV32 defconfig Thread-Topic: [RFC PATCH 16/16] RISC-V: Enable VIRTIO drivers in RV64 and RV32 defconfig Thread-Index: AQHVRgTePmcBp6xfkUSkbD7A64bxsA== Date: Mon, 29 Jul 2019 11:58:00 +0000 Message-ID: <20190729115544.17895-17-anup.patel@wdc.com> References: <20190729115544.17895-1-anup.patel@wdc.com> In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-clientproxiedby: PN1PR01CA0116.INDPRD01.PROD.OUTLOOK.COM (2603:1096:c00::32) To MN2PR04MB6061.namprd04.prod.outlook.com (2603:10b6:208:d8::15) authentication-results: spf=none (sender IP is ) smtp.mailfrom=Anup.Patel@wdc.com; x-ms-exchange-messagesentrepresentingtype: 1 x-mailer: 
Cc: Damien Le Moal , Anup Patel , "kvm@vger.kernel.org" , Anup Patel , Daniel Lezcano , "linux-kernel@vger.kernel.org" , Christoph Hellwig , Atish Patra , Alistair Francis , Thomas Gleixner , "linux-riscv@lists.infradead.org"

This patch enables more VIRTIO drivers (console, rpmsg, 9p, rng, etc.) which are usable by KVM RISC-V and Xvisor RISC-V guests.
Signed-off-by: Anup Patel
---
 arch/riscv/configs/defconfig      | 23 ++++++++++++++++++-----
 arch/riscv/configs/rv32_defconfig | 13 +++++++++++++
 2 files changed, 31 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig
index b7b749b18853..420a0dbef386 100644
--- a/arch/riscv/configs/defconfig
+++ b/arch/riscv/configs/defconfig
@@ -29,15 +29,19 @@ CONFIG_IP_PNP_DHCP=y
 CONFIG_IP_PNP_BOOTP=y
 CONFIG_IP_PNP_RARP=y
 CONFIG_NETLINK_DIAG=y
+CONFIG_NET_9P=y
+CONFIG_NET_9P_VIRTIO=y
 CONFIG_PCI=y
 CONFIG_PCIEPORTBUS=y
 CONFIG_PCI_HOST_GENERIC=y
 CONFIG_PCIE_XILINX=y
 CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_VIRTIO_BLK=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_BLK_DEV_SR=y
+CONFIG_SCSI_VIRTIO=y
 CONFIG_ATA=y
 CONFIG_SATA_AHCI=y
 CONFIG_SATA_AHCI_PLATFORM=y
@@ -53,9 +57,15 @@ CONFIG_SERIAL_8250_CONSOLE=y
 CONFIG_SERIAL_OF_PLATFORM=y
 CONFIG_SERIAL_EARLYCON_RISCV_SBI=y
 CONFIG_HVC_RISCV_SBI=y
+CONFIG_VIRTIO_CONSOLE=y
+CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_VIRTIO=y
+CONFIG_SPI=y
+CONFIG_SPI_SIFIVE=y
 # CONFIG_PTP_1588_CLOCK is not set
 CONFIG_DRM=y
 CONFIG_DRM_RADEON=y
+CONFIG_DRM_VIRTIO_GPU=y
 CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_USB=y
 CONFIG_USB_XHCI_HCD=y
@@ -66,8 +76,14 @@ CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_OHCI_HCD_PLATFORM=y
 CONFIG_USB_STORAGE=y
 CONFIG_USB_UAS=y
+CONFIG_MMC=y
+CONFIG_MMC_SPI=y
+CONFIG_VIRTIO_PCI=y
+CONFIG_VIRTIO_BALLOON=y
+CONFIG_VIRTIO_INPUT=y
 CONFIG_VIRTIO_MMIO=y
-CONFIG_SPI_SIFIVE=y
+CONFIG_RPMSG_CHAR=y
+CONFIG_RPMSG_VIRTIO=y
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
 CONFIG_AUTOFS4_FS=y
@@ -80,11 +96,8 @@ CONFIG_NFS_V4=y
 CONFIG_NFS_V4_1=y
 CONFIG_NFS_V4_2=y
 CONFIG_ROOT_NFS=y
+CONFIG_9P_FS=y
 CONFIG_CRYPTO_USER_API_HASH=y
 CONFIG_CRYPTO_DEV_VIRTIO=y
 CONFIG_PRINTK_TIME=y
-CONFIG_SPI=y
-CONFIG_MMC_SPI=y
-CONFIG_MMC=y
-CONFIG_DEVTMPFS_MOUNT=y
 # CONFIG_RCU_TRACE is not set
diff --git a/arch/riscv/configs/rv32_defconfig b/arch/riscv/configs/rv32_defconfig
index d5449ef805a3..b28267404d55 100644
--- a/arch/riscv/configs/rv32_defconfig
+++ b/arch/riscv/configs/rv32_defconfig
@@ -29,6 +29,8 @@ CONFIG_IP_PNP_DHCP=y
 CONFIG_IP_PNP_BOOTP=y
 CONFIG_IP_PNP_RARP=y
 CONFIG_NETLINK_DIAG=y
+CONFIG_NET_9P=y
+CONFIG_NET_9P_VIRTIO=y
 CONFIG_PCI=y
 CONFIG_PCIEPORTBUS=y
 CONFIG_PCI_HOST_GENERIC=y
@@ -38,6 +40,7 @@ CONFIG_BLK_DEV_LOOP=y
 CONFIG_VIRTIO_BLK=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_BLK_DEV_SR=y
+CONFIG_SCSI_VIRTIO=y
 CONFIG_ATA=y
 CONFIG_SATA_AHCI=y
 CONFIG_SATA_AHCI_PLATFORM=y
@@ -53,9 +56,13 @@ CONFIG_SERIAL_8250_CONSOLE=y
 CONFIG_SERIAL_OF_PLATFORM=y
 CONFIG_SERIAL_EARLYCON_RISCV_SBI=y
 CONFIG_HVC_RISCV_SBI=y
+CONFIG_VIRTIO_CONSOLE=y
+CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_VIRTIO=y
 # CONFIG_PTP_1588_CLOCK is not set
 CONFIG_DRM=y
 CONFIG_DRM_RADEON=y
+CONFIG_DRM_VIRTIO_GPU=y
 CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_USB=y
 CONFIG_USB_XHCI_HCD=y
@@ -66,7 +73,12 @@ CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_OHCI_HCD_PLATFORM=y
 CONFIG_USB_STORAGE=y
 CONFIG_USB_UAS=y
+CONFIG_VIRTIO_PCI=y
+CONFIG_VIRTIO_BALLOON=y
+CONFIG_VIRTIO_INPUT=y
 CONFIG_VIRTIO_MMIO=y
+CONFIG_RPMSG_CHAR=y
+CONFIG_RPMSG_VIRTIO=y
 CONFIG_SIFIVE_PLIC=y
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
@@ -80,6 +92,7 @@ CONFIG_NFS_V4=y
 CONFIG_NFS_V4_1=y
 CONFIG_NFS_V4_2=y
 CONFIG_ROOT_NFS=y
+CONFIG_9P_FS=y
 CONFIG_CRYPTO_USER_API_HASH=y
 CONFIG_CRYPTO_DEV_VIRTIO=y
 CONFIG_PRINTK_TIME=y
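
As a usage illustration (not part of the patch): with CONFIG_HW_RANDOM and CONFIG_HW_RANDOM_VIRTIO enabled above, a guest that is given a virtio-rng device by the host exposes it as /dev/hwrng, and a minimal user-space check could look like the sketch below (file name and buffer size are arbitrary choices for the example).

/* Hedged user-space sketch: read a few bytes from the virtio-rng backed
 * /dev/hwrng inside the guest to confirm the driver is bound. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned char buf[16];
	ssize_t n;
	int fd = open("/dev/hwrng", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/hwrng");
		return 1;
	}
	n = read(fd, buf, sizeof(buf));
	if (n <= 0) {
		perror("read /dev/hwrng");
		close(fd);
		return 1;
	}
	close(fd);
	for (ssize_t i = 0; i < n; i++)
		printf("%02x", buf[i]);
	printf("\n");
	return 0;
}

Similarly, the CONFIG_NET_9P_VIRTIO and CONFIG_9P_FS options let the guest mount a host-shared directory with something like "mount -t 9p -o trans=virtio <tag> /mnt", where <tag> is whatever mount tag the host attaches to the virtio-9p device.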