From patchwork Tue Jan 10 17:50:51 2023
Subject: [PATCH V6 1/7] KVM: selftests: sparsebit: add const where appropriate
From: Peter Gonda
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Michael Roth, Paolo Bonzini, Sean Christopherson, Vishal Annapurve, Ackerley Tng, Andrew Jones, Peter Gonda
Date: Tue, 10 Jan 2023 09:50:51 -0800
Message-Id: <20230110175057.715453-2-pgonda@google.com>
In-Reply-To: <20230110175057.715453-1-pgonda@google.com>
From: Michael Roth

Subsequent patches will introduce an encryption bitmap in kvm_util that tests will need to access read-only. This will be done via a const struct sparsebit *. To avoid warnings or the need to add casts everywhere, add const to the sparsebit functions that are applicable for read-only usage of sparsebit.

Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vishal Annapurve
Cc: Ackerley Tng
Cc: Andrew Jones
Signed-off-by: Michael Roth
Signed-off-by: Peter Gonda
---
 .../testing/selftests/kvm/include/sparsebit.h | 36 +++++++-------
 tools/testing/selftests/kvm/lib/sparsebit.c   | 48 +++++++++----------
 2 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/sparsebit.h b/tools/testing/selftests/kvm/include/sparsebit.h
index 12a9a4b9cead..fb5170d57fcb 100644
--- a/tools/testing/selftests/kvm/include/sparsebit.h
+++ b/tools/testing/selftests/kvm/include/sparsebit.h
@@ -30,26 +30,26 @@ typedef uint64_t sparsebit_num_t;
 struct sparsebit *sparsebit_alloc(void);
 void sparsebit_free(struct sparsebit **sbitp);
-void sparsebit_copy(struct sparsebit *dstp, struct sparsebit *src);
+void sparsebit_copy(struct sparsebit *dstp, const struct sparsebit *src);
 
-bool sparsebit_is_set(struct sparsebit *sbit, sparsebit_idx_t idx);
-bool sparsebit_is_set_num(struct sparsebit *sbit,
+bool sparsebit_is_set(const struct sparsebit *sbit, sparsebit_idx_t idx);
+bool sparsebit_is_set_num(const struct sparsebit *sbit,
 			  sparsebit_idx_t idx, sparsebit_num_t num);
-bool sparsebit_is_clear(struct sparsebit *sbit, sparsebit_idx_t idx);
-bool sparsebit_is_clear_num(struct sparsebit *sbit,
+bool sparsebit_is_clear(const struct sparsebit *sbit, sparsebit_idx_t idx);
+bool sparsebit_is_clear_num(const struct sparsebit *sbit,
 			    sparsebit_idx_t idx, sparsebit_num_t num);
-sparsebit_num_t sparsebit_num_set(struct sparsebit *sbit);
-bool sparsebit_any_set(struct sparsebit *sbit);
-bool sparsebit_any_clear(struct sparsebit *sbit);
-bool sparsebit_all_set(struct sparsebit *sbit);
-bool sparsebit_all_clear(struct sparsebit *sbit);
-sparsebit_idx_t sparsebit_first_set(struct sparsebit *sbit);
-sparsebit_idx_t sparsebit_first_clear(struct sparsebit *sbit);
-sparsebit_idx_t sparsebit_next_set(struct sparsebit *sbit, sparsebit_idx_t prev);
-sparsebit_idx_t sparsebit_next_clear(struct sparsebit *sbit, sparsebit_idx_t prev);
-sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *sbit,
+sparsebit_num_t sparsebit_num_set(const struct sparsebit *sbit);
+bool sparsebit_any_set(const struct sparsebit *sbit);
+bool sparsebit_any_clear(const struct sparsebit *sbit);
+bool sparsebit_all_set(const struct sparsebit *sbit);
+bool sparsebit_all_clear(const struct sparsebit *sbit);
+sparsebit_idx_t sparsebit_first_set(const struct sparsebit *sbit);
+sparsebit_idx_t sparsebit_first_clear(const struct sparsebit *sbit);
+sparsebit_idx_t sparsebit_next_set(const struct sparsebit *sbit, sparsebit_idx_t prev);
+sparsebit_idx_t sparsebit_next_clear(const struct sparsebit *sbit, sparsebit_idx_t prev);
+sparsebit_idx_t sparsebit_next_set_num(const struct sparsebit *sbit,
 				       sparsebit_idx_t start, sparsebit_num_t num);
-sparsebit_idx_t sparsebit_next_clear_num(struct sparsebit *sbit,
+sparsebit_idx_t sparsebit_next_clear_num(const struct sparsebit *sbit,
 					 sparsebit_idx_t start, sparsebit_num_t num);
 
 void sparsebit_set(struct sparsebit *sbitp, sparsebit_idx_t idx);
@@ -62,9 +62,9 @@ void sparsebit_clear_num(struct sparsebit *sbitp,
 			 sparsebit_idx_t start, sparsebit_num_t num);
 void sparsebit_clear_all(struct sparsebit *sbitp);
-void sparsebit_dump(FILE *stream, struct sparsebit *sbit,
+void sparsebit_dump(FILE *stream, const struct sparsebit *sbit,
 		    unsigned int indent);
-void sparsebit_validate_internal(struct sparsebit *sbit);
+void sparsebit_validate_internal(const struct sparsebit *sbit);
 
 #ifdef __cplusplus
 }

diff --git a/tools/testing/selftests/kvm/lib/sparsebit.c b/tools/testing/selftests/kvm/lib/sparsebit.c
index 50e0cf41a7dd..6777a5b1fbd2 100644
--- a/tools/testing/selftests/kvm/lib/sparsebit.c
+++ b/tools/testing/selftests/kvm/lib/sparsebit.c
@@ -202,7 +202,7 @@ static sparsebit_num_t node_num_set(struct node *nodep)
 /* Returns a pointer to the node that describes the
  * lowest bit index.
  */
-static struct node *node_first(struct sparsebit *s)
+static struct node *node_first(const struct sparsebit *s)
 {
 	struct node *nodep;
@@ -216,7 +216,7 @@ static struct node *node_first(struct sparsebit *s)
  * lowest bit index > the index of the node pointed to by np.
  * Returns NULL if no node with a higher index exists.
  */
-static struct node *node_next(struct sparsebit *s, struct node *np)
+static struct node *node_next(const struct sparsebit *s, struct node *np)
 {
 	struct node *nodep = np;
@@ -244,7 +244,7 @@ static struct node *node_next(struct sparsebit *s, struct node *np)
  * highest index < the index of the node pointed to by np.
  * Returns NULL if no node with a lower index exists.
  */
-static struct node *node_prev(struct sparsebit *s, struct node *np)
+static struct node *node_prev(const struct sparsebit *s, struct node *np)
 {
 	struct node *nodep = np;
@@ -273,7 +273,7 @@ static struct node *node_prev(struct sparsebit *s, struct node *np)
  * subtree and duplicates the bit settings to the newly allocated nodes.
  * Returns the newly allocated copy of subtree.
  */
-static struct node *node_copy_subtree(struct node *subtree)
+static struct node *node_copy_subtree(const struct node *subtree)
 {
 	struct node *root;
@@ -307,7 +307,7 @@ static struct node *node_copy_subtree(struct node *subtree)
  * index is within the bits described by the mask bits or the number of
  * contiguous bits set after the mask.  Returns NULL if there is no such node.
 */
-static struct node *node_find(struct sparsebit *s, sparsebit_idx_t idx)
+static struct node *node_find(const struct sparsebit *s, sparsebit_idx_t idx)
 {
 	struct node *nodep;
@@ -393,7 +393,7 @@ static struct node *node_add(struct sparsebit *s, sparsebit_idx_t idx)
 }
 
 /* Returns whether all the bits in the sparsebit array are set. */
-bool sparsebit_all_set(struct sparsebit *s)
+bool sparsebit_all_set(const struct sparsebit *s)
 {
 	/*
 	 * If any nodes there must be at least one bit set.  Only case
@@ -776,7 +776,7 @@ static void node_reduce(struct sparsebit *s, struct node *nodep)
 /* Returns whether the bit at the index given by idx, within the
  * sparsebit array is set or not.
  */
-bool sparsebit_is_set(struct sparsebit *s, sparsebit_idx_t idx)
+bool sparsebit_is_set(const struct sparsebit *s, sparsebit_idx_t idx)
 {
 	struct node *nodep;
@@ -922,7 +922,7 @@ static inline sparsebit_idx_t node_first_clear(struct node *nodep, int start)
  * used by test cases after they detect an unexpected condition, as a means
  * to capture diagnostic information.
 */
-static void sparsebit_dump_internal(FILE *stream, struct sparsebit *s,
+static void sparsebit_dump_internal(FILE *stream, const struct sparsebit *s,
 				    unsigned int indent)
 {
 	/* Dump the contents of s */
@@ -970,7 +970,7 @@ void sparsebit_free(struct sparsebit **sbitp)
 * sparsebit_alloc().  It can though already have bits set, which
 * if different from src will be cleared.
 */
-void sparsebit_copy(struct sparsebit *d, struct sparsebit *s)
+void sparsebit_copy(struct sparsebit *d, const struct sparsebit *s)
 {
 	/* First clear any bits already set in the destination */
 	sparsebit_clear_all(d);
@@ -982,7 +982,7 @@ void sparsebit_copy(struct sparsebit *d, struct sparsebit *s)
 }
 
 /* Returns whether num consecutive bits starting at idx are all set. */
-bool sparsebit_is_set_num(struct sparsebit *s,
+bool sparsebit_is_set_num(const struct sparsebit *s,
 			  sparsebit_idx_t idx, sparsebit_num_t num)
 {
 	sparsebit_idx_t next_cleared;
@@ -1006,14 +1006,14 @@ bool sparsebit_is_set_num(struct sparsebit *s,
 }
 
 /* Returns whether the bit at the index given by idx. */
-bool sparsebit_is_clear(struct sparsebit *s,
+bool sparsebit_is_clear(const struct sparsebit *s,
 			sparsebit_idx_t idx)
 {
 	return !sparsebit_is_set(s, idx);
 }
 
 /* Returns whether num consecutive bits starting at idx are all cleared. */
-bool sparsebit_is_clear_num(struct sparsebit *s,
+bool sparsebit_is_clear_num(const struct sparsebit *s,
 			    sparsebit_idx_t idx, sparsebit_num_t num)
 {
 	sparsebit_idx_t next_set;
@@ -1042,13 +1042,13 @@ bool sparsebit_is_clear_num(struct sparsebit *s,
 * value.  Use sparsebit_any_set(), instead of sparsebit_num_set() > 0,
 * to determine if the sparsebit array has any bits set.
 */
-sparsebit_num_t sparsebit_num_set(struct sparsebit *s)
+sparsebit_num_t sparsebit_num_set(const struct sparsebit *s)
 {
 	return s->num_set;
 }
 
 /* Returns whether any bit is set in the sparsebit array. */
-bool sparsebit_any_set(struct sparsebit *s)
+bool sparsebit_any_set(const struct sparsebit *s)
 {
 	/*
 	 * Nodes only describe set bits.  If any nodes then there
@@ -1071,20 +1071,20 @@ bool sparsebit_any_set(struct sparsebit *s)
 }
 
 /* Returns whether all the bits in the sparsebit array are cleared. */
-bool sparsebit_all_clear(struct sparsebit *s)
+bool sparsebit_all_clear(const struct sparsebit *s)
 {
 	return !sparsebit_any_set(s);
 }
 
 /* Returns whether all the bits in the sparsebit array are set. */
-bool sparsebit_any_clear(struct sparsebit *s)
+bool sparsebit_any_clear(const struct sparsebit *s)
 {
 	return !sparsebit_all_set(s);
 }
 
 /* Returns the index of the first set bit.  Abort if no bits are set. */
-sparsebit_idx_t sparsebit_first_set(struct sparsebit *s)
+sparsebit_idx_t sparsebit_first_set(const struct sparsebit *s)
 {
 	struct node *nodep;
@@ -1098,7 +1098,7 @@ sparsebit_idx_t sparsebit_first_set(struct sparsebit *s)
 /* Returns the index of the first cleared bit.  Abort if
 * no bits are cleared.
 */
-sparsebit_idx_t sparsebit_first_clear(struct sparsebit *s)
+sparsebit_idx_t sparsebit_first_clear(const struct sparsebit *s)
 {
 	struct node *nodep1, *nodep2;
@@ -1152,7 +1152,7 @@ sparsebit_idx_t sparsebit_first_clear(struct sparsebit *s)
 /* Returns index of next bit set within s after the index given by prev.
 * Returns 0 if there are no bits after prev that are set.
 */
-sparsebit_idx_t sparsebit_next_set(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_set(const struct sparsebit *s,
 				   sparsebit_idx_t prev)
 {
 	sparsebit_idx_t lowest_possible = prev + 1;
@@ -1245,7 +1245,7 @@ sparsebit_idx_t sparsebit_next_set(struct sparsebit *s,
 /* Returns index of next bit cleared within s after the index given by prev.
 * Returns 0 if there are no bits after prev that are cleared.
 */
-sparsebit_idx_t sparsebit_next_clear(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_clear(const struct sparsebit *s,
 				     sparsebit_idx_t prev)
 {
 	sparsebit_idx_t lowest_possible = prev + 1;
@@ -1301,7 +1301,7 @@ sparsebit_idx_t sparsebit_next_clear(struct sparsebit *s,
 * and returns the index of the first sequence of num consecutively set
 * bits.  Returns a value of 0 of no such sequence exists.
 */
-sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_set_num(const struct sparsebit *s,
 				       sparsebit_idx_t start, sparsebit_num_t num)
 {
 	sparsebit_idx_t idx;
@@ -1336,7 +1336,7 @@ sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *s,
 * and returns the index of the first sequence of num consecutively cleared
 * bits.  Returns a value of 0 of no such sequence exists.
 */
-sparsebit_idx_t sparsebit_next_clear_num(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_clear_num(const struct sparsebit *s,
 					 sparsebit_idx_t start, sparsebit_num_t num)
 {
 	sparsebit_idx_t idx;
@@ -1584,7 +1584,7 @@ static size_t display_range(FILE *stream, sparsebit_idx_t low,
 * contiguous bits.  This is done because '-' is used to specify command-line
 * options, and sometimes ranges are specified as command-line arguments.
 */
-void sparsebit_dump(FILE *stream, struct sparsebit *s,
+void sparsebit_dump(FILE *stream, const struct sparsebit *s,
 		    unsigned int indent)
 {
 	size_t current_line_len = 0;
@@ -1682,7 +1682,7 @@ void sparsebit_dump(FILE *stream, struct sparsebit *s,
 * s.  On error, diagnostic information is printed to stderr and
 * abort is called.
 */
-void sparsebit_validate_internal(struct sparsebit *s)
+void sparsebit_validate_internal(const struct sparsebit *s)
 {
 	bool error_detected = false;
 	struct node *nodep, *prev = NULL;
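For illustration, a minimal sketch (assumed test-side code, not part of the patch) of the read-only usage these const qualifiers enable -- walking a bitmap through a const pointer without casts:

	#include <stdio.h>
	#include "sparsebit.h"

	/* Print every set bit using only the now-const accessors. */
	static void print_set_bits(const struct sparsebit *sbit)
	{
		sparsebit_idx_t idx;

		if (!sparsebit_any_set(sbit))
			return;

		idx = sparsebit_first_set(sbit);
		do {
			printf("bit 0x%lx is set\n", (unsigned long)idx);
			/* sparsebit_next_set() returns 0 when no higher bit is set. */
			idx = sparsebit_next_set(sbit, idx);
		} while (idx);
	}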
From patchwork Tue Jan 10 17:50:52 2023
Subject: [PATCH V6 2/7] KVM: selftests: add hooks for managing protected guest memory
From: Peter Gonda
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, Vishal Annapurve, Ackerley Tng, Andrew Jones, Michael Roth
Date: Tue, 10 Jan 2023 09:50:52 -0800
Message-Id: <20230110175057.715453-3-pgonda@google.com>
In-Reply-To: <20230110175057.715453-1-pgonda@google.com>

Add kvm_vm.protected metadata. A protected VM's memory, and potentially its register and other state, may not be accessible to KVM. This, combined with a new protected_phy_pages bitmap, allows the selftests to check whether a given page is accessible.

Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vishal Annapurve
Cc: Ackerley Tng
Cc: Andrew Jones
Originally-by: Michael Roth
Signed-off-by: Peter Gonda
---
 .../selftests/kvm/include/kvm_util_base.h  | 14 ++++++++++++--
 tools/testing/selftests/kvm/lib/kvm_util.c | 16 +++++++++++++---
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index fbc2a79369b8..015b59a0b80e 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -45,6 +45,7 @@ typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
 struct userspace_mem_region {
 	struct kvm_userspace_memory_region region;
 	struct sparsebit *unused_phy_pages;
+	struct sparsebit *protected_phy_pages;
 	int fd;
 	off_t offset;
 	enum vm_mem_backing_src_type backing_src_type;
@@ -111,6 +112,9 @@ struct kvm_vm {
 	vm_vaddr_t handlers;
 	uint32_t dirty_ring_size;
 
+	/* VM protection enabled: SEV, etc. */
+	bool protected;
+
 	/* Cache of information for binary stats interface */
 	int stats_fd;
 	struct kvm_stats_header stats_header;
@@ -679,10 +683,16 @@ const char *exit_reason_str(unsigned int exit_reason);
 vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 			     uint32_t memslot);
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
-			      vm_paddr_t paddr_min, uint32_t memslot);
+vm_paddr_t _vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+			       vm_paddr_t paddr_min, uint32_t memslot, bool protected);
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
 
+static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+					    vm_paddr_t paddr_min, uint32_t memslot)
+{
+	return _vm_phy_pages_alloc(vm, num, paddr_min, memslot, vm->protected);
+}
+
 /*
 * ____vm_create() does KVM_CREATE_VM and little else.  __vm_create() also
 * loads the test binary into guest memory and creates an IRQ chip (x86 only).
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 56d5ea949cbb..63913b219b42 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -663,6 +663,7 @@ static void __vm_mem_region_delete(struct kvm_vm *vm,
 		vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region->region);
 
 	sparsebit_free(&region->unused_phy_pages);
+	sparsebit_free(&region->protected_phy_pages);
 	ret = munmap(region->mmap_start, region->mmap_size);
 	TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));
 	if (region->fd >= 0) {
@@ -1010,6 +1011,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	region->backing_src_type = src_type;
 	region->unused_phy_pages = sparsebit_alloc();
+	region->protected_phy_pages = sparsebit_alloc();
 	sparsebit_set_num(region->unused_phy_pages,
 		guest_paddr >> vm->page_shift, npages);
 	region->region.slot = slot;
@@ -1799,6 +1801,10 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 			region->host_mem);
 		fprintf(stream, "%*sunused_phy_pages: ", indent + 2, "");
 		sparsebit_dump(stream, region->unused_phy_pages, 0);
+		if (vm->protected) {
+			fprintf(stream, "%*sprotected_phy_pages: ", indent + 2, "");
+			sparsebit_dump(stream, region->protected_phy_pages, 0);
+		}
 	}
 	fprintf(stream, "%*sMapped Virtual Pages:\n", indent, "");
 	sparsebit_dump(stream, vm->vpages_mapped, indent + 2);
@@ -1895,8 +1901,9 @@ const char *exit_reason_str(unsigned int exit_reason)
 * and their base address is returned.  A TEST_ASSERT failure occurs if
 * not enough pages are available at or above paddr_min.
 */
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
-			      vm_paddr_t paddr_min, uint32_t memslot)
+vm_paddr_t _vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+			       vm_paddr_t paddr_min, uint32_t memslot,
+			       bool protected)
 {
 	struct userspace_mem_region *region;
 	sparsebit_idx_t pg, base;
@@ -1929,8 +1936,11 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 		abort();
 	}
 
-	for (pg = base; pg < base + num; ++pg)
+	for (pg = base; pg < base + num; ++pg) {
 		sparsebit_clear(region->unused_phy_pages, pg);
+		if (protected)
+			sparsebit_set(region->protected_phy_pages, pg);
+	}
 
 	return base * vm->page_size;
 }
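As a sketch of what this bookkeeping provides (assumed test-side usage, not part of the patch; the query helper for the bitmap arrives in the next patch):

	/*
	 * Allocating through the inline wrapper inherits vm->protected, so
	 * for a protected VM the page is recorded in protected_phy_pages.
	 */
	vm_paddr_t priv_gpa = vm_phy_pages_alloc(vm, 1,
						 KVM_UTIL_MIN_PFN * vm->page_size, 0);

	/* Passing protected=false requests a page that stays unmarked (shared). */
	vm_paddr_t shared_gpa = _vm_phy_pages_alloc(vm, 1,
						    KVM_UTIL_MIN_PFN * vm->page_size,
						    0, false);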
From patchwork Tue Jan 10 17:50:53 2023
Subject: [PATCH V6 3/7] KVM: selftests: handle protected bits in page tables
From: Peter Gonda
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, Vishal Annapurve, Ackerley Tng, Andrew Jones, Michael Roth
Date: Tue, 10 Jan 2023 09:50:53 -0800
Message-Id: <20230110175057.715453-4-pgonda@google.com>
In-Reply-To: <20230110175057.715453-1-pgonda@google.com>

SEV guests rely on an encryption bit which resides within the range that current code treats as address bits. Guest code will expect these bits to be set appropriately in its page tables, whereas the rest of the kvm_util functions will generally expect these bits to not be present. Introduce pte_me_mask and struct kvm_vm_arch to allow for arch-specific address tagging. Currently this just adds x86 c_bit and s_bit support for SEV and TDX.
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vishal Annapurve
Cc: Ackerley Tng
Cc: Andrew Jones
Originally-by: Michael Roth
Signed-off-by: Peter Gonda
---
 tools/arch/arm64/include/asm/kvm_host.h    |  7 +++++++
 tools/arch/riscv/include/asm/kvm_host.h    |  7 +++++++
 tools/arch/s390/include/asm/kvm_host.h     |  7 +++++++
 tools/arch/x86/include/asm/kvm_host.h      | 13 ++++++++++++
 .../selftests/kvm/include/kvm_util_base.h  | 19 +++++++++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c | 21 ++++++++++++++++++-
 .../selftests/kvm/lib/x86_64/processor.c   | 17 ++++++++++++---
 7 files changed, 87 insertions(+), 4 deletions(-)
 create mode 100644 tools/arch/arm64/include/asm/kvm_host.h
 create mode 100644 tools/arch/riscv/include/asm/kvm_host.h
 create mode 100644 tools/arch/s390/include/asm/kvm_host.h
 create mode 100644 tools/arch/x86/include/asm/kvm_host.h

diff --git a/tools/arch/arm64/include/asm/kvm_host.h b/tools/arch/arm64/include/asm/kvm_host.h
new file mode 100644
index 000000000000..218f5cdf0d86
--- /dev/null
+++ b/tools/arch/arm64/include/asm/kvm_host.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _TOOLS_LINUX_ASM_ARM64_KVM_HOST_H
+#define _TOOLS_LINUX_ASM_ARM64_KVM_HOST_H
+
+struct kvm_vm_arch {};
+
+#endif // _TOOLS_LINUX_ASM_ARM64_KVM_HOST_H
diff --git a/tools/arch/riscv/include/asm/kvm_host.h b/tools/arch/riscv/include/asm/kvm_host.h
new file mode 100644
index 000000000000..c8280d5659ce
--- /dev/null
+++ b/tools/arch/riscv/include/asm/kvm_host.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _TOOLS_LINUX_ASM_RISCV_KVM_HOST_H
+#define _TOOLS_LINUX_ASM_RISCV_KVM_HOST_H
+
+struct kvm_vm_arch {};
+
+#endif // _TOOLS_LINUX_ASM_RISCV_KVM_HOST_H
diff --git a/tools/arch/s390/include/asm/kvm_host.h b/tools/arch/s390/include/asm/kvm_host.h
new file mode 100644
index 000000000000..4c4c1c1e4bf8
--- /dev/null
+++ b/tools/arch/s390/include/asm/kvm_host.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _TOOLS_LINUX_ASM_S390_KVM_HOST_H
+#define _TOOLS_LINUX_ASM_S390_KVM_HOST_H
+
+struct kvm_vm_arch {};
+
+#endif // _TOOLS_LINUX_ASM_S390_KVM_HOST_H
diff --git a/tools/arch/x86/include/asm/kvm_host.h b/tools/arch/x86/include/asm/kvm_host.h
new file mode 100644
index 000000000000..d8f48fe835fb
--- /dev/null
+++ b/tools/arch/x86/include/asm/kvm_host.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _TOOLS_LINUX_ASM_X86_KVM_HOST_H
+#define _TOOLS_LINUX_ASM_X86_KVM_HOST_H
+
+#include
+#include
+
+struct kvm_vm_arch {
+	uint64_t c_bit;
+	uint64_t s_bit;
+};
+
+#endif // _TOOLS_LINUX_ASM_X86_KVM_HOST_H
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 015b59a0b80e..f84d7777d5ca 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -17,6 +17,8 @@
 #include "linux/rbtree.h"
 #include
+#include
+#include
 #include
@@ -111,6 +113,9 @@ struct kvm_vm {
 	vm_vaddr_t idt;
 	vm_vaddr_t handlers;
 	uint32_t dirty_ring_size;
+	uint64_t gpa_protected_mask;
+
+	struct kvm_vm_arch arch;
 
 	/* VM protection enabled: SEV, etc. */
 	bool protected;
@@ -162,6 +167,7 @@ enum vm_guest_mode {
 	VM_MODE_P40V48_16K,
 	VM_MODE_P40V48_64K,
 	VM_MODE_PXXV48_4K,	/* For 48bits VA but ANY bits PA */
+	VM_MODE_PXXV48_4K_SEV,	/* For 48bits VA but ANY bits PA */
 	VM_MODE_P47V64_4K,
 	VM_MODE_P44V64_4K,
 	VM_MODE_P36V48_4K,
@@ -441,6 +447,17 @@ void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
 vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
 void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
+
+static inline vm_paddr_t vm_untag_gpa(struct kvm_vm *vm, vm_paddr_t gpa)
+{
+	return gpa & ~vm->gpa_protected_mask;
+}
+
+static inline vm_paddr_t vm_tag_gpa(struct kvm_vm *vm, vm_paddr_t gpa)
+{
+	return gpa | vm->gpa_protected_mask;
+}
+
 void vcpu_run(struct kvm_vcpu *vcpu);
 int _vcpu_run(struct kvm_vcpu *vcpu);
@@ -917,4 +934,6 @@ void kvm_selftest_arch_init(void);
 
 void kvm_arch_vm_post_create(struct kvm_vm *vm);
 
+bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr);
+
 #endif /* SELFTEST_KVM_UTIL_BASE_H */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 63913b219b42..ba771c2d949d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1451,9 +1451,10 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 * address providing the memory to the vm physical address is returned.
 * A TEST_ASSERT failure occurs if no region containing gpa exists.
 */
-void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
+void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa_tagged)
 {
 	struct userspace_mem_region *region;
+	vm_paddr_t gpa = vm_untag_gpa(vm, gpa_tagged);
 
 	region = userspace_mem_region_find(vm, gpa, gpa);
 	if (!region) {
@@ -2147,3 +2148,21 @@ void __attribute((constructor)) kvm_selftest_init(void)
 
 	kvm_selftest_arch_init();
 }
+
+bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr)
+{
+	sparsebit_idx_t pg = 0;
+	struct userspace_mem_region *region;
+
+	if (!vm->protected)
+		return false;
+
+	region = userspace_mem_region_find(vm, paddr, paddr);
+	if (!region) {
+		TEST_FAIL("No vm physical memory at 0x%lx", paddr);
+		return false;
+	}
+
+	pg = paddr >> vm->page_shift;
+	return sparsebit_is_set(region->protected_phy_pages, pg);
+}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index acfa1d01e7df..d03cefd9f6cd 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -127,6 +127,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 	/* If needed, create page map l4 table. */
 	if (!vm->pgd_created) {
 		vm->pgd = vm_alloc_page_table(vm);
+		vm->pgd_created = true;
 	}
 }
@@ -153,13 +154,16 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
 				       int target_level)
 {
 	uint64_t *pte = virt_get_pte(vm, parent_pte, vaddr, current_level);
+	uint64_t paddr_raw = vm_untag_gpa(vm, paddr);
 
 	if (!(*pte & PTE_PRESENT_MASK)) {
 		*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK;
 		if (current_level == target_level)
-			*pte |= PTE_LARGE_MASK | (paddr & PHYSICAL_PAGE_MASK);
-		else
+			*pte |= PTE_LARGE_MASK | (paddr_raw & PHYSICAL_PAGE_MASK);
+		else {
 			*pte |= vm_alloc_page_table(vm) & PHYSICAL_PAGE_MASK;
+		}
+
 	} else {
 		/*
 		 * Entry already present.  Assert that the caller doesn't want
@@ -197,6 +201,8 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 		    "Physical address beyond maximum supported,\n"
 		    "  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
 		    paddr, vm->max_gfn, vm->page_size);
+	TEST_ASSERT(vm_untag_gpa(vm, paddr) == paddr,
+		    "Unexpected bits in paddr: %lx", paddr);
 
 	/*
	 * Allocate upper level page tables, if not already present.  Return
@@ -219,6 +225,11 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 	TEST_ASSERT(!(*pte & PTE_PRESENT_MASK),
 		"PTE already present for 4k page at vaddr: 0x%lx\n", vaddr);
 	*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);
+
+	if (vm_is_gpa_protected(vm, paddr))
+		*pte |= vm->arch.c_bit;
+	else
+		*pte |= vm->arch.s_bit;
 }
 
 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
@@ -493,7 +504,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 	 * No need for a hugepage mask on the PTE, x86-64 requires the "unused"
 	 * address bits to be zero.
 	 */
-	return PTE_GET_PA(*pte) | (gva & ~HUGEPAGE_MASK(level));
+	return vm_untag_gpa(vm, PTE_GET_PA(*pte)) | (gva & ~HUGEPAGE_MASK(level));
 }
 
 static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt)
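A quick worked example of the new tagging helpers (the bit position is assumed for illustration; on real hardware it comes from CPUID leaf 0x8000001f, as the SEV library patch later in this series does):

	/* Assume the AMD C-bit is physical address bit 47. */
	vm->arch.c_bit = 1ULL << 47;			/* 0x800000000000 */
	vm->gpa_protected_mask = vm->arch.c_bit;

	vm_paddr_t gpa    = 0x100000;
	vm_paddr_t tagged = vm_tag_gpa(vm, gpa);	/* 0x800000100000 */
	vm_paddr_t raw    = vm_untag_gpa(vm, tagged);	/* 0x100000 again */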
From patchwork Tue Jan 10 17:50:54 2023
Subject: [PATCH V6 4/7] KVM: selftests: add support for protected vm_vaddr_* allocations
From: Peter Gonda
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Michael Roth, Paolo Bonzini, Sean Christopherson, Vishal Annapurve, Ackerley Tng, Andrew Jones, Peter Gonda
Date: Tue, 10 Jan 2023 09:50:54 -0800
Message-Id: <20230110175057.715453-5-pgonda@google.com>
In-Reply-To: <20230110175057.715453-1-pgonda@google.com>

From: Michael Roth

Test programs may wish to allocate shared vaddrs for things like sharing memory with the guest. Since protected VMs will have their memory encrypted by default, an interface is needed to explicitly request shared pages. Implement this by splitting the common code out from vm_vaddr_alloc() and introducing a new vm_vaddr_alloc_shared().

Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vishal Annapurve
Cc: Ackerley Tng
Cc: Andrew Jones
Signed-off-by: Michael Roth
Signed-off-by: Peter Gonda
---
 .../selftests/kvm/include/kvm_util_base.h  |  1 +
 tools/testing/selftests/kvm/lib/kvm_util.c | 21 +++++++++++++++----
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index f84d7777d5ca..5f3150ecfbbf 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -435,6 +435,7 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_mi
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
 			    enum kvm_mem_region_type type);
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm,
 				 enum kvm_mem_region_type type);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index ba771c2d949d..0d0a7ad7632d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1305,15 +1305,17 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 	return pgidx_start * vm->page_size;
 }
 
-vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
-			    enum kvm_mem_region_type type)
+static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
+				     vm_vaddr_t vaddr_min,
+				     enum kvm_mem_region_type type,
+				     bool encrypt)
 {
 	uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
 
 	virt_pgd_alloc(vm);
-	vm_paddr_t paddr = vm_phy_pages_alloc(vm, pages,
+	vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages,
 					      KVM_UTIL_MIN_PFN * vm->page_size,
-					      vm->memslots[type]);
+					      vm->memslots[type], encrypt);
 
 	/*
 	 * Find an unused range of virtual page addresses of at least
@@ -1333,6 +1335,17 @@ vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
 	return vaddr_start;
 }
+vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
+			    enum kvm_mem_region_type type)
+{
+	return ____vm_vaddr_alloc(vm, sz, vaddr_min, type, vm->protected);
+}
+
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+{
+	return ____vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA, false);
+}
+
 /*
 * VM Virtual Address Allocate
 *
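Assumed usage, for illustration only: a test that needs a host-visible buffer asks for it explicitly, while default allocations stay encrypted on a protected VM:

	/* Default allocation: encrypted when vm->protected is set. */
	vm_vaddr_t priv_gva = vm_vaddr_alloc(vm, vm->page_size, KVM_UTIL_MIN_VADDR);

	/* Explicitly shared allocation: the host can safely touch it. */
	vm_vaddr_t shared_gva = vm_vaddr_alloc_shared(vm, vm->page_size,
						      KVM_UTIL_MIN_VADDR);
	memset(addr_gva2hva(vm, shared_gva), 0xaa, vm->page_size);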
From patchwork Tue Jan 10 17:50:55 2023
Subject: [PATCH V6 5/7] KVM: selftests: add library for creating/interacting with SEV guests
From: Peter Gonda
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, Vishal Annapurve, Ackerley Tng, Andrew Jones, Michael Roth
Date: Tue, 10 Jan 2023 09:50:55 -0800
Message-Id: <20230110175057.715453-6-pgonda@google.com>
In-Reply-To: <20230110175057.715453-1-pgonda@google.com>

Add interfaces to allow tests to create SEV guests. The additional requirements for SEV guests' page tables and other state are encapsulated by the new vm_sev_create_with_one_vcpu() function. This can be generalized to more vCPUs in the future, but the first set of SEV selftests in this series only uses a single vCPU.

Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vishal Annapurve
Cc: Ackerley Tng
Cc: Andrew Jones
Originally-by: Michael Roth
Co-developed-by: Ackerley Tng
Signed-off-by: Peter Gonda
---
 tools/arch/x86/include/asm/kvm_host.h         |   1 +
 tools/testing/selftests/kvm/Makefile          |   3 +-
 .../selftests/kvm/include/kvm_util_base.h     |  15 +-
 .../selftests/kvm/include/x86_64/processor.h  |   1 +
 .../selftests/kvm/include/x86_64/sev.h        |  27 ++
 tools/testing/selftests/kvm/lib/kvm_util.c    |   8 +-
 .../selftests/kvm/lib/x86_64/processor.c      |  45 +++-
 tools/testing/selftests/kvm/lib/x86_64/sev.c  | 254 ++++++++++++++++++
 8 files changed, 343 insertions(+), 11 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/sev.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/sev.c

diff --git a/tools/arch/x86/include/asm/kvm_host.h b/tools/arch/x86/include/asm/kvm_host.h
index d8f48fe835fb..c95041e92fb5 100644
--- a/tools/arch/x86/include/asm/kvm_host.h
+++ b/tools/arch/x86/include/asm/kvm_host.h
@@ -8,6 +8,7 @@
 struct kvm_vm_arch {
 	uint64_t c_bit;
 	uint64_t s_bit;
+	bool is_pt_protected;
 };
 
 #endif // _TOOLS_LINUX_ASM_X86_KVM_HOST_H
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 1750f91dd936..b7cfb15712d1 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -39,6 +39,7 @@ LIBKVM_x86_64 += lib/x86_64/processor.c
 LIBKVM_x86_64 += lib/x86_64/svm.c
 LIBKVM_x86_64 += lib/x86_64/ucall.c
 LIBKVM_x86_64 += lib/x86_64/vmx.c
+LIBKVM_x86_64 += lib/x86_64/sev.c
 
 LIBKVM_aarch64 += lib/aarch64/gic.c
 LIBKVM_aarch64 += lib/aarch64/gic_v3.c
@@ -199,7 +200,7 @@ CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
 	-fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \
 	-I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \
 	-I$(protected); }
+uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
+			      uint32_t nr_runnable_vcpus,
+			      uint64_t extra_mem_pages);
+
 /*
 * ____vm_create() does KVM_CREATE_VM and little else.  __vm_create() also
 * loads the test binary into guest memory and creates an IRQ chip (x86 only).
@@ -767,8 +778,8 @@ unsigned long vm_compute_max_gfn(struct kvm_vm *vm);
 unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size);
 unsigned int vm_num_host_pages(enum vm_guest_mode mode, unsigned int num_guest_pages);
 unsigned int vm_num_guest_pages(enum vm_guest_mode mode, unsigned int num_host_pages);
-static inline unsigned int
-vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
+static inline unsigned int vm_adjust_num_guest_pages(enum vm_guest_mode mode,
+						     unsigned int num_guest_pages)
 {
 	unsigned int n;
 	n = vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages));
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 2a5f47d51388..1c72fb5672a9 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -916,6 +916,7 @@ static inline void vcpu_set_msr(struct kvm_vcpu *vcpu, uint64_t msr_index,
 
 void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits);
+void kvm_init_vm_address_properties(struct kvm_vm *vm);
 bool vm_is_unrestricted_guest(struct kvm_vm *vm);
 
 struct ex_regs {
diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h
new file mode 100644
index 000000000000..e212b032cd77
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/sev.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Helpers used for SEV guests
+ *
+ */
+#ifndef SELFTEST_KVM_SEV_H
+#define SELFTEST_KVM_SEV_H
+
+#include
+#include
+
+#include "kvm_util.h"
+
+#define CPUID_MEM_ENC_LEAF 0x8000001f
+#define CPUID_EBX_CBIT_MASK 0x3f
+
+#define SEV_POLICY_NO_DBG (1UL << 0)
+#define SEV_POLICY_ES (1UL << 2)
+
+bool is_kvm_sev_supported(void);
+
+void sev_vm_init(struct kvm_vm *vm);
+
+struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
+					   struct kvm_vcpu **cpu);
+
+#endif /* SELFTEST_KVM_SEV_H */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 0d0a7ad7632d..99983a5c5558 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -151,6 +151,7 @@ const char *vm_guest_mode_string(uint32_t i)
 	[VM_MODE_P40V48_16K]	= "PA-bits:40,  VA-bits:48,  16K pages",
 	[VM_MODE_P40V48_64K]	= "PA-bits:40,  VA-bits:48,  64K pages",
 	[VM_MODE_PXXV48_4K]	= "PA-bits:ANY, VA-bits:48,  4K pages",
+	[VM_MODE_PXXV48_4K_SEV]	= "PA-bits:ANY, VA-bits:48,  4K pages",
 	[VM_MODE_P47V64_4K]	= "PA-bits:47,  VA-bits:64,  4K pages",
 	[VM_MODE_P44V64_4K]	= "PA-bits:44,  VA-bits:64,  4K pages",
 	[VM_MODE_P36V48_4K]	= "PA-bits:36,  VA-bits:48,  4K pages",
@@ -176,6 +177,7 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 	[VM_MODE_P40V48_16K]	= { 40, 48,  0x4000, 14 },
 	[VM_MODE_P40V48_64K]	= { 40, 48, 0x10000, 16 },
 	[VM_MODE_PXXV48_4K]	= {  0,  0,  0x1000, 12 },
+	[VM_MODE_PXXV48_4K_SEV]	= {  0,  0,  0x1000, 12 },
 	[VM_MODE_P47V64_4K]	= { 47, 64,  0x1000, 12 },
 	[VM_MODE_P44V64_4K]	= { 44, 64,  0x1000, 12 },
 	[VM_MODE_P36V48_4K]	= { 36, 48,  0x1000, 12 },
@@ -254,9 +256,11 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 	case VM_MODE_P36V47_16K:
 		vm->pgtable_levels = 3;
 		break;
+	case VM_MODE_PXXV48_4K_SEV:
 	case VM_MODE_PXXV48_4K:
 #ifdef __x86_64__
 		kvm_get_cpu_address_width(&vm->pa_bits, &vm->va_bits);
+		kvm_init_vm_address_properties(vm);
 		/*
		 * Ignore KVM support for 5-level paging (vm->va_bits == 57),
		 * it doesn't take effect unless a CR4.LA57 is set, which it
@@ -270,7 +274,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 		vm->pgtable_levels = 4;
 		vm->va_bits = 48;
 #else
-		TEST_FAIL("VM_MODE_PXXV48_4K not supported on non-x86 platforms");
+		TEST_FAIL("VM_MODE_PXXV48_4K* not supported on non-x86 platforms");
 #endif
 		break;
 	case VM_MODE_P47V64_4K:
@@ -303,7 +307,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 	return vm;
 }
 
-static uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
+uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
 			      uint32_t nr_runnable_vcpus,
 			      uint64_t extra_mem_pages)
 {
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index d03cefd9f6cd..557146ba85a8 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -8,6 +8,7 @@
 #include "test_util.h"
 #include "kvm_util.h"
 #include "processor.h"
+#include "sev.h"
 
 #ifndef NUM_INTERRUPTS
 #define NUM_INTERRUPTS 256
@@ -119,10 +120,16 @@ bool kvm_is_tdp_enabled(void)
 	return get_kvm_amd_param_bool("npt");
 }
 
+static void assert_supported_guest_mode(struct kvm_vm *vm)
+{
+	TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K || vm->mode == VM_MODE_PXXV48_4K_SEV,
+		    "Attempt to use unknown or unsupported guest mode, mode: 0x%x",
+		    vm->mode);
+}
+
 void virt_arch_pgd_alloc(struct kvm_vm *vm)
 {
-	TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use "
-		"unknown or unsupported guest mode, mode: 0x%x", vm->mode);
+	assert_supported_guest_mode(vm);
 
 	/* If needed, create page map l4 table. */
 	if (!vm->pgd_created) {
@@ -186,8 +193,7 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 	uint64_t *pml4e, *pdpe, *pde;
 	uint64_t *pte;
 
-	TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K,
-		    "Unknown or unsupported guest mode, mode: 0x%x", vm->mode);
+	assert_supported_guest_mode(vm);
 
 	TEST_ASSERT((vaddr % pg_size) == 0,
 		"Virtual address not aligned,\n"
@@ -273,11 +279,14 @@ uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
 {
 	uint64_t *pml4e, *pdpe, *pde;
 
+	TEST_ASSERT(
+		!vm->arch.is_pt_protected,
+		"Protected guests have their page tables protected so gva2gpa conversions are not possible.");
+
 	TEST_ASSERT(*level >= PG_LEVEL_NONE && *level < PG_LEVEL_NUM,
 		    "Invalid PG_LEVEL_* '%d'", *level);
 
-	TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use "
-		"unknown or unsupported guest mode, mode: 0x%x", vm->mode);
+	assert_supported_guest_mode(vm);
 
 	TEST_ASSERT(sparsebit_is_set(vm->vpages_valid,
 		(vaddr >> vm->page_shift)),
 		"Invalid virtual address, vaddr: 0x%lx",
@@ -543,6 +552,7 @@ static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
 	kvm_setup_gdt(vm, &sregs.gdt);
 
 	switch (vm->mode) {
+	case VM_MODE_PXXV48_4K_SEV:
 	case VM_MODE_PXXV48_4K:
 		sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
 		sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
@@ -566,6 +576,10 @@ static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
 void kvm_arch_vm_post_create(struct kvm_vm *vm)
 {
 	vm_create_irqchip(vm);
+
+	if (vm->mode == VM_MODE_PXXV48_4K_SEV) {
+		sev_vm_init(vm);
+	}
 }
 
 struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
@@ -1050,6 +1064,25 @@ void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits)
 	}
 }
 
+static void configure_sev_pte_masks(struct kvm_vm *vm)
+{
+	uint32_t eax, ebx, ecx, edx, enc_bit;
+
+	cpuid(CPUID_MEM_ENC_LEAF, &eax, &ebx, &ecx, &edx);
+	enc_bit = ebx & CPUID_EBX_CBIT_MASK;
+
+	vm->arch.c_bit = 1ULL << enc_bit;
+	vm->protected = true;
+	vm->gpa_protected_mask = vm->arch.c_bit;
+}
+
+void kvm_init_vm_address_properties(struct kvm_vm *vm)
+{
+	if (vm->mode == VM_MODE_PXXV48_4K_SEV) {
+		configure_sev_pte_masks(vm);
+	}
+}
+
 static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
 			  int dpl, unsigned short selector)
 {
diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
new file mode 100644
index 000000000000..3e20f15dd098
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -0,0 +1,254 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Helpers used for SEV guests
+ *
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+#include
+
+#include "kvm_util.h"
+#include "svm_util.h"
+#include "linux/psp-sev.h"
+#include "processor.h"
+#include "sev.h"
+
+#define SEV_FW_REQ_VER_MAJOR 0
+#define SEV_FW_REQ_VER_MINOR 17
+
+enum sev_guest_state {
+	SEV_GSTATE_UNINIT = 0,
+	SEV_GSTATE_LUPDATE,
+	SEV_GSTATE_LSECRET,
+	SEV_GSTATE_RUNNING,
+};
+
+static void sev_ioctl(int cmd, void *data)
+{
+	int ret;
+	struct sev_issue_cmd arg;
+
+	arg.cmd = cmd;
+	arg.data = (unsigned long)data;
+	ret = ioctl(open_sev_dev_path_or_exit(), SEV_ISSUE_CMD, &arg);
+	TEST_ASSERT(ret == 0, "SEV ioctl %d failed, error: %d, fw_error: %d",
+		    cmd, ret, arg.error);
+}
+
+static void kvm_sev_ioctl(struct kvm_vm *vm, int cmd, void *data)
+{
+	struct kvm_sev_cmd arg = {0};
+	int ret;
+
+	arg.id = cmd;
+	arg.sev_fd = open_sev_dev_path_or_exit();
+	arg.data = (__u64)data;
+
+	ret = ioctl(vm->fd, KVM_MEMORY_ENCRYPT_OP, &arg);
+	TEST_ASSERT(
+		ret == 0,
+		"SEV KVM ioctl %d failed, rc: %i errno: %i (%s), fw_error: %d",
+		cmd, ret, errno, strerror(errno), arg.error);
+}
+
+static void sev_register_user_region(struct kvm_vm *vm, struct userspace_mem_region *region)
+{
+	struct kvm_enc_region range = {0};
+	int ret;
+
+	range.addr = (__u64)region->region.userspace_addr;
+	range.size = region->region.memory_size;
+
+	ret = ioctl(vm->fd, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
+	TEST_ASSERT(ret == 0, "failed to register user range, errno: %i\n",
+		    errno);
+}
+
+static void sev_launch_update_data(struct kvm_vm *vm, vm_paddr_t gpa, uint64_t size)
+{
+	struct kvm_sev_launch_update_data ksev_update_data = {0};
+
+	pr_debug("%s: addr: 0x%lx, size: %lu\n", __func__, gpa, size);
+
+	ksev_update_data.uaddr = (__u64)addr_gpa2hva(vm, gpa);
+	ksev_update_data.len = size;
+
+	kvm_sev_ioctl(vm, KVM_SEV_LAUNCH_UPDATE_DATA, &ksev_update_data);
+}
+
+/*
+ * Iterate over set ranges within sparsebit @s. In each iteration,
+ * @range_begin and @range_end will take the beginning and end of the set
+ * range, which are of type sparsebit_idx_t.
+ *
+ * For example, if the range [3, 7] (inclusive) is set, within the
+ * iteration, @range_begin will take the value 3 and @range_end will take
+ * the value 7.
+ *
+ * Ensure that there is at least one bit set before using this macro with
+ * sparsebit_any_set(), because sparsebit_first_set() will abort if none
+ * are set.
+ */
+#define sparsebit_for_each_set_range(s, range_begin, range_end)	 \
+	for (range_begin = sparsebit_first_set(s),			 \
+	     range_end = sparsebit_next_clear(s, range_begin) - 1;	 \
+	     range_begin && range_end;					 \
+	     range_begin = sparsebit_next_set(s, range_end),		 \
+	     range_end = sparsebit_next_clear(s, range_begin) - 1)
+
+/*
+ * sparsebit_next_clear() can return 0 if [x, 2**64-1] are all set, and the
+ * -1 would then cause an underflow back to 2**64 - 1. This is expected and
+ * correct.
+/*
+ * Iterate over set ranges within sparsebit @s. In each iteration,
+ * @range_begin and @range_end will take the beginning and end of the set
+ * range, which are of type sparsebit_idx_t.
+ *
+ * For example, if the range [3, 7] (inclusive) is set, within the
+ * iteration, @range_begin will take the value 3 and @range_end will take
+ * the value 7.
+ *
+ * Ensure that there is at least one bit set before using this macro, e.g.
+ * by checking with sparsebit_any_set(), because sparsebit_first_set() will
+ * abort if none are set.
+ */
+#define sparsebit_for_each_set_range(s, range_begin, range_end)	\
+	for (range_begin = sparsebit_first_set(s),			\
+	     range_end =						\
+		     sparsebit_next_clear(s, range_begin) - 1;		\
+	     range_begin && range_end;					\
+	     range_begin = sparsebit_next_set(s, range_end),		\
+	     range_end =						\
+		     sparsebit_next_clear(s, range_begin) - 1)
+
+/*
+ * sparsebit_next_clear() can return 0 if [x, 2**64-1] are all set, and the
+ * -1 would then cause an underflow back to 2**64 - 1. This is expected and
+ * correct.
+ *
+ * If the last range in the sparsebit is [x, y] and we try to iterate,
+ * sparsebit_next_set() will return 0, and sparsebit_next_clear() will try
+ * to find the first range, but that's correct because the condition
+ * expression would cause us to quit the loop.
+ */
+static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)
+{
+	const struct sparsebit *protected_phy_pages = region->protected_phy_pages;
+	const vm_paddr_t gpa_base = region->region.guest_phys_addr;
+	const sparsebit_idx_t lowest_page_in_region = gpa_base >> vm->page_shift;
+	sparsebit_idx_t i;
+	sparsebit_idx_t j;
+
+	if (!sparsebit_any_set(protected_phy_pages))
+		return;
+
+	sev_register_user_region(vm, region);
+
+	sparsebit_for_each_set_range(protected_phy_pages, i, j) {
+		const uint64_t size_to_load = (j - i + 1) * vm->page_size;
+		const uint64_t offset = (i - lowest_page_in_region) * vm->page_size;
+		const uint64_t gpa = gpa_base + offset;
+
+		sev_launch_update_data(vm, gpa, size_to_load);
+	}
+}
+
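[To make the macro's contract concrete, here is a small hypothetical debug
helper, not part of the patch, that walks every set range the same way
encrypt_region() does. The sparsebit_any_set() guard is mandatory, as the
comment above explains.]

/* Hypothetical helper: print each set range of a sparsebit. */
static void dump_set_ranges(const struct sparsebit *s)
{
	sparsebit_idx_t begin, end;

	/* Required: sparsebit_first_set() aborts if no bit is set. */
	if (!sparsebit_any_set(s))
		return;

	sparsebit_for_each_set_range(s, begin, end)
		pr_debug("set range: [0x%lx, 0x%lx]\n",
			 (unsigned long)begin, (unsigned long)end);
}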
+static void sev_encrypt(struct kvm_vm *vm)
+{
+	int ctr;
+	struct userspace_mem_region *region;
+
+	hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) {
+		encrypt_region(vm, region);
+	}
+
+	vm->arch.is_pt_protected = true;
+}
+
+bool is_kvm_sev_supported(void)
+{
+	struct sev_user_data_status sev_status;
+
+	sev_ioctl(SEV_PLATFORM_STATUS, &sev_status);
+
+	if (!(sev_status.api_major > SEV_FW_REQ_VER_MAJOR ||
+	      (sev_status.api_major == SEV_FW_REQ_VER_MAJOR &&
+	       sev_status.api_minor >= SEV_FW_REQ_VER_MINOR))) {
+		pr_info("SEV FW version too old. Have API %d.%d (build: %d), need %d.%d, skipping test.\n",
+			sev_status.api_major, sev_status.api_minor,
+			sev_status.build, SEV_FW_REQ_VER_MAJOR,
+			SEV_FW_REQ_VER_MINOR);
+		return false;
+	}
+
+	return true;
+}
+
+static void sev_vm_launch(struct kvm_vm *vm, uint32_t policy)
+{
+	struct kvm_sev_launch_start ksev_launch_start = {0};
+	struct kvm_sev_guest_status ksev_status;
+
+	ksev_launch_start.policy = policy;
+	kvm_sev_ioctl(vm, KVM_SEV_LAUNCH_START, &ksev_launch_start);
+	kvm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &ksev_status);
+	TEST_ASSERT(ksev_status.policy == policy, "Incorrect guest policy.");
+	TEST_ASSERT(ksev_status.state == SEV_GSTATE_LUPDATE,
+		    "Unexpected guest state: %d", ksev_status.state);
+
+	ucall_init(vm, 0);
+
+	sev_encrypt(vm);
+}
+
+static void sev_vm_launch_measure(struct kvm_vm *vm, uint8_t *measurement)
+{
+	struct kvm_sev_launch_measure ksev_launch_measure = {0};
+	struct kvm_sev_guest_status ksev_guest_status = {0};
+
+	ksev_launch_measure.len = 256;
+	ksev_launch_measure.uaddr = (__u64)measurement;
+	kvm_sev_ioctl(vm, KVM_SEV_LAUNCH_MEASURE, &ksev_launch_measure);
+
+	kvm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &ksev_guest_status);
+	TEST_ASSERT(ksev_guest_status.state == SEV_GSTATE_LSECRET,
+		    "Unexpected guest state: %d", ksev_guest_status.state);
+}
+
+static void sev_vm_launch_finish(struct kvm_vm *vm)
+{
+	struct kvm_sev_guest_status ksev_status;
+
+	kvm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &ksev_status);
+	TEST_ASSERT(ksev_status.state == SEV_GSTATE_LUPDATE ||
+		    ksev_status.state == SEV_GSTATE_LSECRET,
+		    "Unexpected guest state: %d", ksev_status.state);
+
+	kvm_sev_ioctl(vm, KVM_SEV_LAUNCH_FINISH, NULL);
+
+	kvm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &ksev_status);
+	TEST_ASSERT(ksev_status.state == SEV_GSTATE_RUNNING,
+		    "Unexpected guest state: %d", ksev_status.state);
+}
+
+static void sev_vm_measure(struct kvm_vm *vm)
+{
+	uint8_t measurement[512];
+	int i;
+
+	sev_vm_launch_measure(vm, measurement);
+
+	/* TODO: Validate the measurement is as expected. */
+	pr_debug("guest measurement: ");
+	for (i = 0; i < 32; ++i)
+		pr_debug("%02x", measurement[i]);
+	pr_debug("\n");
+}
+
+void sev_vm_init(struct kvm_vm *vm)
+{
+	kvm_sev_ioctl(vm, KVM_SEV_INIT, NULL);
+}
+
+struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
+					   struct kvm_vcpu **cpu)
+{
+	enum vm_guest_mode mode = VM_MODE_PXXV48_4K_SEV;
+	struct kvm_vm *vm;
+	struct kvm_vcpu *cpus[1];
+
+	vm = __vm_create_with_vcpus(mode, 1, 0, guest_code, cpus);
+	*cpu = cpus[0];
+
+	sev_vm_launch(vm, policy);
+
+	sev_vm_measure(vm);
+
+	sev_vm_launch_finish(vm);
+
+	pr_info("SEV guest created, policy: 0x%x\n", policy);
+
+	return vm;
+}
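[Taken together, the helpers above reduce an SEV test to a few lines. A sketch
of the intended call pattern follows; it mirrors, in miniature, the test added
later in this series, and assumes the sev.h declarations from this patch.]

static void guest_code(void)
{
	GUEST_DONE();
}

int main(void)
{
	struct kvm_vcpu *vcpu;
	struct kvm_vm *vm;

	TEST_REQUIRE(is_kvm_sev_supported());

	/* Creates the VM, encrypts memory, measures, and finishes launch. */
	vm = vm_sev_create_with_one_vcpu(SEV_POLICY_NO_DBG, guest_code, &vcpu);
	vcpu_run(vcpu);

	kvm_vm_free(vm);
	return 0;
}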
From patchwork Tue Jan 10 17:50:56 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 13095425
Date: Tue, 10 Jan 2023 09:50:56 -0800
In-Reply-To: <20230110175057.715453-1-pgonda@google.com>
Message-Id: <20230110175057.715453-7-pgonda@google.com>
References: <20230110175057.715453-1-pgonda@google.com>
Subject: [PATCH V6 6/7] KVM: selftests: Update ucall pool to allocate from
 shared memory
From: Peter Gonda
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, Vishal Annapurve,
 Ackerley Tng, Andrew Jones
List-ID: kvm.vger.kernel.org

Update the per-VM ucall_header allocation from __vm_vaddr_alloc() to
vm_vaddr_alloc_shared(). This allows encrypted guests to use ucall pools
by placing their shared ucall structures in unencrypted (shared) memory.
No behavioral change for non-encrypted guests.

Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vishal Annapurve
Cc: Ackerley Tng
Cc: Andrew Jones
Signed-off-by: Peter Gonda
---
 tools/testing/selftests/kvm/lib/ucall_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index 2f0e2ea941cc..99ef4866a001 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -24,7 +24,7 @@ void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
 	vm_vaddr_t vaddr;
 	int i;
 
-	vaddr = __vm_vaddr_alloc(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR, MEM_REGION_DATA);
+	vaddr = vm_vaddr_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR);
 	hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
 	memset(hdr, 0, sizeof(*hdr));
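[The distinction matters because, for SEV guests, anything allocated through
the default path lands in encrypted memory the host cannot read. A sketch of
the two allocation flavors, using the signatures as they appear in this
series, with vm being an existing struct kvm_vm pointer:]

/* Private (encrypted for SEV guests): the host cannot read the contents. */
vm_vaddr_t priv = vm_vaddr_alloc(vm, vm->page_size, KVM_UTIL_MIN_VADDR);

/*
 * Shared (always unencrypted): host and guest both see plaintext, which
 * is exactly what the ucall header needs.
 */
vm_vaddr_t shared = vm_vaddr_alloc_shared(vm, vm->page_size, KVM_UTIL_MIN_VADDR);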
From patchwork Tue Jan 10 17:50:57 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Gonda
X-Patchwork-Id: 13095426
Date: Tue, 10 Jan 2023 09:50:57 -0800
In-Reply-To: <20230110175057.715453-1-pgonda@google.com>
Message-Id: <20230110175057.715453-8-pgonda@google.com>
References: <20230110175057.715453-1-pgonda@google.com>
Subject: [PATCH V6 7/7] KVM: selftests: Add simple sev vm testing
From: Peter Gonda
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Peter Gonda, Paolo Bonzini, Sean Christopherson, Vishal Annapurve,
 Ackerley Tng, Andrew Jones, Michael Roth
List-ID: kvm.vger.kernel.org

A very simple test of booting SEV guests that checks related CPUID and
MSR bits. This is a stripped-down version of "[PATCH v2 08/13] KVM:
selftests: add SEV boot tests" from Michael Roth.

Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vishal Annapurve
Cc: Ackerley Tng
Cc: Andrew Jones
Suggested-by: Michael Roth
Signed-off-by: Peter Gonda
---
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../selftests/kvm/x86_64/sev_all_boot_test.c  | 84 ++++++++++++++++++++
 2 files changed, 85 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index b7cfb15712d1..66d7ab3da990 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -111,6 +111,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_caps_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_all_boot_test
 TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
 TEST_GEN_PROGS_x86_64 += x86_64/amx_test
 TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c b/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c
new file mode 100644
index 000000000000..e9e4d7305bc1
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c
@@ -0,0 +1,84 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Basic SEV boot tests.
+ *
+ */
+#include <fcntl.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <stdio.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "linux/psp-sev.h"
+#include "sev.h"
+
+#define NR_SYNCS 1
+
+#define MSR_AMD64_SEV_BIT 1
+
+static void guest_run_loop(struct kvm_vcpu *vcpu)
+{
+	struct ucall uc;
+	int i;
+
+	for (i = 0; i <= NR_SYNCS; ++i) {
+		vcpu_run(vcpu);
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_SYNC:
+			continue;
+		case UCALL_DONE:
+			return;
+		case UCALL_ABORT:
+			REPORT_GUEST_ASSERT(uc);
+		default:
+			TEST_FAIL("Unexpected exit: %s",
+				  exit_reason_str(vcpu->run->exit_reason));
+		}
+	}
+}
+
+static void guest_assert_sev_enabled(void)
+{
+	uint64_t sev_status;
+
+	GUEST_ASSERT(this_cpu_has(X86_FEATURE_SEV));
+
+	sev_status = rdmsr(MSR_AMD64_SEV);
+	GUEST_ASSERT(sev_status & MSR_AMD64_SEV_BIT);
+}
+
+static void guest_sev_code(void)
+{
+	GUEST_SYNC(1);
+
+	guest_assert_sev_enabled();
+
+	GUEST_DONE();
+}
+
+static void test_sev(void *guest_code, uint64_t policy)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+
+	vm = vm_sev_create_with_one_vcpu(policy, guest_code, &vcpu);
+	TEST_ASSERT(vm, "vm_sev_create_with_one_vcpu() failed to create VM");
+
+	guest_run_loop(vcpu);
+
+	kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(is_kvm_sev_supported());
+
+	test_sev(guest_sev_code, SEV_POLICY_NO_DBG);
+	test_sev(guest_sev_code, 0);
+
+	return 0;
+}
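
[For readers cross-checking the guest assertions above against the
architecture: the SEV status MSR is C001_0131h, with bit 0 reporting SEV and
bit 1 reporting SEV-ES. A spelled-out version of the guest check, a sketch
using the raw encodings from the AMD APM rather than the selftest defines:]

#define MSR_AMD64_SEV_STATUS	0xc0010131	/* architectural MSR number */

static void guest_check_sev_raw(void)
{
	uint64_t status = rdmsr(MSR_AMD64_SEV_STATUS);

	GUEST_ASSERT(status & (1ULL << 0));	/* SEV enabled */
	/* (1ULL << 1) would indicate SEV-ES, not exercised by this test. */
}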