From patchwork Thu Nov 4 00:25:02 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12602137
Date: Thu, 4 Nov 2021 00:25:02 +0000
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>
Message-Id: <20211104002531.1176691-2-seanjc@google.com>
References: <20211104002531.1176691-1-seanjc@google.com>
Subject: [PATCH v5.5 01/30] KVM: Ensure local memslot copies operate on up-to-date arch-specific data
From: Sean Christopherson
To: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Paul Mackerras,
	Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Christian Borntraeger, Janosch Frank, Paolo Bonzini
Cc: James Morse, Alexandru Elisei, Suzuki K Poulose, Atish Patra,
	David Hildenbrand, Cornelia Huck, Claudio Imbrenda,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org,
	kvm@vger.kernel.org, kvm-ppc@vger.kernel.org,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, Ben Gardon, "Maciej S. Szmigiero"

When modifying memslots, snapshot the "old" memslot and copy it to the
"new" memslot's arch data after (re)acquiring slots_arch_lock.  x86 can
change a memslot's arch data while memslot updates are in-progress so
long as it holds slots_arch_lock, thus snapshotting a memslot without
holding the lock can result in the consumption of stale data.

Fixes: b10a038e84d1 ("KVM: mmu: Add slots_arch_lock for memslot arch fields")
Cc: stable@vger.kernel.org
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 47 ++++++++++++++++++++++++++++++---------------
 1 file changed, 31 insertions(+), 16 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3f6d450355f0..99e69375c4c9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1531,11 +1531,10 @@ static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
 
 static int kvm_set_memslot(struct kvm *kvm,
 			   const struct kvm_userspace_memory_region *mem,
-			   struct kvm_memory_slot *old,
 			   struct kvm_memory_slot *new, int as_id,
 			   enum kvm_mr_change change)
 {
-	struct kvm_memory_slot *slot;
+	struct kvm_memory_slot *slot, old;
 	struct kvm_memslots *slots;
 	int r;
 
@@ -1566,7 +1565,7 @@ static int kvm_set_memslot(struct kvm *kvm,
 		 * Note, the INVALID flag needs to be in the appropriate entry
 		 * in the freshly allocated memslots, not in @old or @new.
		 */
-		slot = id_to_memslot(slots, old->id);
+		slot = id_to_memslot(slots, new->id);
 		slot->flags |= KVM_MEMSLOT_INVALID;
 
 		/*
@@ -1597,6 +1596,26 @@ static int kvm_set_memslot(struct kvm *kvm,
 		kvm_copy_memslots(slots, __kvm_memslots(kvm, as_id));
 	}
 
+	/*
+	 * Make a full copy of the old memslot, the pointer will become stale
+	 * when the memslots are re-sorted by update_memslots(), and the old
+	 * memslot needs to be referenced after calling update_memslots(), e.g.
+	 * to free its resources and for arch specific behavior.  This needs to
+	 * happen *after* (re)acquiring slots_arch_lock.
+	 */
+	slot = id_to_memslot(slots, new->id);
+	if (slot) {
+		old = *slot;
+	} else {
+		WARN_ON_ONCE(change != KVM_MR_CREATE);
+		memset(&old, 0, sizeof(old));
+		old.id = new->id;
+		old.as_id = as_id;
+	}
+
+	/* Copy the arch-specific data, again after (re)acquiring slots_arch_lock. */
+	memcpy(&new->arch, &old.arch, sizeof(old.arch));
+
 	r = kvm_arch_prepare_memory_region(kvm, new, mem, change);
 	if (r)
 		goto out_slots;
@@ -1604,14 +1623,18 @@ static int kvm_set_memslot(struct kvm *kvm,
 	update_memslots(slots, new, change);
 	slots = install_new_memslots(kvm, as_id, slots);
 
-	kvm_arch_commit_memory_region(kvm, mem, old, new, change);
+	kvm_arch_commit_memory_region(kvm, mem, &old, new, change);
+
+	/* Free the old memslot's metadata.  Note, this is the full copy!!! */
+	if (change == KVM_MR_DELETE)
+		kvm_free_memslot(kvm, &old);
 
 	kvfree(slots);
 	return 0;
 
 out_slots:
 	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
-		slot = id_to_memslot(slots, old->id);
+		slot = id_to_memslot(slots, new->id);
 		slot->flags &= ~KVM_MEMSLOT_INVALID;
 		slots = install_new_memslots(kvm, as_id, slots);
 	} else {
@@ -1626,7 +1649,6 @@ static int kvm_delete_memslot(struct kvm *kvm,
 			      struct kvm_memory_slot *old, int as_id)
 {
 	struct kvm_memory_slot new;
-	int r;
 
 	if (!old->npages)
 		return -EINVAL;
@@ -1639,12 +1661,7 @@ static int kvm_delete_memslot(struct kvm *kvm,
 	 */
 	new.as_id = as_id;
 
-	r = kvm_set_memslot(kvm, mem, old, &new, as_id, KVM_MR_DELETE);
-	if (r)
-		return r;
-
-	kvm_free_memslot(kvm, old);
-	return 0;
+	return kvm_set_memslot(kvm, mem, &new, as_id, KVM_MR_DELETE);
 }
 
 /*
@@ -1718,7 +1735,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (!old.npages) {
 		change = KVM_MR_CREATE;
 		new.dirty_bitmap = NULL;
-		memset(&new.arch, 0, sizeof(new.arch));
 	} else { /* Modify an existing slot. */
 		if ((new.userspace_addr != old.userspace_addr) ||
 		    (new.npages != old.npages) ||
@@ -1732,9 +1748,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		else /* Nothing to change. */
 			return 0;
 
-		/* Copy dirty_bitmap and arch from the current memslot. */
+		/* Copy dirty_bitmap from the current memslot. */
 		new.dirty_bitmap = old.dirty_bitmap;
-		memcpy(&new.arch, &old.arch, sizeof(new.arch));
 	}
 
 	if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
@@ -1760,7 +1775,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 			bitmap_set(new.dirty_bitmap, 0, new.npages);
 	}
 
-	r = kvm_set_memslot(kvm, mem, &old, &new, as_id, change);
+	r = kvm_set_memslot(kvm, mem, &new, as_id, change);
 	if (r)
 		goto out_bitmap;
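The bug shape the patch above closes is a snapshot taken before reacquiring
the lock that guards part of the snapshotted data.  A minimal user-space
sketch of the same pattern, using pthreads and hypothetical names rather
than KVM code:

	#include <pthread.h>

	struct slot { int id; int arch; };	/* stand-in for struct kvm_memory_slot */

	static pthread_mutex_t arch_lock = PTHREAD_MUTEX_INITIALIZER; /* plays slots_arch_lock */
	static struct slot active_slot;		/* .arch may be rewritten by lock holders */

	/* Buggy: the copy happens before the lock is (re)acquired, so a writer
	 * that legitimately holds arch_lock can update active_slot.arch after
	 * this snapshot was taken -- the caller then consumes stale arch data. */
	struct slot snapshot_before_lock(void)
	{
		struct slot old = active_slot;	/* racy copy */

		pthread_mutex_lock(&arch_lock);
		/* ... update memslots using old.arch ... */
		pthread_mutex_unlock(&arch_lock);
		return old;
	}

	/* Fixed, mirroring the patch: snapshot only while holding the lock. */
	struct slot snapshot_under_lock(void)
	{
		struct slot old;

		pthread_mutex_lock(&arch_lock);
		old = active_slot;		/* .arch is guaranteed current here */
		/* ... update memslots using old.arch ... */
		pthread_mutex_unlock(&arch_lock);
		return old;
	}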
From patchwork Thu Nov 4 00:25:03 2021
X-Patchwork-Id: 12602139
Date: Thu, 4 Nov 2021 00:25:03 +0000
Message-Id: <20211104002531.1176691-3-seanjc@google.com>
Subject: [PATCH v5.5 02/30] KVM: Disallow user memslot with size that exceeds "unsigned long"
From: Sean Christopherson

Reject userspace memslots whose size exceeds the storage capacity of an
"unsigned long".  KVM's uAPI takes the size as u64 to support large slots
on 64-bit hosts, but does not account for the size being truncated on
32-bit hosts in various flows.  The access_ok() check on the userspace
virtual address in particular casts the size to "unsigned long" and will
check the wrong number of bytes.

KVM doesn't actually support slots whose size doesn't fit in an "unsigned
long", e.g. KVM's internal kvm_memory_slot.npages is an "unsigned long",
not a "u64", and misc arch specific code follows that behavior.

Fixes: fa3d315a4ce2 ("KVM: Validate userspace_addr of memslot when registered")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
Reviewed-by: Maciej S. Szmigiero
---
 virt/kvm/kvm_main.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 99e69375c4c9..83287730389f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1689,7 +1689,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	id = (u16)mem->slot;
 
 	/* General sanity checks */
-	if (mem->memory_size & (PAGE_SIZE - 1))
+	if ((mem->memory_size & (PAGE_SIZE - 1)) ||
+	    (mem->memory_size != (unsigned long)mem->memory_size))
 		return -EINVAL;
 	if (mem->guest_phys_addr & (PAGE_SIZE - 1))
 		return -EINVAL;
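The self-comparison idiom is easy to demonstrate in isolation.  A
standalone sketch (plain userspace C, not kernel code) of how a u64 size
silently truncates when cast to a 32-bit "unsigned long", which is exactly
what the new guard rejects; on a 64-bit build the comparison always passes,
matching the changelog's point that only 32-bit hosts are affected:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* 20 GiB + 4 KiB: page-aligned, but needs more than 32 bits. */
		uint64_t memory_size = (5ULL << 32) | 4096;

		/* On a 32-bit host the cast drops the high bits, so access_ok()
		 * would validate only 4096 bytes.  The patch's check catches the
		 * truncation before any such check runs. */
		if (memory_size != (unsigned long)memory_size)
			printf("rejected: size does not fit in unsigned long\n");
		else
			printf("accepted: size fits in unsigned long\n");
		return 0;
	}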
From patchwork Thu Nov 4 00:25:04 2021
X-Patchwork-Id: 12602141
Date: Thu, 4 Nov 2021 00:25:04 +0000
Message-Id: <20211104002531.1176691-4-seanjc@google.com>
Subject: [PATCH v5.5 03/30] KVM: Require total number of memslot pages to fit in an unsigned long
From: Sean Christopherson

Explicitly disallow creating more memslot pages than can fit in an
unsigned long; KVM doesn't correctly handle a total number of memslot
pages that doesn't fit in an unsigned long, and remedying that would be a
waste of time.

For a 64-bit kernel, this is a nop as memslots are not allowed to overlap
in the gfn address space.

With a 32-bit kernel, userspace can at most address 3gb of virtual memory,
whereas wrapping the total number of pages would require 4tb+ of guest
physical memory.  Even with x86's second address space for SMM, userspace
would need to alias all of guest memory more than one _thousand_ times.
And on older x86 hardware with MAXPHYADDR < 43, the guest couldn't
actually access any of those aliases even if userspace lied about
guest.MAXPHYADDR.

On s390 and arm64, this is a nop as they don't support 32-bit hosts.
On x86, practically speaking this is simply acknowledging reality as the
existing kvm_mmu_calculate_default_mmu_pages() assumes the total number
of pages fits in an "unsigned long".

On PPC, this is likely a nop as every flavor of PPC KVM assumes gfns (and
gpas!) fit in unsigned long.  arch/powerpc/kvm/book3s_32_mmu_host.c goes
a step further and fails the build if CONFIG_PTE_64BIT=y, which
presumably means that it doesn't support 64-bit physical addresses.

On MIPS, this is also likely a nop as the core MMU helpers assume gpas
fit in unsigned long, e.g. see kvm_mips_##name##_pte.

And finally, RISC-V is a "don't care" as it doesn't exist in any release,
i.e. there is no established ABI to break.

Signed-off-by: Sean Christopherson
Reviewed-by: Maciej S. Szmigiero
---
 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c      | 19 +++++++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 60a35d9fe259..d8e92d4a78d8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -551,6 +551,7 @@ struct kvm {
 	 */
 	struct mutex slots_arch_lock;
 	struct mm_struct *mm; /* userspace tied to this vm */
+	unsigned long nr_memslot_pages;
 	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 83287730389f..264c4b16520b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1623,6 +1623,15 @@ static int kvm_set_memslot(struct kvm *kvm,
 	update_memslots(slots, new, change);
 	slots = install_new_memslots(kvm, as_id, slots);
 
+	/*
+	 * Update the total number of memslot pages before calling the arch
+	 * hook so that architectures can consume the result directly.
+	 */
+	if (change == KVM_MR_DELETE)
+		kvm->nr_memslot_pages -= old.npages;
+	else if (change == KVM_MR_CREATE)
+		kvm->nr_memslot_pages += new->npages;
+
 	kvm_arch_commit_memory_region(kvm, mem, &old, new, change);
 
 	/* Free the old memslot's metadata.  Note, this is the full copy!!! */
@@ -1653,6 +1662,9 @@ static int kvm_delete_memslot(struct kvm *kvm,
 	if (!old->npages)
 		return -EINVAL;
 
+	if (WARN_ON_ONCE(kvm->nr_memslot_pages < old->npages))
+		return -EIO;
+
 	memset(&new, 0, sizeof(new));
 	new.id = old->id;
 	/*
@@ -1736,6 +1748,13 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (!old.npages) {
 		change = KVM_MR_CREATE;
 		new.dirty_bitmap = NULL;
+
+		/*
+		 * To simplify KVM internals, the total number of pages across
+		 * all memslots must fit in an unsigned long.
+		 */
+		if ((kvm->nr_memslot_pages + new.npages) < kvm->nr_memslot_pages)
+			return -EINVAL;
 	} else { /* Modify an existing slot. */
 		if ((new.userspace_addr != old.userspace_addr) ||
 		    (new.npages != old.npages) ||
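The `a + b < a` test the patch adds in the CREATE path is the standard
unsigned wrap-around check: unsigned addition in C is well defined modulo
2^N, so a wrapped sum always compares smaller than either addend.  A
standalone illustration (plain C, not kernel code):

	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned long nr_memslot_pages = ULONG_MAX - 10;	/* running total */
		unsigned long new_npages = 100;				/* pages being added */

		/* Mirrors the patch: if the sum wraps, it compares smaller
		 * than the original total and the request must be rejected. */
		if (nr_memslot_pages + new_npages < nr_memslot_pages)
			printf("rejected: total memslot pages would overflow\n");
		else
			printf("accepted\n");
		return 0;
	}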
From patchwork Thu Nov 4 00:25:05 2021
X-Patchwork-Id: 12602143
Date: Thu, 4 Nov 2021 00:25:05 +0000
Message-Id: <20211104002531.1176691-5-seanjc@google.com>
Subject: [PATCH v5.5 04/30] KVM: Open code kvm_delete_memslot() into its only caller
From: Sean Christopherson

Fold kvm_delete_memslot() into __kvm_set_memory_region() to free up the
"kvm_delete_memslot()" name for use in a future helper.  The delete logic
isn't so complex/long that it truly needs a helper, and it will be
simplified a wee bit further in upcoming commits.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Maciej S. Szmigiero
---
 virt/kvm/kvm_main.c | 42 +++++++++++++++++-------------------------
 1 file changed, 17 insertions(+), 25 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 264c4b16520b..6171ddb3e31c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1653,29 +1653,6 @@ static int kvm_set_memslot(struct kvm *kvm,
 	return r;
 }
 
-static int kvm_delete_memslot(struct kvm *kvm,
-			      const struct kvm_userspace_memory_region *mem,
-			      struct kvm_memory_slot *old, int as_id)
-{
-	struct kvm_memory_slot new;
-
-	if (!old->npages)
-		return -EINVAL;
-
-	if (WARN_ON_ONCE(kvm->nr_memslot_pages < old->npages))
-		return -EIO;
-
-	memset(&new, 0, sizeof(new));
-	new.id = old->id;
-	/*
-	 * This is only for debugging purpose; it should never be referenced
-	 * for a removed memslot.
-	 */
-	new.as_id = as_id;
-
-	return kvm_set_memslot(kvm, mem, &new, as_id, KVM_MR_DELETE);
-}
-
 /*
  * Allocate some memory and give it an address in the guest physical address
  * space.
@@ -1732,8 +1709,23 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		old.id = id;
 	}
 
-	if (!mem->memory_size)
-		return kvm_delete_memslot(kvm, mem, &old, as_id);
+	if (!mem->memory_size) {
+		if (!old.npages)
+			return -EINVAL;
+
+		if (WARN_ON_ONCE(kvm->nr_memslot_pages < old.npages))
+			return -EIO;
+
+		memset(&new, 0, sizeof(new));
+		new.id = id;
+		/*
+		 * This is only for debugging purpose; it should never be
+		 * referenced for a removed memslot.
+		 */
+		new.as_id = as_id;
+
+		return kvm_set_memslot(kvm, mem, &new, as_id, KVM_MR_DELETE);
+	}
 
 	new.as_id = as_id;
 	new.id = id;
From patchwork Thu Nov 4 00:25:06 2021
X-Patchwork-Id: 12602163
Date: Thu, 4 Nov 2021 00:25:06 +0000
Message-Id: <20211104002531.1176691-6-seanjc@google.com>
Subject: [PATCH v5.5 05/30] KVM: Resync only arch fields when slots_arch_lock gets reacquired
From: Sean Christopherson

From: Maciej S. Szmigiero

There is no need to copy the whole memslot data after releasing
slots_arch_lock for a moment to install a temporary memslots copy in
kvm_set_memslot(), since this lock only protects the arch field of each
memslot.  Just resync this particular field after reacquiring
slots_arch_lock.

Note, this also eliminates the need to manually clear the INVALID flag
when restoring memslots; the "setting" of the INVALID flag was an
unwanted side effect of copying the entire memslots.

Signed-off-by: Maciej S. Szmigiero
[sean: tweak shortlog, note INVALID flag in changelog, revert comment]
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 45 +++++++++++++++++++++++++--------------------
 1 file changed, 25 insertions(+), 20 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6171ddb3e31c..e5c2d10f6111 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1500,12 +1500,6 @@ static size_t kvm_memslots_size(int slots)
 	       (sizeof(struct kvm_memory_slot) * slots);
 }
 
-static void kvm_copy_memslots(struct kvm_memslots *to,
-			      struct kvm_memslots *from)
-{
-	memcpy(to, from, kvm_memslots_size(from->used_slots));
-}
-
 /*
  * Note, at a minimum, the current number of used slots must be allocated, even
  * when deleting a memslot, as we need a complete duplicate of the memslots for
@@ -1524,11 +1518,22 @@ static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
 
 	slots = kvzalloc(new_size, GFP_KERNEL_ACCOUNT);
 	if (likely(slots))
-		kvm_copy_memslots(slots, old);
+		memcpy(slots, old, kvm_memslots_size(old->used_slots));
 
 	return slots;
 }
 
+static void kvm_copy_memslots_arch(struct kvm_memslots *to,
+				   struct kvm_memslots *from)
+{
+	int i;
+
+	WARN_ON_ONCE(to->used_slots != from->used_slots);
+
+	for (i = 0; i < from->used_slots; i++)
+		to->memslots[i].arch = from->memslots[i].arch;
+}
+
 static int kvm_set_memslot(struct kvm *kvm,
 			   const struct kvm_userspace_memory_region *mem,
 			   struct kvm_memory_slot *new, int as_id,
@@ -1569,9 +1574,10 @@ static int kvm_set_memslot(struct kvm *kvm,
 		slot->flags |= KVM_MEMSLOT_INVALID;
 
 		/*
-		 * We can re-use the memory from the old memslots.
-		 * It will be overwritten with a copy of the new memslots
-		 * after reacquiring the slots_arch_lock below.
+		 * We can re-use the old memslots, the only difference from the
+		 * newly installed memslots is the invalid flag, which will get
+		 * dropped by update_memslots anyway.  We'll also revert to the
+		 * old memslots if preparing the new memory region fails.
 		 */
 		slots = install_new_memslots(kvm, as_id, slots);
 
 		/* From this point no new shadow pages pointing to a deleted,
 		 * or moved, memslot will be created.
@@ -1588,12 +1594,14 @@ static int kvm_set_memslot(struct kvm *kvm,
 		mutex_lock(&kvm->slots_arch_lock);
 
 		/*
-		 * The arch-specific fields of the memslots could have changed
-		 * between releasing the slots_arch_lock in
-		 * install_new_memslots and here, so get a fresh copy of the
-		 * slots.
+		 * The arch-specific fields of the now-active memslots could
+		 * have been modified between releasing slots_arch_lock in
+		 * install_new_memslots and re-acquiring slots_arch_lock above.
+		 * Copy them to the inactive memslots.  Arch code is required
+		 * to retrieve memslots *after* acquiring slots_arch_lock, thus
+		 * the active memslots are guaranteed to be fresh.
		 */
-		kvm_copy_memslots(slots, __kvm_memslots(kvm, as_id));
+		kvm_copy_memslots_arch(slots, __kvm_memslots(kvm, as_id));
 	}
 
 	/*
@@ -1642,13 +1650,10 @@ static int kvm_set_memslot(struct kvm *kvm,
 	return 0;
 
 out_slots:
-	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
-		slot = id_to_memslot(slots, new->id);
-		slot->flags &= ~KVM_MEMSLOT_INVALID;
+	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
 		slots = install_new_memslots(kvm, as_id, slots);
-	} else {
+	else
 		mutex_unlock(&kvm->slots_arch_lock);
-	}
 	kvfree(slots);
 	return r;
 }
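The difference between the old full resync and the new arch-only resync,
including why the full copy had the INVALID-flag side effect the changelog
mentions, can be sketched with stand-in types (not the kernel's structures):

	#include <string.h>

	struct slot {
		unsigned int flags;	/* e.g. an INVALID bit */
		void *arch;		/* the only field slots_arch_lock protects */
	};

	/* Old approach: wholesale copy.  Correct for .arch, but it also
	 * copies .flags, which is how the INVALID flag was being (re)set
	 * as an unwanted side effect of the resync. */
	static void resync_full(struct slot *to, const struct slot *from, int n)
	{
		memcpy(to, from, n * sizeof(*to));
	}

	/* New approach: copy only what the lock protects; flags untouched. */
	static void resync_arch_only(struct slot *to, const struct slot *from, int n)
	{
		int i;

		for (i = 0; i < n; i++)
			to[i].arch = from[i].arch;
	}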
From patchwork Thu Nov 4 00:25:07 2021
X-Patchwork-Id: 12602165
Date: Thu, 4 Nov 2021 00:25:07 +0000
Message-Id: <20211104002531.1176691-7-seanjc@google.com>
Subject: [PATCH v5.5 06/30] KVM: Use "new" memslot's address space ID instead of dedicated param
From: Sean Christopherson

Now that the address space ID is stored in every slot, including fake
slots used for deletion, use the slot's as_id instead of passing in the
redundant information as a param to kvm_set_memslot().  This will greatly
simplify future memslot work by avoiding passing a large number of
variables around purely to honor @as_id.

Drop a comment in the DELETE path about new->as_id being provided purely
for debug, as that's now a lie.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Maciej S. Szmigiero
---
 virt/kvm/kvm_main.c | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e5c2d10f6111..39a64e02a43a 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1536,7 +1536,7 @@ static void kvm_copy_memslots_arch(struct kvm_memslots *to,
 
 static int kvm_set_memslot(struct kvm *kvm,
 			   const struct kvm_userspace_memory_region *mem,
-			   struct kvm_memory_slot *new, int as_id,
+			   struct kvm_memory_slot *new,
 			   enum kvm_mr_change change)
 {
 	struct kvm_memory_slot *slot, old;
@@ -1559,7 +1559,7 @@ static int kvm_set_memslot(struct kvm *kvm,
 	 */
 	mutex_lock(&kvm->slots_arch_lock);
 
-	slots = kvm_dup_memslots(__kvm_memslots(kvm, as_id), change);
+	slots = kvm_dup_memslots(__kvm_memslots(kvm, new->as_id), change);
 	if (!slots) {
 		mutex_unlock(&kvm->slots_arch_lock);
 		return -ENOMEM;
@@ -1579,7 +1579,7 @@ static int kvm_set_memslot(struct kvm *kvm,
 	 * dropped by update_memslots anyway.  We'll also revert to the
 	 * old memslots if preparing the new memory region fails.
 	 */
-	slots = install_new_memslots(kvm, as_id, slots);
+	slots = install_new_memslots(kvm, new->as_id, slots);
 
 	/* From this point no new shadow pages pointing to a deleted,
 	 * or moved, memslot will be created.
@@ -1601,7 +1601,7 @@ static int kvm_set_memslot(struct kvm *kvm,
 	 * to retrieve memslots *after* acquiring slots_arch_lock, thus
 	 * the active memslots are guaranteed to be fresh.
 	 */
-	kvm_copy_memslots_arch(slots, __kvm_memslots(kvm, as_id));
+	kvm_copy_memslots_arch(slots, __kvm_memslots(kvm, new->as_id));
 }
 
 /*
@@ -1618,7 +1618,7 @@ static int kvm_set_memslot(struct kvm *kvm,
 		WARN_ON_ONCE(change != KVM_MR_CREATE);
 		memset(&old, 0, sizeof(old));
 		old.id = new->id;
-		old.as_id = as_id;
+		old.as_id = new->as_id;
 	}
 
 	/* Copy the arch-specific data, again after (re)acquiring slots_arch_lock. */
@@ -1629,7 +1629,7 @@ static int kvm_set_memslot(struct kvm *kvm,
 		goto out_slots;
 
 	update_memslots(slots, new, change);
-	slots = install_new_memslots(kvm, as_id, slots);
+	slots = install_new_memslots(kvm, new->as_id, slots);
 
 	/*
 	 * Update the total number of memslot pages before calling the arch
@@ -1651,7 +1651,7 @@ static int kvm_set_memslot(struct kvm *kvm,
 
 out_slots:
 	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
-		slots = install_new_memslots(kvm, as_id, slots);
+		slots = install_new_memslots(kvm, new->as_id, slots);
 	else
 		mutex_unlock(&kvm->slots_arch_lock);
 	kvfree(slots);
@@ -1723,13 +1723,9 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		memset(&new, 0, sizeof(new));
 		new.id = id;
-		/*
-		 * This is only for debugging purpose; it should never be
-		 * referenced for a removed memslot.
-		 */
 		new.as_id = as_id;
 
-		return kvm_set_memslot(kvm, mem, &new, as_id, KVM_MR_DELETE);
+		return kvm_set_memslot(kvm, mem, &new, KVM_MR_DELETE);
 	}
 
 	new.as_id = as_id;
@@ -1792,7 +1788,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 			bitmap_set(new.dirty_bitmap, 0, new.npages);
 	}
 
-	r = kvm_set_memslot(kvm, mem, &new, as_id, change);
+	r = kvm_set_memslot(kvm, mem, &new, change);
 	if (r)
 		goto out_bitmap;
From patchwork Thu Nov 4 00:25:08 2021
X-Patchwork-Id: 12602167
Date: Thu, 4 Nov 2021 00:25:08 +0000
Message-Id: <20211104002531.1176691-8-seanjc@google.com>
Subject: [PATCH v5.5 07/30] KVM: Let/force architectures to deal with arch specific memslot data
From: Sean Christopherson

Pass the "old" slot to kvm_arch_prepare_memory_region() and force arch
code to handle propagating arch specific data from "new" to "old" when
necessary.  This is a baby step towards dynamically allocating "new" from
the get go, and is a (very) minor performance boost on x86 due to not
unnecessarily copying arch data.

For PPC HV, copy the rmap in the !CREATE and !DELETE paths, i.e. for MOVE
and FLAGS_ONLY.  This is functionally a nop as the previous behavior
would overwrite the pointer for CREATE, and eventually discard/ignore it
for DELETE.

For x86, copy the arch data only for FLAGS_ONLY changes.  Unlike PPC HV,
x86 needs to reallocate arch data in the MOVE case as the size of x86's
allocations depends on the alignment of the memslot's gfn.

Opportunistically tweak kvm_arch_prepare_memory_region()'s param order to
match the "commit" prototype.
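The per-arch propagation rule reads naturally as a three-way switch on the
change type.  A condensed sketch of the PPC HV variant described above,
with hypothetical stand-in types (the real code operates on
kvm_memory_slot and uses vzalloc()/-ENOMEM):

	#include <stdlib.h>

	enum mr_change { MR_CREATE, MR_DELETE, MR_MOVE, MR_FLAGS_ONLY };

	struct slot_arch { unsigned long *rmap; };

	static int prepare_region(const struct slot_arch *old, struct slot_arch *new,
				  unsigned long npages, enum mr_change change)
	{
		if (change == MR_CREATE) {
			/* Fresh slot: allocate new reverse-map storage. */
			new->rmap = calloc(npages, sizeof(*new->rmap));
			if (!new->rmap)
				return -1;
		} else if (change != MR_DELETE) {
			/* MOVE/FLAGS_ONLY: PPC HV's rmap doesn't depend on the
			 * slot's gfn alignment, so the old allocation is reused.
			 * x86 instead reallocates on MOVE for exactly that
			 * reason. */
			new->rmap = old->rmap;
		}
		return 0;
	}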
Signed-off-by: Sean Christopherson
---
 arch/arm64/kvm/mmu.c               |  7 ++++---
 arch/mips/kvm/mips.c               |  3 ++-
 arch/powerpc/include/asm/kvm_ppc.h | 18 ++++++++++--------
 arch/powerpc/kvm/book3s.c          | 12 ++++++------
 arch/powerpc/kvm/book3s_hv.c       | 17 ++++++++++-------
 arch/powerpc/kvm/book3s_pr.c       | 17 +++++++++--------
 arch/powerpc/kvm/booke.c           |  5 +++--
 arch/powerpc/kvm/powerpc.c         |  5 +++--
 arch/s390/kvm/kvm-s390.c           |  3 ++-
 arch/x86/kvm/x86.c                 | 15 +++++++++++----
 include/linux/kvm_host.h           |  3 ++-
 virt/kvm/kvm_main.c                |  5 +----
 12 files changed, 63 insertions(+), 47 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 69bd1732a299..cc41eadfbbf4 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1486,8 +1486,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				   struct kvm_memory_slot *memslot,
 				   const struct kvm_userspace_memory_region *mem,
+				   const struct kvm_memory_slot *old,
+				   struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
 	hva_t hva = mem->userspace_addr;
@@ -1502,7 +1503,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	 * Prevent userspace from creating a memory region outside of the IPA
 	 * space addressable by the KVM guest IPA space.
 	 */
-	if ((memslot->base_gfn + memslot->npages) > (kvm_phys_size(kvm) >> PAGE_SHIFT))
+	if ((new->base_gfn + new->npages) > (kvm_phys_size(kvm) >> PAGE_SHIFT))
 		return -EFAULT;
 
 	mmap_read_lock(current->mm);
@@ -1536,7 +1537,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 
 		if (vma->vm_flags & VM_PFNMAP) {
 			/* IO region dirty page logging not allowed */
-			if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES) {
+			if (new->flags & KVM_MEM_LOG_DIRTY_PAGES) {
 				ret = -EINVAL;
 				break;
 			}
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 562aa878b266..8c94cd4093af 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -233,8 +233,9 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				   struct kvm_memory_slot *memslot,
 				   const struct kvm_userspace_memory_region *mem,
+				   const struct kvm_memory_slot *old,
+				   struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
 	return 0;
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 671fbd1a765e..b01760dd1374 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -200,12 +200,13 @@ extern void kvmppc_core_destroy_vm(struct kvm *kvm);
 extern void kvmppc_core_free_memslot(struct kvm *kvm,
 				     struct kvm_memory_slot *slot);
 extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change);
-extern void kvmppc_core_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
+				struct kvm_memory_slot *new,
+				enum kvm_mr_change change);
+extern void kvmppc_core_commit_memory_region(struct kvm *kvm,
+				const struct kvm_userspace_memory_region *mem,
+				struct kvm_memory_slot *old,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change);
 extern int kvm_vm_ioctl_get_smmu_info(struct kvm *kvm,
@@ -274,12 +275,13 @@ struct kvmppc_ops {
 	int (*get_dirty_log)(struct kvm *kvm, struct kvm_dirty_log *log);
 	void (*flush_memslot)(struct kvm *kvm, struct kvm_memory_slot *memslot);
 	int (*prepare_memory_region)(struct kvm *kvm,
-				     struct kvm_memory_slot *memslot,
-				     const struct kvm_userspace_memory_region *mem,
-				     enum kvm_mr_change change);
-	void (*commit_memory_region)(struct kvm *kvm,
 				     const struct kvm_userspace_memory_region *mem,
 				     const struct kvm_memory_slot *old,
+				     struct kvm_memory_slot *new,
+				     enum kvm_mr_change change);
+	void (*commit_memory_region)(struct kvm *kvm,
+				     const struct kvm_userspace_memory_region *mem,
+				     struct kvm_memory_slot *old,
 				     const struct kvm_memory_slot *new,
 				     enum kvm_mr_change change);
 	bool (*unmap_gfn_range)(struct kvm *kvm, struct kvm_gfn_range *range);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index b785f6772391..8250e8308674 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -847,17 +847,17 @@ void kvmppc_core_flush_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
 }
 
 int kvmppc_core_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				const struct kvm_userspace_memory_region *mem,
+				const struct kvm_memory_slot *old,
+				struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
-	return kvm->arch.kvm_ops->prepare_memory_region(kvm, memslot, mem,
-							change);
+	return kvm->arch.kvm_ops->prepare_memory_region(kvm, mem, old, new, change);
 }
 
 void kvmppc_core_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old,
+				struct kvm_memory_slot *old,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 2acb1c96cfaf..5bf763a74c22 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4828,17 +4828,20 @@ static void kvmppc_core_free_memslot_hv(struct kvm_memory_slot *slot)
 }
 
 static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
-					struct kvm_memory_slot *slot,
-					const struct kvm_userspace_memory_region *mem,
-					enum kvm_mr_change change)
+					const struct kvm_userspace_memory_region *mem,
+					const struct kvm_memory_slot *old,
+					struct kvm_memory_slot *new,
+					enum kvm_mr_change change)
 {
 	unsigned long npages = mem->memory_size >> PAGE_SHIFT;
 
 	if (change == KVM_MR_CREATE) {
-		slot->arch.rmap = vzalloc(array_size(npages,
-					  sizeof(*slot->arch.rmap)));
-		if (!slot->arch.rmap)
+		new->arch.rmap = vzalloc(array_size(npages,
+					 sizeof(*new->arch.rmap)));
+		if (!new->arch.rmap)
 			return -ENOMEM;
+	} else if (change != KVM_MR_DELETE) {
+		new->arch.rmap = old->arch.rmap;
 	}
 
 	return 0;
@@ -4846,7 +4849,7 @@ static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
 
 static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old,
+				struct kvm_memory_slot *old,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 6bc9425acb32..58d3ae4605c0 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1899,16 +1899,17 @@ static void kvmppc_core_flush_memslot_pr(struct kvm *kvm,
 }
 
 static int kvmppc_core_prepare_memory_region_pr(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
-{
-	return 0;
-}
-
-static void kvmppc_core_commit_memory_region_pr(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
+				struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
+{
+	return 0;
+}
+
+static void kvmppc_core_commit_memory_region_pr(struct kvm *kvm,
+				const struct kvm_userspace_memory_region *mem,
+				struct kvm_memory_slot *old,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 977801c83aff..fcf9c1dbd442 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1807,8 +1807,9 @@ void kvmppc_core_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 }
 
 int kvmppc_core_prepare_memory_region(struct kvm *kvm,
-				      struct kvm_memory_slot *memslot,
 				      const struct kvm_userspace_memory_region *mem,
+				      const struct kvm_memory_slot *old,
+				      struct kvm_memory_slot *new,
 				      enum kvm_mr_change change)
 {
 	return 0;
@@ -1816,7 +1817,7 @@ int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 
 void kvmppc_core_commit_memory_region(struct kvm *kvm,
 				      const struct kvm_userspace_memory_region *mem,
-				      const struct kvm_memory_slot *old,
+				      struct kvm_memory_slot *old,
 				      const struct kvm_memory_slot *new,
 				      enum kvm_mr_change change)
 {
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 8ab90ce8738f..ca28e7acaae8 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -706,11 +706,12 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				   struct kvm_memory_slot *memslot,
 				   const struct kvm_userspace_memory_region *mem,
+				   const struct kvm_memory_slot *old,
+				   struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
-	return kvmppc_core_prepare_memory_region(kvm, memslot, mem, change);
+	return kvmppc_core_prepare_memory_region(kvm, mem, old, new, change);
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 6a6dd5e1daf6..d766d764d24c 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -5016,8 +5016,9 @@ vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
 
 /* Section: memory related */
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				   struct kvm_memory_slot *memslot,
 				   const struct kvm_userspace_memory_region *mem,
+				   const struct kvm_memory_slot *old,
+				   struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
 	/* A few sanity checks. We can have memory slots which have to be
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ac83d873d65b..aa2abca47af0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11727,13 +11727,20 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				   struct kvm_memory_slot *memslot,
-				   const struct kvm_userspace_memory_region *mem,
-				   enum kvm_mr_change change)
+				   const struct kvm_userspace_memory_region *mem,
+				   const struct kvm_memory_slot *old,
+				   struct kvm_memory_slot *new,
+				   enum kvm_mr_change change)
 {
 	if (change == KVM_MR_CREATE || change == KVM_MR_MOVE)
-		return kvm_alloc_memslot_metadata(kvm, memslot,
+		return kvm_alloc_memslot_metadata(kvm, new,
 						  mem->memory_size >> PAGE_SHIFT);
+
+	if (change == KVM_MR_FLAGS_ONLY)
+		memcpy(&new->arch, &old->arch, sizeof(old->arch));
+	else if (WARN_ON_ONCE(change != KVM_MR_DELETE))
+		return -EIO;
+
 	return 0;
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d8e92d4a78d8..f8e79cf7584f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -826,8 +826,9 @@ int __kvm_set_memory_region(struct kvm *kvm,
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot);
 void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen);
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
 				const struct kvm_userspace_memory_region *mem,
+				const struct kvm_memory_slot *old,
+				struct kvm_memory_slot *new,
 				enum kvm_mr_change change);
 void kvm_arch_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 39a64e02a43a..389243120435 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1621,10 +1621,7 @@ static int kvm_set_memslot(struct kvm *kvm,
 		old.as_id = new->as_id;
 	}
 
-	/* Copy the arch-specific data, again after (re)acquiring slots_arch_lock. */
-	memcpy(&new->arch, &old.arch, sizeof(old.arch));
-
-	r = kvm_arch_prepare_memory_region(kvm, new, mem, change);
+	r = kvm_arch_prepare_memory_region(kvm, mem, &old, new, change);
 	if (r)
 		goto out_slots;
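Taken together, the patch moves the old-to-new hand-off out of common code
and into a per-change-type decision in each architecture. As an illustration
only (a stand-alone user-space model, not kernel code; the struct, enum, and
malloc() below are stand-ins for the kernel types and x86's metadata
allocation), the resulting contract looks like this:

#include <assert.h>
#include <stdlib.h>

enum mr_change { MR_CREATE, MR_DELETE, MR_MOVE, MR_FLAGS_ONLY };

struct memslot {
	unsigned long base_gfn;
	unsigned long npages;
	void *arch_data;		/* stands in for slot->arch */
};

/* Arch code now owns the decision: (re)allocate, inherit, or do nothing. */
static int prepare_region(const struct memslot *old, struct memslot *new,
			  enum mr_change change)
{
	if (change == MR_CREATE || change == MR_MOVE) {
		/* x86-style: sizing depends on the new gfn, so (re)allocate. */
		new->arch_data = malloc(new->npages);
		return new->arch_data ? 0 : -1;
	}
	if (change == MR_FLAGS_ONLY)
		/* gfn/size are unchanged, so the old arch data stays valid. */
		new->arch_data = old->arch_data;
	/* MR_DELETE: nothing to propagate. */
	return 0;
}

int main(void)
{
	struct memslot old = { .base_gfn = 0x100, .npages = 16 };
	struct memslot new;

	assert(prepare_region(NULL, &old, MR_CREATE) == 0);
	new = old;
	assert(prepare_region(&old, &new, MR_FLAGS_ONLY) == 0);
	assert(new.arch_data == old.arch_data);	/* inherited, not reallocated */
	free(old.arch_data);
	return 0;
}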
From patchwork Thu Nov 4 00:25:09 2021
Subject: [PATCH v5.5 08/30] KVM: arm64: Use "new" memslot instead of userspace memory region
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:09 +0000
Message-Id: <20211104002531.1176691-9-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>

Get the slot ID, hva, etc... from the "new" memslot instead of the
userspace memory region when preparing/committing a memory region. This
will allow a future commit to drop @mem from the prepare/commit hooks
once all architectures convert to using "new".

Opportunistically wait to get the hva begin+end until after filtering out
the DELETE case in anticipation of a future commit passing NULL for @new
when deleting a memslot.

Signed-off-by: Sean Christopherson
Reviewed-by: Reiji Watanabe
---
 arch/arm64/kvm/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index cc41eadfbbf4..21213cba7c47 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1473,14 +1473,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 * allocated dirty_bitmap[], dirty pages will be tracked while the
 	 * memory slot is write protected.
 	 */
-	if (change != KVM_MR_DELETE && mem->flags & KVM_MEM_LOG_DIRTY_PAGES) {
+	if (change != KVM_MR_DELETE && new->flags & KVM_MEM_LOG_DIRTY_PAGES) {
 		/*
 		 * If we're with initial-all-set, we don't need to write
 		 * protect any pages because they're all reported as dirty.
 		 * Huge pages and normal pages will be write protect gradually.
 		 */
 		if (!kvm_dirty_log_manual_protect_and_init_set(kvm)) {
-			kvm_mmu_wp_memory_region(kvm, mem->slot);
+			kvm_mmu_wp_memory_region(kvm, new->id);
 		}
 	}
 }
@@ -1491,8 +1491,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
-	hva_t hva = mem->userspace_addr;
-	hva_t reg_end = hva + mem->memory_size;
+	hva_t hva, reg_end;
 	int ret = 0;
 
 	if (change != KVM_MR_CREATE && change != KVM_MR_MOVE &&
@@ -1506,6 +1505,9 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	if ((new->base_gfn + new->npages) > (kvm_phys_size(kvm) >> PAGE_SHIFT))
 		return -EFAULT;
 
+	hva = new->userspace_addr;
+	reg_end = hva + (new->npages << PAGE_SHIFT);
+
 	mmap_read_lock(current->mm);
 	/*
 	 * A memory region could potentially cover multiple VMAs, and any holes
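For reference, the deferred computation is exact rather than approximate:
common KVM derives new->userspace_addr and new->npages from the userspace
region, so the memslot-built hva range is identical to the old one. A
stand-alone model of the arithmetic (illustration only; a PAGE_SHIFT of 12
is assumed):

#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

struct user_region { uint64_t userspace_addr, memory_size; };
struct memslot     { uint64_t userspace_addr, npages; };

int main(void)
{
	struct user_region mem = { .userspace_addr = 0x7f0000000000ull,
				   .memory_size   = 2ull << 20 };	/* 2 MiB */
	/* What common KVM code stores in the "new" memslot. */
	struct memslot new = { .userspace_addr = mem.userspace_addr,
			       .npages = mem.memory_size >> PAGE_SHIFT };

	/* Post-patch computation, from the memslot... */
	uint64_t hva = new.userspace_addr;
	uint64_t reg_end = hva + (new.npages << PAGE_SHIFT);

	/* ...matches the pre-patch computation, from the region. */
	assert(hva == mem.userspace_addr);
	assert(reg_end == mem.userspace_addr + mem.memory_size);
	return 0;
}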
From patchwork Thu Nov 4 00:25:10 2021
Subject: [PATCH v5.5 09/30] KVM: MIPS: Drop pr_debug from memslot commit to avoid using "mem"
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:10 +0000
Message-Id: <20211104002531.1176691-10-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>

Remove an old (circa 2012) kvm_debug call in kvm_arch_commit_memory_region()
that prints basic information when committing a memslot change. The primary
motivation for removing the kvm_debug is to avoid using @mem, the user
memory region, so that said param can be removed.

Alternatively, the debug message could be converted to use @new, but that
would require synthesizing select state to play nice with the DELETE case,
which will pass NULL for @new in the future. And there's no argument to be
had for dumping generic information in an arch callback, i.e. if there's a
good reason for the debug message, then it belongs in common KVM code where
all architectures can benefit.
Signed-off-by: Sean Christopherson
---
 arch/mips/kvm/mips.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 8c94cd4093af..b7aa8fa4a5fb 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -249,10 +249,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 {
 	int needs_flush;
 
-	kvm_debug("%s: kvm: %p slot: %d, GPA: %llx, size: %llx, QVA: %llx\n",
-		  __func__, kvm, mem->slot, mem->guest_phys_addr,
-		  mem->memory_size, mem->userspace_addr);
-
 	/*
 	 * If dirty page logging is enabled, write protect all pages in the slot
 	 * ready for dirty logging.

From patchwork Thu Nov 4 00:25:11 2021
Subject: [PATCH v5.5 10/30] KVM: PPC: Avoid referencing userspace memory region in memslot updates
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:11 +0000
Message-Id: <20211104002531.1176691-11-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>

For PPC HV, get the number of pages directly from the new memslot instead
of computing the same from the userspace memory region, and explicitly
check for !DELETE instead of inferring the same when toggling mmio_update.
The motivation for these changes is to avoid referencing the @mem param so
that it can be dropped in a future commit.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/powerpc/include/asm/kvm_ppc.h |  4 ----
 arch/powerpc/kvm/book3s.c          |  6 ++----
 arch/powerpc/kvm/book3s_hv.c       | 12 +++---------
 arch/powerpc/kvm/book3s_pr.c       |  2 --
 arch/powerpc/kvm/booke.c           |  2 --
 arch/powerpc/kvm/powerpc.c         |  4 ++--
 6 files changed, 7 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index b01760dd1374..935c58dc38c4 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -200,12 +200,10 @@ extern void kvmppc_core_destroy_vm(struct kvm *kvm);
 extern void kvmppc_core_free_memslot(struct kvm *kvm,
 				     struct kvm_memory_slot *slot);
 extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
 				struct kvm_memory_slot *new,
 				enum kvm_mr_change change);
 extern void kvmppc_core_commit_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
 				struct kvm_memory_slot *old,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change);
@@ -275,12 +273,10 @@ struct kvmppc_ops {
 	int (*get_dirty_log)(struct kvm *kvm, struct kvm_dirty_log *log);
 	void (*flush_memslot)(struct kvm *kvm, struct kvm_memory_slot *memslot);
 	int (*prepare_memory_region)(struct kvm *kvm,
-				     const struct kvm_userspace_memory_region *mem,
 				     const struct kvm_memory_slot *old,
 				     struct kvm_memory_slot *new,
 				     enum kvm_mr_change change);
 	void (*commit_memory_region)(struct kvm *kvm,
-				     const struct kvm_userspace_memory_region *mem,
 				     struct kvm_memory_slot *old,
 				     const struct kvm_memory_slot *new,
 				     enum kvm_mr_change change);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 8250e8308674..6d525285dbe8 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -847,21 +847,19 @@ void kvmppc_core_flush_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
 }
 
 int kvmppc_core_prepare_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
 				struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
-	return kvm->arch.kvm_ops->prepare_memory_region(kvm, mem, old, new, change);
+	return kvm->arch.kvm_ops->prepare_memory_region(kvm, old, new, change);
 }
 
 void kvmppc_core_commit_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
 				struct kvm_memory_slot *old,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
-	kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old, new, change);
+	kvm->arch.kvm_ops->commit_memory_region(kvm, old, new, change);
 }
 
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 5bf763a74c22..4d40c1867be5 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4828,15 +4828,12 @@ static void kvmppc_core_free_memslot_hv(struct kvm_memory_slot *slot)
 }
 
 static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
-					const struct kvm_userspace_memory_region *mem,
 					const struct kvm_memory_slot *old,
 					struct kvm_memory_slot *new,
 					enum kvm_mr_change change)
 {
-	unsigned long npages = mem->memory_size >> PAGE_SHIFT;
-
 	if (change == KVM_MR_CREATE) {
-		new->arch.rmap = vzalloc(array_size(npages,
+		new->arch.rmap = vzalloc(array_size(new->npages,
 					 sizeof(*new->arch.rmap)));
 		if (!new->arch.rmap)
 			return -ENOMEM;
@@ -4848,20 +4845,17 @@ static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
 }
 
 static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
 				struct kvm_memory_slot *old,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
-	unsigned long npages = mem->memory_size >> PAGE_SHIFT;
-
 	/*
-	 * If we are making a new memslot, it might make
+	 * If we are creating or modifying a memslot, it might make
 	 * some address that was previously cached as emulated
 	 * MMIO be no longer emulated MMIO, so invalidate
 	 * all the caches of emulated MMIO translations.
 	 */
-	if (npages)
+	if (change != KVM_MR_DELETE)
 		atomic64_inc(&kvm->arch.mmio_update);
 
 	/*
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 58d3ae4605c0..ca3bfba94fe4 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1899,7 +1899,6 @@ static void kvmppc_core_flush_memslot_pr(struct kvm *kvm,
 }
 
 static int kvmppc_core_prepare_memory_region_pr(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
 				struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
@@ -1908,7 +1907,6 @@ static int kvmppc_core_prepare_memory_region_pr(struct kvm *kvm,
 }
 
 static void kvmppc_core_commit_memory_region_pr(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
 				struct kvm_memory_slot *old,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index fcf9c1dbd442..25dcf079c713 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1807,7 +1807,6 @@ void kvmppc_core_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 }
 
 int kvmppc_core_prepare_memory_region(struct kvm *kvm,
-				      const struct kvm_userspace_memory_region *mem,
 				      const struct kvm_memory_slot *old,
 				      struct kvm_memory_slot *new,
 				      enum kvm_mr_change change)
@@ -1816,7 +1815,6 @@ int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 }
 
 void kvmppc_core_commit_memory_region(struct kvm *kvm,
-				      const struct kvm_userspace_memory_region *mem,
 				      struct kvm_memory_slot *old,
 				      const struct kvm_memory_slot *new,
 				      enum kvm_mr_change change)
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index ca28e7acaae8..59342237e046 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -711,7 +711,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
-	return kvmppc_core_prepare_memory_region(kvm, mem, old, new, change);
+	return kvmppc_core_prepare_memory_region(kvm, old, new, change);
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
@@ -720,7 +720,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
-	kvmppc_core_commit_memory_region(kvm, mem, old, new, change);
+	kvmppc_core_commit_memory_region(kvm, old, new, change);
 }
 
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
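The mmio_update tweak leans on an invariant of the memslot ABI: a DELETE is
requested by passing memory_size == 0 and every other change carries a
non-zero size, so the old "if (npages)" and the new
"if (change != KVM_MR_DELETE)" agree for all four change types. A
stand-alone model of that equivalence (illustration only; the enum and the
sizes are stand-ins):

#include <assert.h>

enum mr_change { MR_CREATE, MR_DELETE, MR_MOVE, MR_FLAGS_ONLY };

/* DELETE is requested with memory_size == 0; all other changes are non-zero. */
static unsigned long memory_size_for(enum mr_change change)
{
	return change == MR_DELETE ? 0 : 1ul << 20;
}

int main(void)
{
	enum mr_change all[] = { MR_CREATE, MR_DELETE, MR_MOVE, MR_FLAGS_ONLY };

	for (int i = 0; i < 4; i++) {
		unsigned long npages = memory_size_for(all[i]) >> 12;
		int old_check = npages != 0;		/* pre-patch */
		int new_check = all[i] != MR_DELETE;	/* post-patch */

		assert(old_check == new_check);
	}
	return 0;
}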
From patchwork Thu Nov 4 00:25:12 2021
Subject: [PATCH v5.5 11/30] KVM: s390: Use "new" memslot instead of userspace memory region
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:12 +0000
Message-Id: <20211104002531.1176691-12-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>

Get the gfn, size, and hva from the new memslot instead of the userspace
memory region when preparing/committing memory region changes. This will
allow a future commit to drop the @mem param.

Note, this has a subtle functional change as KVM would previously reject
DELETE if userspace provided a garbage userspace_addr or guest_phys_addr,
whereas KVM zeros those fields in the "new" memslot when deleting an
existing memslot. Arguably the old behavior is more correct, but there's
zero benefit in requiring userspace to provide sane values for the hva
and gfn.

Signed-off-by: Sean Christopherson
---
If we want to keep the checks for DELETE, my vote would be to add an arch
hook that is dedicated to validating the userspace memory region so that
the prepare/commit hooks operate only on KVM-generated objects.

 arch/s390/kvm/kvm-s390.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index d766d764d24c..e69ad13612d9 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -5021,18 +5021,20 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
+	gpa_t size = new->npages * PAGE_SIZE;
+
 	/* A few sanity checks. We can have memory slots which have to be
 	   located/ended at a segment boundary (1MB). The memory in userland is
 	   ok to be fragmented into various different vmas. It is okay to mmap()
 	   and munmap() stuff in this slot after doing this call at any time */
 
-	if (mem->userspace_addr & 0xffffful)
+	if (new->userspace_addr & 0xffffful)
 		return -EINVAL;
 
-	if (mem->memory_size & 0xffffful)
+	if (size & 0xffffful)
 		return -EINVAL;
 
-	if (mem->guest_phys_addr + mem->memory_size > kvm->arch.mem_limit)
+	if ((new->base_gfn * PAGE_SIZE) + size > kvm->arch.mem_limit)
 		return -EINVAL;
 
 	/* When we are protected, we should not change the memory slots */
@@ -5061,8 +5063,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 			break;
 		fallthrough;
 	case KVM_MR_CREATE:
-		rc = gmap_map_segment(kvm->arch.gmap, mem->userspace_addr,
-				      mem->guest_phys_addr, mem->memory_size);
+		rc = gmap_map_segment(kvm->arch.gmap, new->userspace_addr,
+				      new->base_gfn * PAGE_SIZE,
+				      new->npages * PAGE_SIZE);
 		break;
 	case KVM_MR_FLAGS_ONLY:
 		break;
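The memslot-derived values feed the same 1 MiB segment checks as before,
since base_gfn and npages are just the page-shifted gpa and size. A
stand-alone model of the checks (illustration only; a 4 KiB PAGE_SIZE and
the 0xfffff segment mask from the diff are assumed):

#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096ull
#define SEG_MASK  0xffffful	/* s390 segment boundary: 1 MiB */

struct memslot { uint64_t base_gfn, npages, userspace_addr; };

static int prepare_ok(const struct memslot *new, uint64_t mem_limit)
{
	uint64_t size = new->npages * PAGE_SIZE;

	if (new->userspace_addr & SEG_MASK)	/* hva must be 1 MiB aligned */
		return 0;
	if (size & SEG_MASK)			/* size must be a 1 MiB multiple */
		return 0;
	if (new->base_gfn * PAGE_SIZE + size > mem_limit)
		return 0;
	return 1;
}

int main(void)
{
	struct memslot slot = { .base_gfn = 0, .npages = 256,	/* 1 MiB */
				.userspace_addr = 0x100000 };

	assert(prepare_ok(&slot, 1ull << 40));
	slot.userspace_addr += PAGE_SIZE;	/* break the 1 MiB alignment */
	assert(!prepare_ok(&slot, 1ull << 40));
	return 0;
}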
From patchwork Thu Nov 4 00:25:13 2021
Subject: [PATCH v5.5 12/30] KVM: x86: Use "new" memslot instead of userspace memory region
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:13 +0000
Message-Id: <20211104002531.1176691-13-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>

Get the number of pages directly from the new memslot instead of
computing the same from the userspace memory region when allocating
memslot metadata. This will allow a future patch to drop @mem.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Maciej S. Szmigiero
---
 arch/x86/kvm/x86.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index aa2abca47af0..c68e7de9f116 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11646,9 +11646,9 @@ int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages)
 }
 
 static int kvm_alloc_memslot_metadata(struct kvm *kvm,
-				      struct kvm_memory_slot *slot,
-				      unsigned long npages)
+				      struct kvm_memory_slot *slot)
 {
+	unsigned long npages = slot->npages;
 	int i, r;
 
 	/*
@@ -11733,8 +11733,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   enum kvm_mr_change change)
 {
 	if (change == KVM_MR_CREATE || change == KVM_MR_MOVE)
-		return kvm_alloc_memslot_metadata(kvm, new,
-						  mem->memory_size >> PAGE_SHIFT);
+		return kvm_alloc_memslot_metadata(kvm, new);
 
 	if (change == KVM_MR_FLAGS_ONLY)
 		memcpy(&new->arch, &old->arch, sizeof(old->arch));
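As context for why the metadata wants the full slot rather than a raw page
count: on x86 the hugepage tracking arrays scale not just with npages but
with how base_gfn lines up against hugepage boundaries (which is also why
MOVE reallocates, per patch 07 above). A stand-alone model of the sizing
effect (illustration only; a 2 MiB level, i.e. 512 gfns per hugepage, is
assumed):

#include <assert.h>

/* Entries needed at the 2 MiB level to cover [base_gfn, base_gfn + npages). */
static unsigned long lpage_entries(unsigned long base_gfn, unsigned long npages)
{
	unsigned long first = base_gfn / 512;
	unsigned long last = (base_gfn + npages - 1) / 512;

	return last - first + 1;
}

int main(void)
{
	/* Same npages, different gfn alignment => different metadata size. */
	assert(lpage_entries(0, 512) == 1);	/* hugepage aligned */
	assert(lpage_entries(256, 512) == 2);	/* straddles a boundary */
	return 0;
}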
From patchwork Thu Nov 4 00:25:14 2021
Subject: [PATCH v5.5 13/30] KVM: RISC-V: Use "new" memslot instead of userspace memory region
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:14 +0000
Message-Id: <20211104002531.1176691-14-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>

Get the slot ID, hva, etc... from the "new" memslot instead of the
userspace memory region when preparing/committing a memory region. This
will allow a future commit to drop @mem from the prepare/commit hooks
once all architectures convert to using "new".

Opportunistically wait to get the various "new" values until after
filtering out the DELETE case in anticipation of a future commit passing
NULL for @new when deleting a memslot.
Signed-off-by: Sean Christopherson
---
 arch/riscv/kvm/mmu.c | 34 +++++++++++++++++++---------------
 1 file changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 3a00c2df7640..db5230ec6951 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -466,18 +466,19 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
          * allocated dirty_bitmap[], dirty pages will be tracked while
          * the memory slot is write protected.
          */
-        if (change != KVM_MR_DELETE && mem->flags & KVM_MEM_LOG_DIRTY_PAGES)
-                stage2_wp_memory_region(kvm, mem->slot);
+        if (change != KVM_MR_DELETE && new->flags & KVM_MEM_LOG_DIRTY_PAGES)
+                stage2_wp_memory_region(kvm, new->id);
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-                                   struct kvm_memory_slot *memslot,
-                                   const struct kvm_userspace_memory_region *mem,
-                                   enum kvm_mr_change change)
+                                   const struct kvm_userspace_memory_region *mem,
+                                   const struct kvm_memory_slot *old,
+                                   struct kvm_memory_slot *new,
+                                   enum kvm_mr_change change)
 {
-        hva_t hva = mem->userspace_addr;
-        hva_t reg_end = hva + mem->memory_size;
-        bool writable = !(mem->flags & KVM_MEM_READONLY);
+        hva_t hva, reg_end, size;
+        gpa_t base_gpa;
+        bool writable;
         int ret = 0;
 
         if (change != KVM_MR_CREATE && change != KVM_MR_MOVE &&
@@ -488,10 +489,15 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
          * Prevent userspace from creating a memory region outside of the GPA
          * space addressable by the KVM guest GPA space.
          */
-        if ((memslot->base_gfn + memslot->npages) >=
-            (stage2_gpa_size >> PAGE_SHIFT))
+        if ((new->base_gfn + new->npages) >= (stage2_gpa_size >> PAGE_SHIFT))
                 return -EFAULT;
 
+        hva = new->userspace_addr;
+        size = new->npages << PAGE_SHIFT;
+        reg_end = hva + size;
+        base_gpa = new->base_gfn << PAGE_SHIFT;
+        writable = !(new->flags & KVM_MEM_READONLY);
+
         mmap_read_lock(current->mm);
 
         /*
@@ -527,15 +533,14 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
                 vm_end = min(reg_end, vma->vm_end);
 
                 if (vma->vm_flags & VM_PFNMAP) {
-                        gpa_t gpa = mem->guest_phys_addr +
-                                    (vm_start - mem->userspace_addr);
+                        gpa_t gpa = base_gpa + (vm_start - hva);
                         phys_addr_t pa;
 
                         pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
                         pa += vm_start - vma->vm_start;
 
                         /* IO region dirty page logging not allowed */
-                        if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES) {
+                        if (new->flags & KVM_MEM_LOG_DIRTY_PAGES) {
                                 ret = -EINVAL;
                                 goto out;
                         }
@@ -553,8 +558,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 
         spin_lock(&kvm->mmu_lock);
         if (ret)
-                stage2_unmap_range(kvm, mem->guest_phys_addr,
-                                   mem->memory_size, false);
+                stage2_unmap_range(kvm, base_gpa, size, false);
         spin_unlock(&kvm->mmu_lock);
 
 out:
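For orientation, the values the RISC-V hook now derives locally relate to the old userspace-region fields as shown below; a standalone sketch assuming 4 KiB pages, with a simplified stand-in struct rather than the kernel's:

#include <stdio.h>

#define PAGE_SHIFT 12 /* assumption: 4 KiB pages */

/* Simplified stand-in for struct kvm_memory_slot. */
struct slot { unsigned long base_gfn, npages, userspace_addr, flags; };

int main(void)
{
        struct slot new = { .base_gfn = 0x100, .npages = 256,
                            .userspace_addr = 0x7f0000000000ul, .flags = 0 };

        unsigned long hva      = new.userspace_addr;         /* was mem->userspace_addr  */
        unsigned long size     = new.npages << PAGE_SHIFT;   /* was mem->memory_size     */
        unsigned long reg_end  = hva + size;
        unsigned long base_gpa = new.base_gfn << PAGE_SHIFT; /* was mem->guest_phys_addr */

        printf("hva=%#lx size=%#lx reg_end=%#lx base_gpa=%#lx\n",
               hva, size, reg_end, base_gpa);
        return 0;
}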
From patchwork Thu Nov 4 00:25:15 2021
Subject: [PATCH v5.5 14/30] KVM: Stop passing kvm_userspace_memory_region to arch memslot hooks
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:15 +0000
Message-Id: <20211104002531.1176691-15-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>
Drop the @mem param from kvm_arch_{prepare,commit}_memory_region() now
that its use has been removed in all architectures.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Maciej S. Szmigiero
---
 arch/arm64/kvm/mmu.c       | 2 --
 arch/mips/kvm/mips.c       | 2 --
 arch/powerpc/kvm/powerpc.c | 2 --
 arch/riscv/kvm/mmu.c       | 2 --
 arch/s390/kvm/kvm-s390.c   | 2 --
 arch/x86/kvm/x86.c         | 2 --
 include/linux/kvm_host.h   | 2 --
 virt/kvm/kvm_main.c        | 9 ++++-----
 8 files changed, 4 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 21213cba7c47..a76718388cbd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1463,7 +1463,6 @@ int kvm_mmu_init(u32 *hyp_va_bits)
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-                                   const struct kvm_userspace_memory_region *mem,
                                    struct kvm_memory_slot *old,
                                    const struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
@@ -1486,7 +1485,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-                                   const struct kvm_userspace_memory_region *mem,
                                    const struct kvm_memory_slot *old,
                                    struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index b7aa8fa4a5fb..47b7dc149032 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -233,7 +233,6 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-                                   const struct kvm_userspace_memory_region *mem,
                                    const struct kvm_memory_slot *old,
                                    struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
@@ -242,7 +241,6 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-                                   const struct kvm_userspace_memory_region *mem,
                                    struct kvm_memory_slot *old,
                                    const struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 59342237e046..52ab1782b257 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -706,7 +706,6 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-                                   const struct kvm_userspace_memory_region *mem,
                                    const struct kvm_memory_slot *old,
                                    struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
@@ -715,7 +714,6 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-                                   const struct kvm_userspace_memory_region *mem,
                                    struct kvm_memory_slot *old,
                                    const struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index db5230ec6951..0732867d398c 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -456,7 +456,6 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-                                   const struct kvm_userspace_memory_region *mem,
                                    struct kvm_memory_slot *old,
                                    const struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
@@ -471,7 +470,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-                                   const struct kvm_userspace_memory_region *mem,
                                    const struct kvm_memory_slot *old,
                                    struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index e69ad13612d9..81f90891db0f 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -5016,7 +5016,6 @@ vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
 
 /* Section: memory related */
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-                                   const struct kvm_userspace_memory_region *mem,
                                    const struct kvm_memory_slot *old,
                                    struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
@@ -5044,7 +5043,6 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-                                   const struct kvm_userspace_memory_region *mem,
                                    struct kvm_memory_slot *old,
                                    const struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c68e7de9f116..80e726f73dd7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11727,7 +11727,6 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-                                   const struct kvm_userspace_memory_region *mem,
                                    const struct kvm_memory_slot *old,
                                    struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
@@ -11831,7 +11830,6 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-                                   const struct kvm_userspace_memory_region *mem,
                                    struct kvm_memory_slot *old,
                                    const struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f8e79cf7584f..2ef946e94a73 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -826,12 +826,10 @@ int __kvm_set_memory_region(struct kvm *kvm,
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot);
 void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen);
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-                                const struct kvm_userspace_memory_region *mem,
                                 const struct kvm_memory_slot *old,
                                 struct kvm_memory_slot *new,
                                 enum kvm_mr_change change);
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-                                const struct kvm_userspace_memory_region *mem,
                                 struct kvm_memory_slot *old,
                                 const struct kvm_memory_slot *new,
                                 enum kvm_mr_change change);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 389243120435..9c75691b98ba 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1535,7 +1535,6 @@ static void kvm_copy_memslots_arch(struct kvm_memslots *to,
 }
 
 static int kvm_set_memslot(struct kvm *kvm,
-                           const struct kvm_userspace_memory_region *mem,
                            struct kvm_memory_slot *new,
                            enum kvm_mr_change change)
 {
@@ -1621,7 +1620,7 @@ static int kvm_set_memslot(struct kvm *kvm,
                 old.as_id = new->as_id;
         }
 
-        r = kvm_arch_prepare_memory_region(kvm, mem, &old, new, change);
+        r = kvm_arch_prepare_memory_region(kvm, &old, new, change);
         if (r)
                 goto out_slots;
 
@@ -1637,7 +1636,7 @@ static int kvm_set_memslot(struct kvm *kvm,
         else if (change == KVM_MR_CREATE)
                 kvm->nr_memslot_pages += new->npages;
 
-        kvm_arch_commit_memory_region(kvm, mem, &old, new, change);
+        kvm_arch_commit_memory_region(kvm, &old, new, change);
 
         /* Free the old memslot's metadata.  Note, this is the full copy!!! */
         if (change == KVM_MR_DELETE)
@@ -1722,7 +1721,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
                 new.id = id;
                 new.as_id = as_id;
 
-                return kvm_set_memslot(kvm, mem, &new, KVM_MR_DELETE);
+                return kvm_set_memslot(kvm, &new, KVM_MR_DELETE);
         }
 
         new.as_id = as_id;
@@ -1785,7 +1784,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
                 bitmap_set(new.dirty_bitmap, 0, new.npages);
         }
 
-        r = kvm_set_memslot(kvm, mem, &new, change);
+        r = kvm_set_memslot(kvm, &new, change);
         if (r)
                 goto out_bitmap;
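After this patch the arch hooks see only the VM, the old/new slots, and the change type. A standalone sketch of the resulting hook shape, with stand-in types so it compiles outside the kernel (the authoritative signatures are the kvm_host.h hunk above); the no-op bodies mirror the MIPS stubs:

struct kvm;
struct kvm_memory_slot;
enum kvm_mr_change { KVM_MR_CREATE, KVM_MR_DELETE, KVM_MR_MOVE, KVM_MR_FLAGS_ONLY };

static int kvm_arch_prepare_memory_region(struct kvm *kvm,
                                           const struct kvm_memory_slot *old,
                                           struct kvm_memory_slot *new,
                                           enum kvm_mr_change change)
{
        return 0; /* no arch-specific preparation needed, as in the MIPS stub */
}

static void kvm_arch_commit_memory_region(struct kvm *kvm,
                                           struct kvm_memory_slot *old,
                                           const struct kvm_memory_slot *new,
                                           enum kvm_mr_change change)
{
}

int main(void)
{
        kvm_arch_commit_memory_region(0, 0, 0, KVM_MR_CREATE);
        return kvm_arch_prepare_memory_region(0, 0, 0, KVM_MR_CREATE);
}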
From patchwork Thu Nov 4 00:25:16 2021
Subject: [PATCH v5.5 15/30] KVM: Use prepare/commit hooks to handle generic memslot metadata updates
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:16 +0000
Message-Id: <20211104002531.1176691-16-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>

Handle the generic memslot metadata, a.k.a. dirty bitmap, updates at the
same time that arch handles its own metadata updates, i.e. at memslot
prepare and commit.  This will simplify converting @new to a dynamically
allocated object, and more closely aligns common KVM with architecture
code.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Maciej S. Szmigiero
---
 virt/kvm/kvm_main.c | 109 +++++++++++++++++++++++++++-----------------
 1 file changed, 66 insertions(+), 43 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 9c75691b98ba..6c7bbc452dae 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1534,6 +1534,69 @@ static void kvm_copy_memslots_arch(struct kvm_memslots *to,
                 to->memslots[i].arch = from->memslots[i].arch;
 }
 
+static int kvm_prepare_memory_region(struct kvm *kvm,
+                                     const struct kvm_memory_slot *old,
+                                     struct kvm_memory_slot *new,
+                                     enum kvm_mr_change change)
+{
+        int r;
+
+        /*
+         * If dirty logging is disabled, nullify the bitmap; the old bitmap
+         * will be freed on "commit".  If logging is enabled in both old and
+         * new, reuse the existing bitmap.  If logging is enabled only in the
+         * new and KVM isn't using a ring buffer, allocate and initialize a
+         * new bitmap.
+         */
+        if (!(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
+                new->dirty_bitmap = NULL;
+        else if (old->dirty_bitmap)
+                new->dirty_bitmap = old->dirty_bitmap;
+        else if (!kvm->dirty_ring_size) {
+                r = kvm_alloc_dirty_bitmap(new);
+                if (r)
+                        return r;
+
+                if (kvm_dirty_log_manual_protect_and_init_set(kvm))
+                        bitmap_set(new->dirty_bitmap, 0, new->npages);
+        }
+
+        r = kvm_arch_prepare_memory_region(kvm, old, new, change);
+
+        /* Free the bitmap on failure if it was allocated above. */
+        if (r && new->dirty_bitmap && !old->dirty_bitmap)
+                kvm_destroy_dirty_bitmap(new);
+
+        return r;
+}
+
+static void kvm_commit_memory_region(struct kvm *kvm,
+                                     struct kvm_memory_slot *old,
+                                     const struct kvm_memory_slot *new,
+                                     enum kvm_mr_change change)
+{
+        /*
+         * Update the total number of memslot pages before calling the arch
+         * hook so that architectures can consume the result directly.
+         */
+        if (change == KVM_MR_DELETE)
+                kvm->nr_memslot_pages -= old->npages;
+        else if (change == KVM_MR_CREATE)
+                kvm->nr_memslot_pages += new->npages;
+
+        kvm_arch_commit_memory_region(kvm, old, new, change);
+
+        /*
+         * Free the old memslot's metadata.  On DELETE, free the whole thing,
+         * otherwise free the dirty bitmap as needed (the below effectively
+         * checks both the flags and whether a ring buffer is being used).
+         */
+        if (change == KVM_MR_DELETE)
+                kvm_free_memslot(kvm, old);
+        else if (old->dirty_bitmap && !new->dirty_bitmap)
+                kvm_destroy_dirty_bitmap(old);
+}
+
 static int kvm_set_memslot(struct kvm *kvm,
                            struct kvm_memory_slot *new,
                            enum kvm_mr_change change)
@@ -1620,27 +1683,14 @@ static int kvm_set_memslot(struct kvm *kvm,
                 old.as_id = new->as_id;
         }
 
-        r = kvm_arch_prepare_memory_region(kvm, &old, new, change);
+        r = kvm_prepare_memory_region(kvm, &old, new, change);
         if (r)
                 goto out_slots;
 
         update_memslots(slots, new, change);
         slots = install_new_memslots(kvm, new->as_id, slots);
 
-        /*
-         * Update the total number of memslot pages before calling the arch
-         * hook so that architectures can consume the result directly.
-         */
-        if (change == KVM_MR_DELETE)
-                kvm->nr_memslot_pages -= old.npages;
-        else if (change == KVM_MR_CREATE)
-                kvm->nr_memslot_pages += new->npages;
-
-        kvm_arch_commit_memory_region(kvm, &old, new, change);
-
-        /* Free the old memslot's metadata.  Note, this is the full copy!!! */
-        if (change == KVM_MR_DELETE)
-                kvm_free_memslot(kvm, &old);
+        kvm_commit_memory_region(kvm, &old, new, change);
 
         kvfree(slots);
         return 0;
@@ -1736,7 +1786,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
 
         if (!old.npages) {
                 change = KVM_MR_CREATE;
-                new.dirty_bitmap = NULL;
 
                 /*
                  * To simplify KVM internals, the total number of pages across
@@ -1756,9 +1805,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
                         change = KVM_MR_FLAGS_ONLY;
                 else /* Nothing to change. */
                         return 0;
-
-                /* Copy dirty_bitmap from the current memslot. */
-                new.dirty_bitmap = old.dirty_bitmap;
         }
 
         if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
@@ -1772,30 +1818,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
                 }
         }
 
-        /* Allocate/free page dirty bitmap as needed */
-        if (!(new.flags & KVM_MEM_LOG_DIRTY_PAGES))
-                new.dirty_bitmap = NULL;
-        else if (!new.dirty_bitmap && !kvm->dirty_ring_size) {
-                r = kvm_alloc_dirty_bitmap(&new);
-                if (r)
-                        return r;
-
-                if (kvm_dirty_log_manual_protect_and_init_set(kvm))
-                        bitmap_set(new.dirty_bitmap, 0, new.npages);
-        }
-
-        r = kvm_set_memslot(kvm, &new, change);
-        if (r)
-                goto out_bitmap;
-
-        if (old.dirty_bitmap && !new.dirty_bitmap)
-                kvm_destroy_dirty_bitmap(&old);
-        return 0;
-
-out_bitmap:
-        if (new.dirty_bitmap && !old.dirty_bitmap)
-                kvm_destroy_dirty_bitmap(&new);
-        return r;
+        return kvm_set_memslot(kvm, &new, change);
 }
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);
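The metadata being shuffled here is the bitmap that backs the dirty-log UAPI: userspace turns it on per slot with KVM_MEM_LOG_DIRTY_PAGES and reads it back with KVM_GET_DIRTY_LOG. A minimal userspace sketch of that lifecycle (error handling elided; vm_fd, guest_mem, and the 1 MiB geometry are assumptions of the sketch, not part of the patch):

#include <sys/ioctl.h>
#include <linux/kvm.h>

void enable_and_fetch_dirty_log(int vm_fd, void *guest_mem)
{
        struct kvm_userspace_memory_region region = {
                .slot = 0,
                .flags = KVM_MEM_LOG_DIRTY_PAGES,
                .guest_phys_addr = 0,
                .memory_size = 1ull << 20,                 /* 1 MiB -> 256 pages */
                .userspace_addr = (unsigned long)guest_mem,
        };
        unsigned long bitmap[256 / (8 * sizeof(unsigned long))] = { 0 };
        struct kvm_dirty_log log = { .slot = 0, .dirty_bitmap = bitmap };

        /* Bitmap is allocated by KVM at prepare, as in the hunk above. */
        ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
        /* One bit per guest page is copied out here. */
        ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}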
From patchwork Thu Nov 4 00:25:17 2021
Subject: [PATCH v5.5 16/30] KVM: x86: Don't assume old/new memslots are non-NULL at memslot commit
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:17 +0000
Message-Id: <20211104002531.1176691-17-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>
Szmigiero" X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20211103_172627_899335_B5999CD8 X-CRM114-Status: GOOD ( 11.23 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-To: Sean Christopherson Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Play nice with a NULL @old or @new when handling memslot updates so that common KVM can pass NULL for one or the other in CREATE and DELETE cases instead of having to synthesize a dummy memslot. No functional change intended. Signed-off-by: Sean Christopherson Reviewed-by: Maciej S. Szmigiero --- arch/x86/kvm/x86.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 80e726f73dd7..80183f7eadeb 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -11762,13 +11762,15 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm, const struct kvm_memory_slot *new, enum kvm_mr_change change) { - bool log_dirty_pages = new->flags & KVM_MEM_LOG_DIRTY_PAGES; + u32 old_flags = old ? old->flags : 0; + u32 new_flags = new ? new->flags : 0; + bool log_dirty_pages = new_flags & KVM_MEM_LOG_DIRTY_PAGES; /* * Update CPU dirty logging if dirty logging is being toggled. This * applies to all operations. */ - if ((old->flags ^ new->flags) & KVM_MEM_LOG_DIRTY_PAGES) + if ((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES) kvm_mmu_update_cpu_dirty_logging(kvm, log_dirty_pages); /* @@ -11786,7 +11788,7 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm, * MOVE/DELETE: The old mappings will already have been cleaned up by * kvm_arch_flush_shadow_memslot(). */ - if ((change != KVM_MR_FLAGS_ONLY) || (new->flags & KVM_MEM_READONLY)) + if ((change != KVM_MR_FLAGS_ONLY) || (new_flags & KVM_MEM_READONLY)) return; /* @@ -11794,7 +11796,7 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm, * other flag is LOG_DIRTY_PAGES, i.e. something is wrong if dirty * logging isn't being toggled on or off. 
          */
-        if (WARN_ON_ONCE(!((old->flags ^ new->flags) & KVM_MEM_LOG_DIRTY_PAGES)))
+        if (WARN_ON_ONCE(!((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES)))
                 return;
 
         if (!log_dirty_pages) {
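The pattern this patch introduces (treat a missing slot as "no flags set" so one XOR covers CREATE and DELETE too) can be exercised standalone; a sketch with a stand-in struct, where KVM_MEM_LOG_DIRTY_PAGES matches the UAPI value:

#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define KVM_MEM_LOG_DIRTY_PAGES (1u << 0) /* matches the KVM UAPI value */

struct slot { uint32_t flags; }; /* stand-in for struct kvm_memory_slot */

static int dirty_logging_toggled(const struct slot *old, const struct slot *new)
{
        uint32_t old_flags = old ? old->flags : 0;
        uint32_t new_flags = new ? new->flags : 0;

        return (old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES;
}

int main(void)
{
        struct slot logging = { .flags = KVM_MEM_LOG_DIRTY_PAGES };

        assert(dirty_logging_toggled(NULL, &logging)); /* CREATE with logging on  */
        assert(dirty_logging_toggled(&logging, NULL)); /* DELETE of a logged slot */
        assert(!dirty_logging_toggled(NULL, NULL));
        return 0;
}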
From patchwork Thu Nov 4 00:25:18 2021
Subject: [PATCH v5.5 17/30] KVM: s390: Skip gfn/size sanity checks on memslot DELETE or FLAGS_ONLY
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:18 +0000
Message-Id: <20211104002531.1176691-18-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>

Sanity check the hva, gfn, and size of a userspace memory region only if
any of those properties can change, i.e. skip the checks for DELETE and
FLAGS_ONLY.  KVM doesn't allow moving the hva or changing the size, a gfn
change shows up as a MOVE even if flags are being modified, and the
checks are pointless for the DELETE case as userspace_addr and gfn_base
are zeroed by common KVM.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/s390/kvm/kvm-s390.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 81f90891db0f..c4d0ed5f3400 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -5020,7 +5020,14 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
                                    struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
 {
-        gpa_t size = new->npages * PAGE_SIZE;
+        gpa_t size;
+
+        /* When we are protected, we should not change the memory slots */
+        if (kvm_s390_pv_get_handle(kvm))
+                return -EINVAL;
+
+        if (change == KVM_MR_DELETE || change == KVM_MR_FLAGS_ONLY)
+                return 0;
 
         /* A few sanity checks. We can have memory slots which have to be
            located/ended at a segment boundary (1MB). The memory in userland is
The memory in userland is @@ -5030,15 +5037,13 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, if (new->userspace_addr & 0xffffful) return -EINVAL; + size = new->npages * PAGE_SIZE; if (size & 0xffffful) return -EINVAL; if ((new->base_gfn * PAGE_SIZE) + size > kvm->arch.mem_limit) return -EINVAL; - /* When we are protected, we should not change the memory slots */ - if (kvm_s390_pv_get_handle(kvm)) - return -EINVAL; return 0; } From patchwork Thu Nov 4 00:25:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12602227 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A2D4DC433F5 for ; Thu, 4 Nov 2021 00:43:24 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 648D760231 for ; Thu, 4 Nov 2021 00:43:24 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 648D760231 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:Reply-To:List-Subscribe:List-Help: List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References :Mime-Version:Message-Id:In-Reply-To:Date:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=i122cHey9WtC4wrTnoLOlg03SP84lAzyJnjCB+TzwGY=; b=RxgG4j/yE/bvr8 xVknesDa1p/3F5qfzRP5YoZ+SZNYmJIVB7G+rlgCjidKcLll9SOOm0xg6F+4tgwEwHmQDq0aJ+sfH 3VN8zBwxU2Y/KuXxQeGKpAT3mep8LMFP+AuQCYMqM2b7Dm/kM1E4B1w6ffhY09WGm9ZiDKMa2n1kN tUdJW+IoIH80T2xrdMNKTKq+jeMC6eOOh24NeGtm2LbLnIi6tnpcDRK8nIkgOsGa1XN24jpp7tcQP eUCIHagcD4tyjRJoYXKoZc3Qknz36LqYPWcPwoHI4tdFeVAJP+LNiuCOqZqTiiKcJJtN7T510nwYm i17qEKR9FkOun/DOCrSQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1miQqZ-007K9k-9W; Thu, 04 Nov 2021 00:43:15 +0000 Received: from mail-pg1-x54a.google.com ([2607:f8b0:4864:20::54a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1miQaM-007CpS-SE for linux-riscv@lists.infradead.org; Thu, 04 Nov 2021 00:26:33 +0000 Received: by mail-pg1-x54a.google.com with SMTP id g18-20020a631112000000b00299f5f53824so2417184pgl.2 for ; Wed, 03 Nov 2021 17:26:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=gefeXeMu+9xCn+OiiZbsmvywIWNyuYY1aczHxfS7F84=; b=q1wAaFzQivbUaLWNpD28vIW6ok97J24IWOZtjc0+hsnIoFnhMyUWsnSfQKoqN3maJ3 KBdBMOyYnFaTKZvkOTgijuDb8a1y6dTYyIvATTjrk50gQdXgdq0sK8P9D8VXqeBwIchM S4752LKzCwKCElggFGcp/0gCfncZSwTidehk3QlNJHjH4YdWFX4NP2+qFH0XmXIojpE7 shl+78ytzLoTN1+DNy2u1aXGBjCw+SMXFRKxDeT2bHggWBTmy7o/SubcktQSj4/PZQou hwUcODvMGvNUFZIWK9EWYU9o0oSTlOaqrHSlqsc6GHFHSECGSUQyaSoW+8/pBlFiy+YJ LVDw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:reply-to:date:in-reply-to:message-id 
From patchwork Thu Nov 4 00:25:19 2021
Subject: [PATCH v5.5 18/30] KVM: Don't make a full copy of the old memslot in __kvm_set_memory_region()
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:19 +0000
Message-Id: <20211104002531.1176691-19-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>

Stop making a full copy of the old memslot in __kvm_set_memory_region()
now that metadata updates are handled by kvm_set_memslot(), i.e. now that
the old memslot's dirty bitmap doesn't need to be referenced after the
memslot and its pointer is modified/invalidated by kvm_set_memslot().

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Maciej S. Szmigiero
---
 virt/kvm/kvm_main.c | 35 +++++++++++++----------------------
 1 file changed, 13 insertions(+), 22 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6c7bbc452dae..bbaa01afac43 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1715,8 +1715,8 @@ static int kvm_set_memslot(struct kvm *kvm,
 int __kvm_set_memory_region(struct kvm *kvm,
                             const struct kvm_userspace_memory_region *mem)
 {
-        struct kvm_memory_slot old, new;
-        struct kvm_memory_slot *tmp;
+        struct kvm_memory_slot *old, *tmp;
+        struct kvm_memory_slot new;
         enum kvm_mr_change change;
         int as_id, id;
         int r;
@@ -1746,25 +1746,16 @@ int __kvm_set_memory_region(struct kvm *kvm,
                 return -EINVAL;
 
         /*
-         * Make a full copy of the old memslot, the pointer will become stale
-         * when the memslots are re-sorted by update_memslots(), and the old
-         * memslot needs to be referenced after calling update_memslots(), e.g.
-         * to free its resources and for arch specific behavior.
+         * Note, the old memslot (and the pointer itself!) may be invalidated
+         * and/or destroyed by kvm_set_memslot().
          */
-        tmp = id_to_memslot(__kvm_memslots(kvm, as_id), id);
-        if (tmp) {
-                old = *tmp;
-                tmp = NULL;
-        } else {
-                memset(&old, 0, sizeof(old));
-                old.id = id;
-        }
+        old = id_to_memslot(__kvm_memslots(kvm, as_id), id);
 
         if (!mem->memory_size) {
-                if (!old.npages)
+                if (!old || !old->npages)
                         return -EINVAL;
 
-                if (WARN_ON_ONCE(kvm->nr_memslot_pages < old.npages))
+                if (WARN_ON_ONCE(kvm->nr_memslot_pages < old->npages))
                         return -EIO;
 
                 memset(&new, 0, sizeof(new));
@@ -1784,7 +1775,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
         if (new.npages > KVM_MEM_MAX_NR_PAGES)
                 return -EINVAL;
 
-        if (!old.npages) {
+        if (!old || !old->npages) {
                 change = KVM_MR_CREATE;
 
                 /*
@@ -1794,14 +1785,14 @@ int __kvm_set_memory_region(struct kvm *kvm,
                 if ((kvm->nr_memslot_pages + new.npages) < kvm->nr_memslot_pages)
                         return -EINVAL;
         } else { /* Modify an existing slot. */
-                if ((new.userspace_addr != old.userspace_addr) ||
-                    (new.npages != old.npages) ||
-                    ((new.flags ^ old.flags) & KVM_MEM_READONLY))
+                if ((new.userspace_addr != old->userspace_addr) ||
+                    (new.npages != old->npages) ||
+                    ((new.flags ^ old->flags) & KVM_MEM_READONLY))
                         return -EINVAL;
 
-                if (new.base_gfn != old.base_gfn)
+                if (new.base_gfn != old->base_gfn)
                         change = KVM_MR_MOVE;
-                else if (new.flags != old.flags)
+                else if (new.flags != old->flags)
                         change = KVM_MR_FLAGS_ONLY;
                 else /* Nothing to change. */
                         return 0;
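With @old now a pointer that may be NULL, the patch folds "slot absent" and "slot empty" into one test. A minimal standalone sketch of that guard, using a stand-in struct:

#include <assert.h>
#include <stddef.h>

struct slot { unsigned long npages; }; /* stand-in for struct kvm_memory_slot */

/* Replacement for the old "!old.npages" check once @old may be NULL. */
static int slot_is_empty(const struct slot *old)
{
        return !old || !old->npages;
}

int main(void)
{
        struct slot populated = { .npages = 16 };

        assert(slot_is_empty(NULL));
        assert(!slot_is_empty(&populated));
        return 0;
}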
From patchwork Thu Nov 4 00:25:20 2021
Subject: [PATCH v5.5 19/30] KVM: x86: Don't call kvm_mmu_change_mmu_pages() if the count hasn't changed
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:20 +0000
Message-Id: <20211104002531.1176691-20-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>
There is no point in calling kvm_mmu_change_mmu_pages() for memslot
operations that don't change the total page count, so do it just for
KVM_MR_CREATE and KVM_MR_DELETE.

Reviewed-by: Sean Christopherson
Signed-off-by: Maciej S. Szmigiero
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 80183f7eadeb..4b0cb7390902 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11836,7 +11836,8 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
                                    const struct kvm_memory_slot *new,
                                    enum kvm_mr_change change)
 {
-        if (!kvm->arch.n_requested_mmu_pages)
+        if (!kvm->arch.n_requested_mmu_pages &&
+            (change == KVM_MR_CREATE || change == KVM_MR_DELETE))
                 kvm_mmu_change_mmu_pages(kvm,
                                 kvm_mmu_calculate_default_mmu_pages(kvm));
From patchwork Thu Nov 4 00:25:21 2021
Date: Thu, 4 Nov 2021 00:25:21 +0000
Message-Id: <20211104002531.1176691-21-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>
Subject: [PATCH v5.5 20/30] KVM: x86: Use nr_memslot_pages to avoid traversing the memslots array
From: Sean Christopherson

From: Maciej S. Szmigiero

There is no point in recalculating from scratch the total number of pages
in all memslots each time a memslot is created or deleted.

Use KVM's cached nr_memslot_pages to compute the default max number of
MMU pages.

Signed-off-by: Maciej S. Szmigiero
[sean: use common KVM field and rework changelog accordingly]
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/mmu/mmu.c          | 24 ------------------------
 arch/x86/kvm/x86.c              | 11 ++++++++---
 3 files changed, 8 insertions(+), 28 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 88fce6ab4bbd..3fe155ece015 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1582,7 +1582,6 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   const struct kvm_memory_slot *memslot);
 void kvm_mmu_zap_all(struct kvm *kvm);
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
-unsigned long kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages);
 
 int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 354d2ca92df4..564781585fd2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6141,30 +6141,6 @@ int kvm_mmu_module_init(void)
 	return ret;
 }
 
-/*
- * Calculate mmu pages needed for kvm.
- */
-unsigned long kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm)
-{
-	unsigned long nr_mmu_pages;
-	unsigned long nr_pages = 0;
-	struct kvm_memslots *slots;
-	struct kvm_memory_slot *memslot;
-	int i;
-
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		slots = __kvm_memslots(kvm, i);
-
-		kvm_for_each_memslot(memslot, slots)
-			nr_pages += memslot->npages;
-	}
-
-	nr_mmu_pages = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;
-	nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
-
-	return nr_mmu_pages;
-}
-
 void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_unload(vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4b0cb7390902..9a0440e22ede 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11837,9 +11837,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   enum kvm_mr_change change)
 {
 	if (!kvm->arch.n_requested_mmu_pages &&
-	    (change == KVM_MR_CREATE || change == KVM_MR_DELETE))
-		kvm_mmu_change_mmu_pages(kvm,
-				kvm_mmu_calculate_default_mmu_pages(kvm));
+	    (change == KVM_MR_CREATE || change == KVM_MR_DELETE)) {
+		unsigned long nr_mmu_pages;
+
+		nr_mmu_pages = kvm->nr_memslot_pages * KVM_PERMILLE_MMU_PAGES;
+		nr_mmu_pages /= 1000;
+		nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
+		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
+	}
 
 	kvm_mmu_slot_apply_flags(kvm, old, new, change);
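For readers following along outside the tree, the now-inlined calculation
reduces to a few lines. A standalone sketch; the constants mirror x86's
KVM_PERMILLE_MMU_PAGES (20) and KVM_MIN_ALLOC_MMU_PAGES (64) as of this
series, but treat them as assumptions here:

	#include <stdio.h>

	#define PERMILLE_MMU_PAGES	20UL	/* 2% of guest memory */
	#define MIN_ALLOC_MMU_PAGES	64UL	/* floor for tiny guests */

	/* Default shadow-page budget, computed from the cached total of
	 * memslot pages rather than by walking every memslot. */
	static unsigned long default_mmu_pages(unsigned long nr_memslot_pages)
	{
		unsigned long nr = nr_memslot_pages * PERMILLE_MMU_PAGES / 1000;

		return nr > MIN_ALLOC_MMU_PAGES ? nr : MIN_ALLOC_MMU_PAGES;
	}

	int main(void)
	{
		/* A 1 GiB guest (262144 4 KiB pages) gets a budget of 5242. */
		printf("%lu\n", default_mmu_pages(262144));
		return 0;
	}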
From patchwork Thu Nov 4 00:25:22 2021
Date: Thu, 4 Nov 2021 00:25:22 +0000
Message-Id: <20211104002531.1176691-22-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>
Subject: [PATCH v5.5 21/30] KVM: Integrate gfn_to_memslot_approx() into search_memslots()
From: Sean Christopherson
From: Maciej S. Szmigiero

s390 arch has gfn_to_memslot_approx() which is almost identical to
search_memslots(), differing only in that in case the gfn falls in a
hole one of the memslots bordering the hole is returned.

Add this lookup mode as an option to search_memslots() so we don't have
two almost identical functions for looking up a memslot by its gfn.

Signed-off-by: Maciej S. Szmigiero
[sean: tweaked helper names to keep gfn_to_memslot_approx() in s390]
Signed-off-by: Sean Christopherson
---
 arch/s390/kvm/kvm-s390.c | 45 +++++++---------------------------------
 include/linux/kvm_host.h | 35 ++++++++++++++++++++++++-------
 virt/kvm/kvm_main.c      |  2 +-
 3 files changed, 36 insertions(+), 46 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index c4d0ed5f3400..4e032e176216 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -1941,41 +1941,6 @@ static long kvm_s390_set_skeys(struct kvm *kvm, struct kvm_s390_skeys *args)
 /* for consistency */
 #define KVM_S390_CMMA_SIZE_MAX ((u32)KVM_S390_SKEYS_MAX)
 
-/*
- * Similar to gfn_to_memslot, but returns the index of a memslot also when the
- * address falls in a hole. In that case the index of one of the memslots
- * bordering the hole is returned.
- */
-static int gfn_to_memslot_approx(struct kvm_memslots *slots, gfn_t gfn)
-{
-	int start = 0, end = slots->used_slots;
-	int slot = atomic_read(&slots->last_used_slot);
-	struct kvm_memory_slot *memslots = slots->memslots;
-
-	if (gfn >= memslots[slot].base_gfn &&
-	    gfn < memslots[slot].base_gfn + memslots[slot].npages)
-		return slot;
-
-	while (start < end) {
-		slot = start + (end - start) / 2;
-
-		if (gfn >= memslots[slot].base_gfn)
-			end = slot;
-		else
-			start = slot + 1;
-	}
-
-	if (start >= slots->used_slots)
-		return slots->used_slots - 1;
-
-	if (gfn >= memslots[start].base_gfn &&
-	    gfn < memslots[start].base_gfn + memslots[start].npages) {
-		atomic_set(&slots->last_used_slot, start);
-	}
-
-	return start;
-}
-
 static int kvm_s390_peek_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
 			      u8 *res, unsigned long bufsize)
 {
@@ -1999,11 +1964,17 @@ static int kvm_s390_peek_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
 	return 0;
 }
 
+static struct kvm_memory_slot *gfn_to_memslot_approx(struct kvm_memslots *slots,
+						     gfn_t gfn)
+{
+	return ____gfn_to_memslot(slots, gfn, true);
+}
+
 static unsigned long kvm_s390_next_dirty_cmma(struct kvm_memslots *slots,
 					      unsigned long cur_gfn)
 {
-	int slotidx = gfn_to_memslot_approx(slots, cur_gfn);
-	struct kvm_memory_slot *ms = slots->memslots + slotidx;
+	struct kvm_memory_slot *ms = gfn_to_memslot_approx(slots, cur_gfn);
+	int slotidx = ms - slots->memslots;
 	unsigned long ofs = cur_gfn - ms->base_gfn;
 
 	if (ms->base_gfn + ms->npages <= cur_gfn) {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 2ef946e94a73..9d46937a3a4e 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1230,10 +1230,14 @@ try_get_memslot(struct kvm_memslots *slots, int slot_index, gfn_t gfn)
  * Returns a pointer to the memslot that contains gfn and records the index of
  * the slot in index. Otherwise returns NULL.
  *
+ * With "approx" set returns the memslot also when the address falls
+ * in a hole. In that case one of the memslots bordering the hole is
+ * returned.
+ *
  * IMPORTANT: Slots are sorted from highest GFN to lowest GFN!
  */
 static inline struct kvm_memory_slot *
-search_memslots(struct kvm_memslots *slots, gfn_t gfn, int *index)
+search_memslots(struct kvm_memslots *slots, gfn_t gfn, int *index, bool approx)
 {
 	int start = 0, end = slots->used_slots;
 	struct kvm_memory_slot *memslots = slots->memslots;
@@ -1251,22 +1255,26 @@ search_memslots(struct kvm_memslots *slots, gfn_t gfn, int *index)
 		start = slot + 1;
 	}
 
+	if (approx && start >= slots->used_slots) {
+		*index = slots->used_slots - 1;
+		return &memslots[slots->used_slots - 1];
+	}
+
 	slot = try_get_memslot(slots, start, gfn);
 	if (slot) {
 		*index = start;
 		return slot;
 	}
+	if (approx) {
+		*index = start;
+		return &memslots[start];
+	}
 
 	return NULL;
 }
 
-/*
- * __gfn_to_memslot() and its descendants are here because it is called from
- * non-modular code in arch/powerpc/kvm/book3s_64_vio{,_hv}.c. gfn_to_memslot()
- * itself isn't here as an inline because that would bloat other code too much.
- */
 static inline struct kvm_memory_slot *
-__gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
+____gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn, bool approx)
 {
 	struct kvm_memory_slot *slot;
 	int slot_index = atomic_read(&slots->last_used_slot);
@@ -1275,7 +1283,7 @@ __gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
 	if (slot)
 		return slot;
 
-	slot = search_memslots(slots, gfn, &slot_index);
+	slot = search_memslots(slots, gfn, &slot_index, approx);
 	if (slot) {
 		atomic_set(&slots->last_used_slot, slot_index);
 		return slot;
@@ -1284,6 +1292,17 @@ __gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
 	return NULL;
 }
 
+/*
+ * __gfn_to_memslot() and its descendants are here to allow arch code to inline
+ * the lookups in hot paths. gfn_to_memslot() itself isn't here as an inline
+ * because that would bloat other code too much.
+ */
+static inline struct kvm_memory_slot *
+__gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
+{
+	return ____gfn_to_memslot(slots, gfn, false);
+}
+
 static inline unsigned long
 __gfn_to_hva_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index bbaa01afac43..a2d51ce957e1 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2126,7 +2126,7 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn
 	 * search_memslots() instead of __gfn_to_memslot() to avoid
 	 * thrashing the VM-wide last_used_index in kvm_memslots.
 	 */
-	slot = search_memslots(slots, gfn, &slot_index);
+	slot = search_memslots(slots, gfn, &slot_index, false);
 	if (slot) {
 		vcpu->last_used_slot = slot_index;
 		return slot;
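The semantics of the new approx mode can be modeled with an ordinary array:
slots are kept sorted by base GFN in descending order, an exact lookup
returns NULL when the GFN lands in a hole, and an approx lookup returns a
slot bordering the hole instead. A self-contained userspace sketch (struct
and field names are illustrative, not the kernel's):

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	struct slot { unsigned long base, npages; };

	/* Slots sorted by base descending, mirroring KVM's memslots array. */
	static struct slot *search(struct slot *s, int used, unsigned long gfn,
				   bool approx)
	{
		int start = 0, end = used;

		while (start < end) {
			int mid = start + (end - start) / 2;

			if (gfn >= s[mid].base)
				end = mid;	/* candidate; try lower index */
			else
				start = mid + 1;
		}
		if (start >= used)	/* gfn below every slot's base */
			return approx ? &s[used - 1] : NULL;
		/* Invariant: s[start].base <= gfn here. */
		if (gfn < s[start].base + s[start].npages)
			return &s[start];		/* exact hit */
		return approx ? &s[start] : NULL;	/* hole: border slot */
	}

	int main(void)
	{
		struct slot slots[] = { { 100, 10 }, { 0, 10 } }; /* hole [10,100) */

		printf("exact(50): %s, approx(50): slot with base %lu\n",
		       search(slots, 2, 50, false) ? "hit" : "NULL",
		       search(slots, 2, 50, true)->base);
		return 0;
	}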
From patchwork Thu Nov 4 00:25:23 2021
Date: Thu, 4 Nov 2021 00:25:23 +0000
Message-Id: <20211104002531.1176691-23-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>
Subject: [PATCH v5.5 22/30] KVM: Move WARN on invalid memslot index to update_memslots()
From: Sean Christopherson
From: Maciej S. Szmigiero

Since kvm_memslot_move_forward() can theoretically return a negative
memslot index even when kvm_memslot_move_backward() returned a positive
one (and so did not WARN) let's just move the warning to the common code.

Signed-off-by: Maciej S. Szmigiero
Reviewed-by: Claudio Imbrenda
Reviewed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a2d51ce957e1..d45d574a5a2d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1307,8 +1307,7 @@ static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
 	struct kvm_memory_slot *mslots = slots->memslots;
 	int i;
 
-	if (WARN_ON_ONCE(slots->id_to_index[memslot->id] == -1) ||
-	    WARN_ON_ONCE(!slots->used_slots))
+	if (slots->id_to_index[memslot->id] == -1 || !slots->used_slots)
 		return -1;
 
 	/*
@@ -1412,6 +1411,9 @@ static void update_memslots(struct kvm_memslots *slots,
 		i = kvm_memslot_move_backward(slots, memslot);
 		i = kvm_memslot_move_forward(slots, memslot, i);
 
+		if (WARN_ON_ONCE(i < 0))
+			return;
+
 		/*
 		 * Copy the memslot to its new position in memslots and update
 		 * its index accordingly.
From patchwork Thu Nov 4 00:25:24 2021
Date: Thu, 4 Nov 2021 00:25:24 +0000
Message-Id: <20211104002531.1176691-24-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>
Subject: [PATCH v5.5 23/30] KVM: Resolve memslot ID via a hash table instead of via a static array
From: Sean Christopherson
From: Maciej S. Szmigiero

Memslot ID to the corresponding memslot mappings are currently kept as
indices in static id_to_index array.

The size of this array depends on the maximum allowed memslot count
(regardless of the number of memslots actually in use).

This has become especially problematic recently, when memslot count cap
was removed, so the maximum count is now full 32k memslots - the maximum
allowed by the current KVM API.

Keeping these IDs in a hash table (instead of an array) avoids this
problem.

Resolving a memslot ID to the actual memslot (instead of its index) will
also enable transitioning away from an array-based implementation of the
whole memslots structure in a later commit.

Signed-off-by: Maciej S. Szmigiero
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 include/linux/kvm_host.h | 16 +++----
 virt/kvm/kvm_main.c      | 96 +++++++++++++++++++++++++++++++---------
 2 files changed, 84 insertions(+), 28 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9d46937a3a4e..81003e3acd53 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include <linux/hashtable.h>
 #include
 #include
 
@@ -425,6 +426,7 @@ static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
 #define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)
 
 struct kvm_memory_slot {
+	struct hlist_node id_node;
 	gfn_t base_gfn;
 	unsigned long npages;
 	unsigned long *dirty_bitmap;
@@ -527,7 +529,7 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
 struct kvm_memslots {
 	u64 generation;
 	/* The mapping table from slot id to the index in memslots[]. */
-	short id_to_index[KVM_MEM_SLOTS_NUM];
+	DECLARE_HASHTABLE(id_hash, 7);
 	atomic_t last_used_slot;
 	int used_slots;
 	struct kvm_memory_slot memslots[];
@@ -789,16 +791,14 @@ static inline struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu)
 static inline
 struct kvm_memory_slot *id_to_memslot(struct kvm_memslots *slots, int id)
 {
-	int index = slots->id_to_index[id];
 	struct kvm_memory_slot *slot;
 
-	if (index < 0)
-		return NULL;
+	hash_for_each_possible(slots->id_hash, slot, id_node, id) {
+		if (slot->id == id)
+			return slot;
+	}
 
-	slot = &slots->memslots[index];
-
-	WARN_ON(slot->id != id);
-	return slot;
+	return NULL;
 }
 
 /*
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d45d574a5a2d..13c497abaab8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -853,15 +853,13 @@ static void kvm_destroy_pm_notifier(struct kvm *kvm)
 
 static struct kvm_memslots *kvm_alloc_memslots(void)
 {
-	int i;
 	struct kvm_memslots *slots;
 
 	slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL_ACCOUNT);
 	if (!slots)
 		return NULL;
 
-	for (i = 0; i < KVM_MEM_SLOTS_NUM; i++)
-		slots->id_to_index[i] = -1;
+	hash_init(slots->id_hash);
 
 	return slots;
 }
@@ -1259,17 +1257,49 @@ static int kvm_alloc_dirty_bitmap(struct kvm_memory_slot *memslot)
 	return 0;
 }
 
+static void kvm_replace_memslot(struct kvm_memslots *slots,
+				struct kvm_memory_slot *old,
+				struct kvm_memory_slot *new)
+{
+	/*
+	 * Remove the old memslot from the hash list, copying the node data
+	 * would corrupt the list.
+	 */
+	if (old) {
+		hash_del(&old->id_node);
+
+		if (!new)
+			return;
+	}
+
+	/* Copy the source *data*, not the pointer, to the destination. */
+	if (old)
+		*new = *old;
+
+	/* (Re)Add the new memslot. */
+	hash_add(slots->id_hash, &new->id_node, new->id);
+}
+
+static void kvm_shift_memslot(struct kvm_memslots *slots, int dst, int src)
+{
+	struct kvm_memory_slot *mslots = slots->memslots;
+
+	kvm_replace_memslot(slots, &mslots[src], &mslots[dst]);
+}
+
 /*
  * Delete a memslot by decrementing the number of used slots and shifting all
  * other entries in the array forward one spot.
+ * @memslot is a detached dummy struct with just .id and .as_id filled.
  */
 static inline void kvm_memslot_delete(struct kvm_memslots *slots,
 				      struct kvm_memory_slot *memslot)
 {
 	struct kvm_memory_slot *mslots = slots->memslots;
+	struct kvm_memory_slot *oldslot = id_to_memslot(slots, memslot->id);
 	int i;
 
-	if (WARN_ON(slots->id_to_index[memslot->id] == -1))
+	if (WARN_ON(!oldslot))
 		return;
 
 	slots->used_slots--;
@@ -1277,12 +1307,17 @@ static inline void kvm_memslot_delete(struct kvm_memslots *slots,
 	if (atomic_read(&slots->last_used_slot) >= slots->used_slots)
 		atomic_set(&slots->last_used_slot, 0);
 
-	for (i = slots->id_to_index[memslot->id]; i < slots->used_slots; i++) {
-		mslots[i] = mslots[i + 1];
-		slots->id_to_index[mslots[i].id] = i;
-	}
+	/*
+	 * Remove the to-be-deleted memslot from the list _before_ shifting
+	 * the trailing memslots forward, its data will be overwritten.
+	 * Defer the (somewhat pointless) copying of the memslot until after
+	 * the last slot has been shifted to avoid overwriting said last slot.
+	 */
+	kvm_replace_memslot(slots, oldslot, NULL);
+
+	for (i = oldslot - mslots; i < slots->used_slots; i++)
+		kvm_shift_memslot(slots, i, i + 1);
 	mslots[i] = *memslot;
-	slots->id_to_index[memslot->id] = -1;
 }
 
 /*
@@ -1300,30 +1335,39 @@ static inline int kvm_memslot_insert_back(struct kvm_memslots *slots)
  * itself is not preserved in the array, i.e. not swapped at this time, only
  * its new index into the array is tracked. Returns the changed memslot's
  * current index into the memslots array.
+ * The memslot at the returned index will not be in @slots->id_hash by then.
+ * @memslot is a detached struct with desired final data of the changed slot.
  */
 static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
 					    struct kvm_memory_slot *memslot)
 {
 	struct kvm_memory_slot *mslots = slots->memslots;
+	struct kvm_memory_slot *oldslot = id_to_memslot(slots, memslot->id);
 	int i;
 
-	if (slots->id_to_index[memslot->id] == -1 || !slots->used_slots)
+	if (!oldslot || !slots->used_slots)
 		return -1;
 
+	/*
+	 * Delete the slot from the hash table before sorting the remaining
+	 * slots, the slot's data may be overwritten when copying slots as part
+	 * of the sorting proccess.  update_memslots() will unconditionally
+	 * rewrite the entire slot and re-add it to the hash table.
+	 */
+	kvm_replace_memslot(slots, oldslot, NULL);
+
 	/*
 	 * Move the target memslot backward in the array by shifting existing
 	 * memslots with a higher GFN (than the target memslot) towards the
 	 * front of the array.
 	 */
-	for (i = slots->id_to_index[memslot->id]; i < slots->used_slots - 1; i++) {
+	for (i = oldslot - mslots; i < slots->used_slots - 1; i++) {
 		if (memslot->base_gfn > mslots[i + 1].base_gfn)
 			break;
 
 		WARN_ON_ONCE(memslot->base_gfn == mslots[i + 1].base_gfn);
 
-		/* Shift the next memslot forward one and update its index. */
-		mslots[i] = mslots[i + 1];
-		slots->id_to_index[mslots[i].id] = i;
+		kvm_shift_memslot(slots, i, i + 1);
 	}
 	return i;
 }
@@ -1334,6 +1378,10 @@ static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
 * is not preserved in the array, i.e.
not swapped at this time, only its new
 * index into the array is tracked. Returns the changed memslot's final index
 * into the memslots array.
+ * The memslot at the returned index will not be in @slots->id_hash by then.
+ * @memslot is a detached struct with desired final data of the new or
+ * changed slot.
+ * Assumes that the memslot at @start index is not in @slots->id_hash.
  */
 static inline int kvm_memslot_move_forward(struct kvm_memslots *slots,
 					   struct kvm_memory_slot *memslot,
@@ -1348,9 +1396,7 @@ static inline int kvm_memslot_move_forward(struct kvm_memslots *slots,
 
 		WARN_ON_ONCE(memslot->base_gfn == mslots[i - 1].base_gfn);
 
-		/* Shift the next memslot back one and update its index. */
-		mslots[i] = mslots[i - 1];
-		slots->id_to_index[mslots[i].id] = i;
+		kvm_shift_memslot(slots, i, i - 1);
 	}
 	return i;
 }
@@ -1395,6 +1441,9 @@ static inline int kvm_memslot_move_forward(struct kvm_memslots *slots,
  * most likely to be referenced, sorting it to the front of the array was
  * advantageous. The current binary search starts from the middle of the array
  * and uses an LRU pointer to improve performance for all memslots and GFNs.
+ *
+ * @memslot is a detached struct, not a part of the current or new memslot
+ * array.
  */
 static void update_memslots(struct kvm_memslots *slots,
 			    struct kvm_memory_slot *memslot,
@@ -1419,7 +1468,7 @@ static void update_memslots(struct kvm_memslots *slots,
 		 * its index accordingly.
 		 */
 		slots->memslots[i] = *memslot;
-		slots->id_to_index[memslot->id] = i;
+		kvm_replace_memslot(slots, NULL, &slots->memslots[i]);
 	}
 }
 
@@ -1512,6 +1561,7 @@ static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
 {
 	struct kvm_memslots *slots;
 	size_t new_size;
+	struct kvm_memory_slot *memslot;
 
 	if (change == KVM_MR_CREATE)
 		new_size = kvm_memslots_size(old->used_slots + 1);
@@ -1519,8 +1569,14 @@ static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
 		new_size = kvm_memslots_size(old->used_slots);
 
 	slots = kvzalloc(new_size, GFP_KERNEL_ACCOUNT);
-	if (likely(slots))
-		memcpy(slots, old, kvm_memslots_size(old->used_slots));
+	if (unlikely(!slots))
+		return NULL;
+
+	memcpy(slots, old, kvm_memslots_size(old->used_slots));
+
+	hash_init(slots->id_hash);
+	kvm_for_each_memslot(memslot, slots)
+		hash_add(slots->id_hash, &memslot->id_node, memslot->id);
 
 	return slots;
 }
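The shape of the new lookup is a classic intrusive chained hash table. A
userspace model, with plain singly-linked chains standing in for the
kernel's hlist/DECLARE_HASHTABLE machinery (the 7 in the patch means
2^7 = 128 buckets); everything below is illustrative, not the kernel's API:

	#include <stddef.h>
	#include <stdio.h>

	#define NR_BUCKETS 128	/* matches DECLARE_HASHTABLE(id_hash, 7) */

	struct slot {
		int id;
		struct slot *next;  /* intrusive chain, like hlist_node */
	};

	static struct slot *buckets[NR_BUCKETS];

	static void slot_add(struct slot *s)
	{
		int b = s->id % NR_BUCKETS;

		s->next = buckets[b];
		buckets[b] = s;
	}

	/* O(chain length) instead of a dense 32k-entry id_to_index array. */
	static struct slot *id_to_slot(int id)
	{
		struct slot *s;

		for (s = buckets[id % NR_BUCKETS]; s; s = s->next)
			if (s->id == id)
				return s;
		return NULL;
	}

	int main(void)
	{
		struct slot a = { .id = 3 }, b = { .id = 131 };

		slot_add(&a);
		slot_add(&b);	/* 131 % 128 == 3: collides with id 3 */
		printf("found %d, miss %d\n", id_to_slot(131)->id,
		       id_to_slot(4) == NULL);
		return 0;
	}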
From patchwork Thu Nov 4 00:25:25 2021
Date: Thu, 4 Nov 2021 00:25:25 +0000
Message-Id: <20211104002531.1176691-25-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>
Subject: [PATCH v5.5 24/30] KVM: Use interval tree to do fast hva lookup in memslots
From: Sean Christopherson
From: Maciej S. Szmigiero

The current memslots implementation only allows quick binary search by
gfn, quick lookup by hva is not possible - the implementation has to do
a linear scan of the whole memslots array, even though the operation
being performed might apply just to a single memslot.

This significantly hurts performance of per-hva operations with higher
memslot counts.

Since hva ranges can overlap between memslots an interval tree is needed
for tracking them.

Signed-off-by: Maciej S. Szmigiero
[sean: handle interval tree updates in kvm_replace_memslot()]
Signed-off-by: Sean Christopherson
---
 arch/arm64/kvm/Kconfig   |  1 +
 arch/mips/kvm/Kconfig    |  1 +
 arch/powerpc/kvm/Kconfig |  1 +
 arch/s390/kvm/Kconfig    |  1 +
 arch/x86/kvm/Kconfig     |  1 +
 include/linux/kvm_host.h |  3 ++
 virt/kvm/kvm_main.c      | 60 +++++++++++++++++++++++++++++-----------
 7 files changed, 52 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index d7eec0b43744..42185dcc9596 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -38,6 +38,7 @@ menuconfig KVM
 	select HAVE_KVM_IRQ_BYPASS
 	select HAVE_KVM_VCPU_RUN_PID_CHANGE
 	select SCHED_INFO
+	select INTERVAL_TREE
 	help
 	  Support hosting virtualized guest machines.
 
diff --git a/arch/mips/kvm/Kconfig b/arch/mips/kvm/Kconfig
index a77297480f56..91d197bee9c0 100644
--- a/arch/mips/kvm/Kconfig
+++ b/arch/mips/kvm/Kconfig
@@ -27,6 +27,7 @@ config KVM
 	select KVM_MMIO
 	select MMU_NOTIFIER
 	select SRCU
+	select INTERVAL_TREE
 	help
 	  Support for hosting Guest kernels.
 
diff --git a/arch/powerpc/kvm/Kconfig b/arch/powerpc/kvm/Kconfig
index ff581d70f20c..e4c24f524ba8 100644
--- a/arch/powerpc/kvm/Kconfig
+++ b/arch/powerpc/kvm/Kconfig
@@ -26,6 +26,7 @@ config KVM
 	select KVM_VFIO
 	select IRQ_BYPASS_MANAGER
 	select HAVE_KVM_IRQ_BYPASS
+	select INTERVAL_TREE
 
 config KVM_BOOK3S_HANDLER
 	bool
diff --git a/arch/s390/kvm/Kconfig b/arch/s390/kvm/Kconfig
index 67a8e770e369..2e84d3922f7c 100644
--- a/arch/s390/kvm/Kconfig
+++ b/arch/s390/kvm/Kconfig
@@ -33,6 +33,7 @@ config KVM
 	select HAVE_KVM_NO_POLL
 	select SRCU
 	select KVM_VFIO
+	select INTERVAL_TREE
 	help
 	  Support hosting paravirtualized guest machines using the SIE
 	  virtualization capability on the mainframe.
	  This should work
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 619186138176..7618bef0a4a9 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -43,6 +43,7 @@ config KVM
 	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
 	select KVM_VFIO
 	select SRCU
+	select INTERVAL_TREE
 	select HAVE_KVM_PM_NOTIFIER if PM
 	help
 	  Support hosting fully virtualized guest machines using hardware
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 81003e3acd53..d0363e2ba098 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -30,6 +30,7 @@
 #include
 #include
 #include
+#include <linux/interval_tree.h>
 #include
 #include
 
@@ -427,6 +428,7 @@ static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
 
 struct kvm_memory_slot {
 	struct hlist_node id_node;
+	struct interval_tree_node hva_node;
 	gfn_t base_gfn;
 	unsigned long npages;
 	unsigned long *dirty_bitmap;
@@ -528,6 +530,7 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
  */
 struct kvm_memslots {
 	u64 generation;
+	struct rb_root_cached hva_tree;
 	/* The mapping table from slot id to the index in memslots[]. */
 	DECLARE_HASHTABLE(id_hash, 7);
 	atomic_t last_used_slot;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 13c497abaab8..f2235c430e64 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -498,6 +498,12 @@ static void kvm_null_fn(void)
 }
 #define IS_KVM_NULL_FN(fn) ((fn) == (void *)kvm_null_fn)
 
+/* Iterate over each memslot intersecting [start, last] (inclusive) range */
+#define kvm_for_each_memslot_in_hva_range(node, slots, start, last)	     \
+	for (node = interval_tree_iter_first(&slots->hva_tree, start, last); \
+	     node;							     \
+	     node = interval_tree_iter_next(node, start, last))
+
 static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 						  const struct kvm_hva_range *range)
 {
@@ -507,6 +513,9 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 	struct kvm_memslots *slots;
 	int i, idx;
 
+	if (WARN_ON_ONCE(range->end <= range->start))
+		return 0;
+
 	/* A null handler is allowed if and only if on_lock() is provided. */
 	if (WARN_ON_ONCE(IS_KVM_NULL_FN(range->on_lock) &&
 			 IS_KVM_NULL_FN(range->handler)))
@@ -515,15 +524,17 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 	idx = srcu_read_lock(&kvm->srcu);
 
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		struct interval_tree_node *node;
+
 		slots = __kvm_memslots(kvm, i);
-		kvm_for_each_memslot(slot, slots) {
+		kvm_for_each_memslot_in_hva_range(node, slots,
+						  range->start, range->end - 1) {
 			unsigned long hva_start, hva_end;
 
+			slot = container_of(node, struct kvm_memory_slot, hva_node);
 			hva_start = max(range->start, slot->userspace_addr);
 			hva_end = min(range->end, slot->userspace_addr +
 						  (slot->npages << PAGE_SHIFT));
-			if (hva_start >= hva_end)
-				continue;
 
 			/*
 			 * To optimize for the likely case where the address
@@ -859,6 +870,7 @@ static struct kvm_memslots *kvm_alloc_memslots(void)
 	if (!slots)
 		return NULL;
 
+	slots->hva_tree = RB_ROOT_CACHED;
 	hash_init(slots->id_hash);
 
 	return slots;
@@ -1262,22 +1274,32 @@ static void kvm_replace_memslot(struct kvm_memslots *slots,
 				struct kvm_memory_slot *new)
 {
 	/*
-	 * Remove the old memslot from the hash list, copying the node data
-	 * would corrupt the list.
+	 * Remove the old memslot from the hash list and interval tree, copying
+	 * the node data would corrupt the structures.
 	 */
 	if (old) {
 		hash_del(&old->id_node);
+		interval_tree_remove(&old->hva_node, &slots->hva_tree);
 
 		if (!new)
 			return;
 	}
 
-	/* Copy the source *data*, not the pointer, to the destination.
-	 */
-	if (old)
+	/*
+	 * Copy the source *data*, not the pointer, to the destination.  If
+	 * @old is NULL, initialize @new's hva range.
+	 */
+	if (old) {
 		*new = *old;
+	} else if (new) {
+		new->hva_node.start = new->userspace_addr;
+		new->hva_node.last = new->userspace_addr +
+				     (new->npages << PAGE_SHIFT) - 1;
+	}
 
 	/* (Re)Add the new memslot. */
 	hash_add(slots->id_hash, &new->id_node, new->id);
+	interval_tree_insert(&new->hva_node, &slots->hva_tree);
 }
 
 static void kvm_shift_memslot(struct kvm_memslots *slots, int dst, int src)
@@ -1308,7 +1330,7 @@ static inline void kvm_memslot_delete(struct kvm_memslots *slots,
 		atomic_set(&slots->last_used_slot, 0);
 
 	/*
-	 * Remove the to-be-deleted memslot from the list _before_ shifting
+	 * Remove the to-be-deleted memslot from the list/tree _before_ shifting
 	 * the trailing memslots forward, its data will be overwritten.
 	 * Defer the (somewhat pointless) copying of the memslot until after
 	 * the last slot has been shifted to avoid overwriting said last slot.
@@ -1335,7 +1357,8 @@ static inline int kvm_memslot_insert_back(struct kvm_memslots *slots)
  * itself is not preserved in the array, i.e. not swapped at this time, only
  * its new index into the array is tracked. Returns the changed memslot's
  * current index into the memslots array.
- * The memslot at the returned index will not be in @slots->id_hash by then.
+ * The memslot at the returned index will not be in @slots->hva_tree or
+ * @slots->id_hash by then.
  * @memslot is a detached struct with desired final data of the changed slot.
  */
 static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
@@ -1349,10 +1372,10 @@ static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
 		return -1;
 
 	/*
-	 * Delete the slot from the hash table before sorting the remaining
-	 * slots, the slot's data may be overwritten when copying slots as part
-	 * of the sorting proccess.  update_memslots() will unconditionally
-	 * rewrite the entire slot and re-add it to the hash table.
+	 * Delete the slot from the hash table and interval tree before sorting
+	 * the remaining slots, the slot's data may be overwritten when copying
+	 * slots as part of the sorting proccess.  update_memslots() will
+	 * unconditionally rewrite and re-add the entire slot.
	 */
 	kvm_replace_memslot(slots, oldslot, NULL);
 
@@ -1378,10 +1401,12 @@ static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
 * is not preserved in the array, i.e. not swapped at this time, only its new
 * index into the array is tracked. Returns the changed memslot's final index
 * into the memslots array.
- * The memslot at the returned index will not be in @slots->id_hash by then.
+ * The memslot at the returned index will not be in @slots->hva_tree or
+ * @slots->id_hash by then.
 * @memslot is a detached struct with desired final data of the new or
 * changed slot.
- * Assumes that the memslot at @start index is not in @slots->id_hash.
+ * Assumes that the memslot at @start index is not in @slots->hva_tree or
+ * @slots->id_hash.
  */
 static inline int kvm_memslot_move_forward(struct kvm_memslots *slots,
 					   struct kvm_memory_slot *memslot,
@@ -1574,9 +1599,12 @@ static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
 
 	memcpy(slots, old, kvm_memslots_size(old->used_slots));
 
+	slots->hva_tree = RB_ROOT_CACHED;
 	hash_init(slots->id_hash);
-	kvm_for_each_memslot(memslot, slots)
+	kvm_for_each_memslot(memslot, slots) {
+		interval_tree_insert(&memslot->hva_node, &slots->hva_tree);
 		hash_add(slots->id_hash, &memslot->id_node, memslot->id);
+	}
 
 	return slots;
 }
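The contract of kvm_for_each_memslot_in_hva_range() is worth spelling out:
it must yield every slot whose hva range overlaps the queried range, where
both the tree nodes and the query use inclusive endpoints (hence the
range->end - 1 at the call site). The kernel gets this from an augmented
rbtree via interval_tree_iter_first()/next(); a linear scan models the same
contract in plain C (everything here is a userspace sketch):

	#include <stdio.h>

	struct slot { unsigned long start, last; };  /* inclusive hva range */

	/* Two inclusive ranges [s->start, s->last] and [start, last]
	 * overlap iff each one begins no later than the other ends. */
	static int overlaps(const struct slot *s, unsigned long start,
			    unsigned long last)
	{
		return s->start <= last && start <= s->last;
	}

	int main(void)
	{
		struct slot slots[] = { { 0, 4095 }, { 8192, 12287 } };
		/* Exclusive [4096, 8193) becomes inclusive [4096, 8192]. */
		unsigned long q_start = 4096, q_last = 8192;

		for (int i = 0; i < 2; i++)
			if (overlaps(&slots[i], q_start, q_last))
				printf("slot %d overlaps\n", i); /* slot 1 */
		return 0;
	}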
From patchwork Thu Nov 4 00:25:26 2021
Date: Thu, 4 Nov 2021 00:25:26 +0000
Message-Id: <20211104002531.1176691-26-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>
Subject: [PATCH v5.5 25/30] KVM: s390: Introduce kvm_s390_get_gfn_end()
From: Sean Christopherson

From: Maciej S. Szmigiero

And use it where s390 code would just access the memslot with the
highest gfn directly.

No functional change intended.

Signed-off-by: Maciej S. Szmigiero
---
 arch/s390/kvm/kvm-s390.c |  2 +-
 arch/s390/kvm/kvm-s390.h | 12 ++++++++++++
 arch/s390/kvm/pv.c       |  4 +---
 3 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 4e032e176216..f7cc0853866b 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2012,7 +2012,7 @@ static int kvm_s390_get_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
 	if (!ms)
 		return 0;
 	next_gfn = kvm_s390_next_dirty_cmma(slots, cur_gfn + 1);
-	mem_end = slots->memslots[0].base_gfn + slots->memslots[0].npages;
+	mem_end = kvm_s390_get_gfn_end(slots);
 
 	while (args->count < bufsize) {
 		hva = gfn_to_hva(kvm, cur_gfn);
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 52bc8fbaa60a..207d299d7fea 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -208,6 +208,18 @@ static inline int kvm_s390_user_cpu_state_ctrl(struct kvm *kvm)
 	return kvm->arch.user_cpu_state_ctrl != 0;
 }
 
+/* get the end gfn of the last (highest gfn) memslot */
+static inline unsigned long kvm_s390_get_gfn_end(struct kvm_memslots *slots)
+{
+	struct kvm_memory_slot *ms;
+
+	if (WARN_ON(!slots->used_slots))
+		return 0;
+
+	ms = slots->memslots;
+	return ms->base_gfn + ms->npages;
+}
+
 /* implemented in pv.c */
 int kvm_s390_pv_destroy_cpu(struct kvm_vcpu *vcpu, u16 *rc, u16 *rrc);
 int kvm_s390_pv_create_cpu(struct kvm_vcpu *vcpu, u16 *rc, u16 *rrc);
diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
index c8841f476e91..e51cccfded25 100644
--- a/arch/s390/kvm/pv.c
+++ b/arch/s390/kvm/pv.c
@@ -117,7 +117,6 @@ static int kvm_s390_pv_alloc_vm(struct kvm *kvm)
 	unsigned long base = uv_info.guest_base_stor_len;
 	unsigned long virt = uv_info.guest_virt_var_stor_len;
 	unsigned long npages = 0, vlen = 0;
-	struct kvm_memory_slot *memslot;
 
 	kvm->arch.pv.stor_var = NULL;
 	kvm->arch.pv.stor_base = __get_free_pages(GFP_KERNEL_ACCOUNT, get_order(base));
@@ -131,8 +130,7 @@ static int kvm_s390_pv_alloc_vm(struct kvm *kvm)
 	 * Slots are sorted by GFN
 	 */
 	mutex_lock(&kvm->slots_lock);
-	memslot = kvm_memslots(kvm)->memslots;
-	npages = memslot->base_gfn + memslot->npages;
+	npages = kvm_s390_get_gfn_end(kvm_memslots(kvm));
 	mutex_unlock(&kvm->slots_lock);
 
 	kvm->arch.pv.guest_len = npages * PAGE_SIZE;
From patchwork Thu Nov 4 00:25:27 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12602277
Date: Thu, 4 Nov 2021 00:25:27 +0000
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>
Message-Id: <20211104002531.1176691-27-seanjc@google.com>
References: <20211104002531.1176691-1-seanjc@google.com>
Subject: [PATCH v5.5 26/30] KVM: Keep memslots in tree-based structures instead of array-based ones
From: Sean Christopherson
From: Maciej S. Szmigiero

The current memslot code uses a (reverse gfn-ordered) memslot array for
keeping track of memslots.

Because the memslot array that is currently in use cannot be modified,
every memslot management operation (create, delete, move, change flags)
has to make a copy of the whole array so it has a scratch copy to work
on.

Strictly speaking, however, it is only necessary to make a copy of the
memslot that is being modified; copying all the memslots currently
present is just a limitation of the array-based memslot implementation.

Two memslot sets, however, are still needed so the VM continues to run
on the currently active set while the requested operation is being
performed on the second, currently inactive one.

In order to have two memslot sets, but only one copy of the actual
memslots, it is necessary to split out the memslot data from the
memslot sets.

The memslots themselves should also be kept independent of each other
so they can be individually added or deleted.

These two memslot sets should normally point to the same set of
memslots. They can, however, be desynchronized when performing a
memslot management operation by replacing the memslot to be modified
by its copy. After the operation is complete, both memslot sets once
again point to the same, common set of memslot data.

This commit implements the aforementioned idea.

For tracking of gfns an ordinary rbtree is used since memslots cannot
overlap in the guest address space and so this data structure is
sufficient for ensuring that lookups are done quickly.

The "last used slot" mini-caches (both the per-slot-set one and the
per-vCPU one), which keep track of the last found-by-gfn memslot, are
still present in the new code.

Signed-off-by: Maciej S. Szmigiero
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
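To make the description above concrete, here is a minimal kernel-style sketch of the two-set layout with invented demo types (demo_slot and demo_set, not the structures this patch adds): each slot embeds one link per set, so a single copy of the slot data can be a member of both the active and the inactive set at the same time, and a gfn lookup is an ordinary rbtree descent.

#include <linux/rbtree.h>

struct demo_slot {
	struct rb_node gfn_node[2];	/* one embedded link per set */
	unsigned long base_gfn;
	unsigned long npages;
};

struct demo_set {
	struct rb_root gfn_tree;
	int node_idx;			/* which gfn_node[] element this set uses */
};

/* gfn lookup, mirroring the search_memslots() walk in the diff below */
static struct demo_slot *demo_find(struct demo_set *set, unsigned long gfn)
{
	struct rb_node *node = set->gfn_tree.rb_node;
	int idx = set->node_idx;

	while (node) {
		struct demo_slot *s =
			container_of(node, struct demo_slot, gfn_node[idx]);

		if (gfn >= s->base_gfn) {
			if (gfn < s->base_gfn + s->npages)
				return s;
			node = node->rb_right;
		} else {
			node = node->rb_left;
		}
	}
	return NULL;
}

Because the two set descriptors differ only in node_idx, swapping the active and inactive sets is a pointer swap plus SRCU synchronization; a management operation then touches O(log n) tree nodes for the one affected slot instead of copying the whole slot array.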
---
 arch/arm64/kvm/mmu.c                |   8 +-
 arch/powerpc/kvm/book3s_64_mmu_hv.c |   4 +-
 arch/powerpc/kvm/book3s_hv.c        |   3 +-
 arch/powerpc/kvm/book3s_hv_nested.c |   4 +-
 arch/powerpc/kvm/book3s_hv_uvmem.c  |  14 +-
 arch/s390/kvm/kvm-s390.c            |  24 +-
 arch/s390/kvm/kvm-s390.h            |   6 +-
 arch/x86/kvm/debugfs.c              |   6 +-
 arch/x86/kvm/mmu/mmu.c              |   8 +-
 include/linux/kvm_host.h            | 141 +++--
 virt/kvm/kvm_main.c                 | 809 ++++++++++++++--------------
 11 files changed, 524 insertions(+), 503 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a76718388cbd..c27f472b4d24 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -210,13 +210,13 @@ static void stage2_flush_vm(struct kvm *kvm)
 {
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
-	int idx;
+	int idx, bkt;
 
 	idx = srcu_read_lock(&kvm->srcu);
 	spin_lock(&kvm->mmu_lock);
 
 	slots = kvm_memslots(kvm);
-	kvm_for_each_memslot(memslot, slots)
+	kvm_for_each_memslot(memslot, bkt, slots)
 		stage2_flush_memslot(kvm, memslot);
 
 	spin_unlock(&kvm->mmu_lock);
@@ -595,14 +595,14 @@ void stage2_unmap_vm(struct kvm *kvm)
 {
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
-	int idx;
+	int idx, bkt;
 
 	idx = srcu_read_lock(&kvm->srcu);
 	mmap_read_lock(current->mm);
 	spin_lock(&kvm->mmu_lock);
 
 	slots = kvm_memslots(kvm);
-	kvm_for_each_memslot(memslot, slots)
+	kvm_for_each_memslot(memslot, bkt, slots)
 		stage2_unmap_memslot(kvm, memslot);
 
 	spin_unlock(&kvm->mmu_lock);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index c63e263312a4..213232914367 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -734,11 +734,11 @@ void kvmppc_rmap_reset(struct kvm *kvm)
 {
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
-	int srcu_idx;
+	int srcu_idx, bkt;
 
 	srcu_idx = srcu_read_lock(&kvm->srcu);
 	slots = kvm_memslots(kvm);
-	kvm_for_each_memslot(memslot, slots) {
+	kvm_for_each_memslot(memslot, bkt, slots) {
 		/* Mutual exclusion with kvm_unmap_hva_range etc.
 		 */
 		spin_lock(&kvm->mmu_lock);
 		/*
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 4d40c1867be5..2a97363c9a31 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -5854,11 +5854,12 @@ static int kvmhv_svm_off(struct kvm *kvm)
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
 		struct kvm_memory_slot *memslot;
 		struct kvm_memslots *slots = __kvm_memslots(kvm, i);
+		int bkt;
 
 		if (!slots)
 			continue;
 
-		kvm_for_each_memslot(memslot, slots) {
+		kvm_for_each_memslot(memslot, bkt, slots) {
 			kvmppc_uvmem_drop_pages(memslot, kvm, true);
 			uv_unregister_mem_slot(kvm->arch.lpid, memslot->id);
 		}
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index ed8a2c9f5629..9435e482d514 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -749,7 +749,7 @@ void kvmhv_release_all_nested(struct kvm *kvm)
 	struct kvm_nested_guest *gp;
 	struct kvm_nested_guest *freelist = NULL;
 	struct kvm_memory_slot *memslot;
-	int srcu_idx;
+	int srcu_idx, bkt;
 
 	spin_lock(&kvm->mmu_lock);
 	for (i = 0; i <= kvm->arch.max_nested_lpid; i++) {
@@ -770,7 +770,7 @@ void kvmhv_release_all_nested(struct kvm *kvm)
 	}
 	srcu_idx = srcu_read_lock(&kvm->srcu);
-	kvm_for_each_memslot(memslot, kvm_memslots(kvm))
+	kvm_for_each_memslot(memslot, bkt, kvm_memslots(kvm))
 		kvmhv_free_memslot_nest_rmap(memslot);
 	srcu_read_unlock(&kvm->srcu, srcu_idx);
 }
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index a7061ee3b157..adc1c495d47c 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -459,7 +459,7 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot, *m;
 	int ret = H_SUCCESS;
-	int srcu_idx;
+	int srcu_idx, bkt;
 
 	kvm->arch.secure_guest = KVMPPC_SECURE_INIT_START;
@@ -478,7 +478,7 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
 
 	/* register the memslot */
 	slots = kvm_memslots(kvm);
-	kvm_for_each_memslot(memslot, slots) {
+	kvm_for_each_memslot(memslot, bkt, slots) {
 		ret = __kvmppc_uvmem_memslot_create(kvm, memslot);
 		if (ret)
 			break;
@@ -486,7 +486,7 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
 
 	if (ret) {
 		slots = kvm_memslots(kvm);
-		kvm_for_each_memslot(m, slots) {
+		kvm_for_each_memslot(m, bkt, slots) {
 			if (m == memslot)
 				break;
 			__kvmppc_uvmem_memslot_delete(kvm, memslot);
@@ -647,7 +647,7 @@ void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *slot,
 
 unsigned long kvmppc_h_svm_init_abort(struct kvm *kvm)
 {
-	int srcu_idx;
+	int srcu_idx, bkt;
 	struct kvm_memory_slot *memslot;
 
 	/*
@@ -662,7 +662,7 @@ unsigned long kvmppc_h_svm_init_abort(struct kvm *kvm)
 
 	srcu_idx = srcu_read_lock(&kvm->srcu);
-	kvm_for_each_memslot(memslot, kvm_memslots(kvm))
+	kvm_for_each_memslot(memslot, bkt, kvm_memslots(kvm))
 		kvmppc_uvmem_drop_pages(memslot, kvm, false);
 
 	srcu_read_unlock(&kvm->srcu, srcu_idx);
@@ -821,7 +821,7 @@ unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
 {
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
-	int srcu_idx;
+	int srcu_idx, bkt;
 	long ret = H_SUCCESS;
 
 	if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START))
@@ -830,7 +830,7 @@ unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
 	/* migrate any unmoved normal pfn to device pfns*/
 	srcu_idx = srcu_read_lock(&kvm->srcu);
 	slots = kvm_memslots(kvm);
-	kvm_for_each_memslot(memslot, slots) {
+	kvm_for_each_memslot(memslot, bkt, slots) {
 		ret = kvmppc_uv_migrate_mem_slot(kvm, memslot);
 		if (ret) {
 			/*
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index f7cc0853866b..f2c12456a047 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -1035,13 +1035,13 @@ static int kvm_s390_vm_start_migration(struct kvm *kvm)
 	struct kvm_memory_slot *ms;
 	struct kvm_memslots *slots;
 	unsigned long ram_pages = 0;
-	int slotnr;
+	int bkt;
 
 	/* migration mode already enabled */
 	if (kvm->arch.migration_mode)
 		return 0;
 
 	slots = kvm_memslots(kvm);
-	if (!slots || !slots->used_slots)
+	if (!slots || kvm_memslots_empty(slots))
 		return -EINVAL;
 
 	if (!kvm->arch.use_cmma) {
@@ -1049,8 +1049,7 @@ static int kvm_s390_vm_start_migration(struct kvm *kvm)
 		return 0;
 	}
 	/* mark all the pages in active slots as dirty */
-	for (slotnr = 0; slotnr < slots->used_slots; slotnr++) {
-		ms = slots->memslots + slotnr;
+	kvm_for_each_memslot(ms, bkt, slots) {
 		if (!ms->dirty_bitmap)
 			return -EINVAL;
 		/*
@@ -1974,22 +1973,21 @@ static unsigned long kvm_s390_next_dirty_cmma(struct kvm_memslots *slots,
 					      unsigned long cur_gfn)
 {
 	struct kvm_memory_slot *ms = gfn_to_memslot_approx(slots, cur_gfn);
-	int slotidx = ms - slots->memslots;
 	unsigned long ofs = cur_gfn - ms->base_gfn;
+	struct rb_node *mnode = &ms->gfn_node[slots->node_idx];
 
 	if (ms->base_gfn + ms->npages <= cur_gfn) {
-		slotidx--;
+		mnode = rb_next(mnode);
 		/* If we are above the highest slot, wrap around */
-		if (slotidx < 0)
-			slotidx = slots->used_slots - 1;
+		if (!mnode)
+			mnode = rb_first(&slots->gfn_tree);
 
-		ms = slots->memslots + slotidx;
+		ms = container_of(mnode, struct kvm_memory_slot, gfn_node[slots->node_idx]);
 		ofs = 0;
 	}
 	ofs = find_next_bit(kvm_second_dirty_bitmap(ms), ms->npages, ofs);
-	while ((slotidx > 0) && (ofs >= ms->npages)) {
-		slotidx--;
-		ms = slots->memslots + slotidx;
+	while (ofs >= ms->npages && (mnode = rb_next(mnode))) {
+		ms = container_of(mnode, struct kvm_memory_slot, gfn_node[slots->node_idx]);
 		ofs = find_next_bit(kvm_second_dirty_bitmap(ms), ms->npages, 0);
 	}
 	return ms->base_gfn + ofs;
@@ -2002,7 +2000,7 @@ static int kvm_s390_get_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	struct kvm_memory_slot *ms;
 
-	if (unlikely(!slots->used_slots))
+	if (unlikely(kvm_memslots_empty(slots)))
 		return 0;
 
 	cur_gfn = kvm_s390_next_dirty_cmma(slots, args->start_gfn);
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 207d299d7fea..a8769c1b5cec 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -211,12 +211,14 @@ static inline int kvm_s390_user_cpu_state_ctrl(struct kvm *kvm)
 /* get the end gfn of the last (highest gfn) memslot */
 static inline unsigned long kvm_s390_get_gfn_end(struct kvm_memslots *slots)
 {
+	struct rb_node *node;
 	struct kvm_memory_slot *ms;
 
-	if (WARN_ON(!slots->used_slots))
+	if (WARN_ON(kvm_memslots_empty(slots)))
 		return 0;
 
-	ms = slots->memslots;
+	node = rb_last(&slots->gfn_tree);
+	ms = container_of(node, struct kvm_memory_slot, gfn_node[slots->node_idx]);
 	return ms->base_gfn + ms->npages;
 }
diff --git a/arch/x86/kvm/debugfs.c b/arch/x86/kvm/debugfs.c
index 54a83a744538..543a8c04025c 100644
--- a/arch/x86/kvm/debugfs.c
+++ b/arch/x86/kvm/debugfs.c
@@ -107,9 +107,10 @@ static int kvm_mmu_rmaps_stat_show(struct seq_file *m, void *v)
 	write_lock(&kvm->mmu_lock);
 
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		int bkt;
+
 		slots = __kvm_memslots(kvm, i);
-		for (j = 0; j < slots->used_slots; j++) {
-			slot = &slots->memslots[j];
+		kvm_for_each_memslot(slot, bkt, slots)
 			for (k = 0; k < KVM_NR_PAGE_SIZES; k++) {
 				rmap = slot->arch.rmap[k];
 				lpage_size = kvm_mmu_slot_lpages(slot, k + 1);
@@ -121,7 +122,6 @@ static int kvm_mmu_rmaps_stat_show(struct seq_file *m, void *v)
 					cur[index]++;
 				}
 			}
-		}
 	}
 
 	write_unlock(&kvm->mmu_lock);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 564781585fd2..09ff0ccaa203 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3405,7 +3405,7 @@ static int mmu_first_shadow_root_alloc(struct kvm *kvm)
 {
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *slot;
-	int r = 0, i;
+	int r = 0, i, bkt;
 
 	/*
	 * Check if this is the first shadow root being allocated before
@@ -3430,7 +3430,7 @@ static int mmu_first_shadow_root_alloc(struct kvm *kvm)
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
 		slots = __kvm_memslots(kvm, i);
-		kvm_for_each_memslot(slot, slots) {
+		kvm_for_each_memslot(slot, bkt, slots) {
 			/*
			 * Both of these functions are no-ops if the target is
			 * already allocated, so unconditionally calling both
@@ -5716,14 +5716,14 @@ static bool __kvm_zap_rmaps(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	struct kvm_memslots *slots;
 	bool flush = false;
 	gfn_t start, end;
-	int i;
+	int i, bkt;
 
 	if (!kvm_memslots_have_rmaps(kvm))
 		return flush;
 
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
 		slots = __kvm_memslots(kvm, i);
-		kvm_for_each_memslot(memslot, slots) {
+		kvm_for_each_memslot(memslot, bkt, slots) {
 			start = max(gfn_start, memslot->base_gfn);
 			end = min(gfn_end, memslot->base_gfn + memslot->npages);
 			if (start >= end)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d0363e2ba098..6888f3c2e04b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -357,11 +358,13 @@ struct kvm_vcpu {
 	struct kvm_dirty_ring dirty_ring;
 
 	/*
-	 * The index of the most recently used memslot by this vCPU. It's ok
-	 * if this becomes stale due to memslot changes since we always check
-	 * it is a valid slot.
+	 * The most recently used memslot by this vCPU and the slots generation
+	 * for which it is valid.
+	 * No wraparound protection is needed since generations won't overflow in
+	 * thousands of years, even assuming 1M memslot operations per second.
	 */
-	int last_used_slot;
+	struct kvm_memory_slot *last_used_slot;
+	u64 last_used_slot_gen;
 };
 
 /* must be called with irqs disabled */
@@ -426,9 +429,26 @@ static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
  */
 #define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)
 
+/*
+ * Since at idle each memslot belongs to two memslot sets it has to contain
+ * two embedded nodes for each data structure that it forms a part of.
+ *
+ * Two memslot sets (one active and one inactive) are necessary so the VM
+ * continues to run on one memslot set while the other is being modified.
+ *
+ * These two memslot sets normally point to the same set of memslots.
+ * They can, however, be desynchronized when performing a memslot management
+ * operation by replacing the memslot to be modified by its copy.
+ * After the operation is complete, both memslot sets once again point to
+ * the same, common set of memslot data.
+ *
+ * The memslots themselves are independent of each other so they can be
+ * individually added or deleted.
+ */
 struct kvm_memory_slot {
-	struct hlist_node id_node;
-	struct interval_tree_node hva_node;
+	struct hlist_node id_node[2];
+	struct interval_tree_node hva_node[2];
+	struct rb_node gfn_node[2];
 	gfn_t base_gfn;
 	unsigned long npages;
 	unsigned long *dirty_bitmap;
@@ -523,19 +543,14 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
 }
 #endif
 
-/*
- * Note:
- * memslots are not sorted by id anymore, please use id_to_memslot()
- * to get the memslot by its id.
- */
 struct kvm_memslots {
 	u64 generation;
+	atomic_long_t last_used_slot;
 	struct rb_root_cached hva_tree;
-	/* The mapping table from slot id to the index in memslots[]. */
+	struct rb_root gfn_tree;
+	/* The mapping table from slot id to memslot. */
 	DECLARE_HASHTABLE(id_hash, 7);
-	atomic_t last_used_slot;
-	int used_slots;
-	struct kvm_memory_slot memslots[];
+	int node_idx;
 };
 
 struct kvm {
@@ -557,6 +572,9 @@ struct kvm {
 	struct mutex slots_arch_lock;
 	struct mm_struct *mm; /* userspace tied to this vm */
 	unsigned long nr_memslot_pages;
+	/* The two memslot sets - active and inactive (per address space) */
+	struct kvm_memslots __memslots[KVM_ADDRESS_SPACE_NUM][2];
+	/* The current active memslot set for each address space */
 	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
@@ -725,11 +743,10 @@ static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id)
 	return NULL;
 }
 
-#define kvm_for_each_memslot(memslot, slots)				\
-	for (memslot = &slots->memslots[0];				\
-	     memslot < slots->memslots + slots->used_slots; memslot++)	\
-		if (WARN_ON_ONCE(!memslot->npages)) {			\
-		} else
+static inline int kvm_vcpu_get_idx(struct kvm_vcpu *vcpu)
+{
+	return vcpu->vcpu_idx;
+}
 
 void kvm_vcpu_destroy(struct kvm_vcpu *vcpu);
 
@@ -791,12 +808,23 @@ static inline struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu)
 	return __kvm_memslots(vcpu->kvm, as_id);
 }
 
+static inline bool kvm_memslots_empty(struct kvm_memslots *slots)
+{
+	return RB_EMPTY_ROOT(&slots->gfn_tree);
+}
+
+#define kvm_for_each_memslot(memslot, bkt, slots)			       \
+	hash_for_each(slots->id_hash, bkt, memslot, id_node[slots->node_idx]) \
+		if (WARN_ON_ONCE(!memslot->npages)) {			       \
+		} else
+
 static inline
 struct kvm_memory_slot *id_to_memslot(struct kvm_memslots *slots, int id)
 {
 	struct kvm_memory_slot *slot;
+	int idx = slots->node_idx;
 
-	hash_for_each_possible(slots->id_hash, slot, id_node, id) {
+	hash_for_each_possible(slots->id_hash, slot, id_node[idx], id) {
 		if (slot->id == id)
 			return slot;
 	}
@@ -1204,25 +1232,15 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id);
 bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args);
 
 /*
- * Returns a pointer to the memslot at slot_index if it contains gfn.
+ * Returns a pointer to the memslot if it contains gfn.
 * Otherwise returns NULL.
 */
 static inline struct kvm_memory_slot *
-try_get_memslot(struct kvm_memslots *slots, int slot_index, gfn_t gfn)
+try_get_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	struct kvm_memory_slot *slot;
-
-	if (slot_index < 0 || slot_index >= slots->used_slots)
+	if (!slot)
 		return NULL;
 
-	/*
-	 * slot_index can come from vcpu->last_used_slot which is not kept
-	 * in sync with userspace-controllable memslot deletion. So use nospec
-	 * to prevent the CPU from speculating past the end of memslots[].
-	 */
-	slot_index = array_index_nospec(slot_index, slots->used_slots);
-	slot = &slots->memslots[slot_index];
-
 	if (gfn >= slot->base_gfn && gfn < slot->base_gfn + slot->npages)
 		return slot;
 	else
@@ -1230,65 +1248,46 @@ try_get_memslot(struct kvm_memslots *slots, int slot_index, gfn_t gfn)
 }
 
 /*
- * Returns a pointer to the memslot that contains gfn and records the index of
- * the slot in index. Otherwise returns NULL.
+ * Returns a pointer to the memslot that contains gfn. Otherwise returns NULL.
 *
 * With "approx" set returns the memslot also when the address falls
 * in a hole. In that case one of the memslots bordering the hole is
 * returned.
- *
- * IMPORTANT: Slots are sorted from highest GFN to lowest GFN!
 */
 static inline struct kvm_memory_slot *
-search_memslots(struct kvm_memslots *slots, gfn_t gfn, int *index, bool approx)
+search_memslots(struct kvm_memslots *slots, gfn_t gfn, bool approx)
 {
-	int start = 0, end = slots->used_slots;
-	struct kvm_memory_slot *memslots = slots->memslots;
 	struct kvm_memory_slot *slot;
+	struct rb_node *node;
+	int idx = slots->node_idx;
 
-	if (unlikely(!slots->used_slots))
-		return NULL;
-
-	while (start < end) {
-		int slot = start + (end - start) / 2;
-
-		if (gfn >= memslots[slot].base_gfn)
-			end = slot;
-		else
-			start = slot + 1;
-	}
-
-	if (approx && start >= slots->used_slots) {
-		*index = slots->used_slots - 1;
-		return &memslots[slots->used_slots - 1];
-	}
-
-	slot = try_get_memslot(slots, start, gfn);
-	if (slot) {
-		*index = start;
-		return slot;
-	}
-	if (approx) {
-		*index = start;
-		return &memslots[start];
+	slot = NULL;
+	for (node = slots->gfn_tree.rb_node; node; ) {
+		slot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
+		if (gfn >= slot->base_gfn) {
+			if (gfn < slot->base_gfn + slot->npages)
+				return slot;
+			node = node->rb_right;
+		} else
+			node = node->rb_left;
 	}
 
-	return NULL;
+	return approx ? slot : NULL;
 }
 
 static inline struct kvm_memory_slot *
 ____gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn, bool approx)
 {
 	struct kvm_memory_slot *slot;
-	int slot_index = atomic_read(&slots->last_used_slot);
 
-	slot = try_get_memslot(slots, slot_index, gfn);
+	slot = (struct kvm_memory_slot *)atomic_long_read(&slots->last_used_slot);
+	slot = try_get_memslot(slot, gfn);
 	if (slot)
 		return slot;
 
-	slot = search_memslots(slots, gfn, &slot_index, approx);
+	slot = search_memslots(slots, gfn, approx);
 	if (slot) {
-		atomic_set(&slots->last_used_slot, slot_index);
+		atomic_long_set(&slots->last_used_slot, (unsigned long)slot);
 		return slot;
 	}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f2235c430e64..d095e01838bf 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -432,7 +432,7 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
 	vcpu->preempted = false;
 	vcpu->ready = false;
 	preempt_notifier_init(&vcpu->preempt_notifier, &kvm_preempt_ops);
-	vcpu->last_used_slot = 0;
+	vcpu->last_used_slot = NULL;
 }
 
 void kvm_vcpu_destroy(struct kvm_vcpu *vcpu)
@@ -531,7 +531,7 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 					     range->start, range->end - 1) {
 			unsigned long hva_start, hva_end;
 
-			slot = container_of(node, struct kvm_memory_slot, hva_node);
+			slot = container_of(node, struct kvm_memory_slot, hva_node[slots->node_idx]);
 			hva_start = max(range->start, slot->userspace_addr);
 			hva_end = min(range->end, slot->userspace_addr +
 				      (slot->npages << PAGE_SHIFT));
@@ -862,20 +862,6 @@ static void kvm_destroy_pm_notifier(struct kvm *kvm)
 }
 #endif /* CONFIG_HAVE_KVM_PM_NOTIFIER */
 
-static struct kvm_memslots *kvm_alloc_memslots(void)
-{
-	struct kvm_memslots *slots;
-
-	slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL_ACCOUNT);
-	if (!slots)
-		return NULL;
-
-	slots->hva_tree = RB_ROOT_CACHED;
-	hash_init(slots->id_hash);
-
-	return slots;
-}
-
 static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
 {
 	if (!memslot->dirty_bitmap)
@@ -885,27 +871,33 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
 	memslot->dirty_bitmap = NULL;
 }
 
+/* This does not remove the slot from struct kvm_memslots data structures */
 static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
 	kvm_destroy_dirty_bitmap(slot);
 
 	kvm_arch_free_memslot(kvm, slot);
 
-	slot->flags = 0;
-	slot->npages = 0;
+	kfree(slot);
 }
 
 static void kvm_free_memslots(struct kvm *kvm, struct kvm_memslots *slots)
 {
+	struct hlist_node *idnode;
 	struct kvm_memory_slot *memslot;
+	int bkt;
 
-	if (!slots)
+	/*
+	 * The same memslot objects live in both active and inactive sets,
+	 * arbitrarily free using index '1' so the second invocation of this
+	 * function isn't operating over a structure with dangling pointers
+	 * (even though this function isn't actually touching them).
+	 */
+	if (!slots->node_idx)
 		return;
 
-	kvm_for_each_memslot(memslot, slots)
+	hash_for_each_safe(slots->id_hash, bkt, idnode, memslot, id_node[1])
 		kvm_free_memslot(kvm, memslot);
-
-	kvfree(slots);
 }
 
 static umode_t kvm_stats_debugfs_mode(const struct _kvm_stats_desc *pdesc)
@@ -1044,8 +1036,9 @@ int __weak kvm_arch_create_vm_debugfs(struct kvm *kvm)
 static struct kvm *kvm_create_vm(unsigned long type)
 {
 	struct kvm *kvm = kvm_arch_alloc_vm();
+	struct kvm_memslots *slots;
 	int r = -ENOMEM;
-	int i;
+	int i, j;
 
 	if (!kvm)
 		return ERR_PTR(-ENOMEM);
@@ -1072,13 +1065,20 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	refcount_set(&kvm->users_count, 1);
 
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		struct kvm_memslots *slots = kvm_alloc_memslots();
+		for (j = 0; j < 2; j++) {
+			slots = &kvm->__memslots[i][j];
 
-		if (!slots)
-			goto out_err_no_arch_destroy_vm;
-		/* Generations must be different for each address space. */
-		slots->generation = i;
-		rcu_assign_pointer(kvm->memslots[i], slots);
+			atomic_long_set(&slots->last_used_slot, (unsigned long)NULL);
+			slots->hva_tree = RB_ROOT_CACHED;
+			slots->gfn_tree = RB_ROOT;
+			hash_init(slots->id_hash);
+			slots->node_idx = j;
+
+			/* Generations must be different for each address space. */
+			slots->generation = i;
+		}
+
+		rcu_assign_pointer(kvm->memslots[i], &kvm->__memslots[i][0]);
 	}
 
 	for (i = 0; i < KVM_NR_BUSES; i++) {
@@ -1132,8 +1132,6 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	WARN_ON_ONCE(!refcount_dec_and_test(&kvm->users_count));
 	for (i = 0; i < KVM_NR_BUSES; i++)
 		kfree(kvm_get_bus(kvm, i));
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
-		kvm_free_memslots(kvm, __kvm_memslots(kvm, i));
 	cleanup_srcu_struct(&kvm->irq_srcu);
 out_err_no_irq_srcu:
 	cleanup_srcu_struct(&kvm->srcu);
@@ -1198,8 +1196,10 @@ static void kvm_destroy_vm(struct kvm *kvm)
 #endif
 	kvm_arch_destroy_vm(kvm);
 	kvm_destroy_devices(kvm);
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
-		kvm_free_memslots(kvm, __kvm_memslots(kvm, i));
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		kvm_free_memslots(kvm, &kvm->__memslots[i][0]);
+		kvm_free_memslots(kvm, &kvm->__memslots[i][1]);
+	}
 	cleanup_srcu_struct(&kvm->irq_srcu);
 	cleanup_srcu_struct(&kvm->srcu);
 	kvm_arch_free_vm(kvm);
@@ -1269,231 +1269,136 @@ static int kvm_alloc_dirty_bitmap(struct kvm_memory_slot *memslot)
 	return 0;
 }
 
-static void kvm_replace_memslot(struct kvm_memslots *slots,
+static struct kvm_memslots *kvm_get_inactive_memslots(struct kvm *kvm, int as_id)
+{
+	struct kvm_memslots *active = __kvm_memslots(kvm, as_id);
+	int node_idx_inactive = active->node_idx ^ 1;
+
+	return &kvm->__memslots[as_id][node_idx_inactive];
+}
+
+/*
+ * Helper to get the address space ID when one of the memslot pointers may be
+ * NULL. This also serves as a sanity check that at least one of the pointers
+ * is non-NULL, and that their address space IDs don't diverge.
+ */
+static int kvm_memslots_get_as_id(struct kvm_memory_slot *a,
+				  struct kvm_memory_slot *b)
+{
+	if (WARN_ON_ONCE(!a && !b))
+		return 0;
+
+	if (!a)
+		return b->as_id;
+	if (!b)
+		return a->as_id;
+
+	WARN_ON_ONCE(a->as_id != b->as_id);
+	return a->as_id;
+}
+
+static void kvm_insert_gfn_node(struct kvm_memslots *slots,
+				struct kvm_memory_slot *slot)
+{
+	struct rb_root *gfn_tree = &slots->gfn_tree;
+	struct rb_node **node, *parent;
+	int idx = slots->node_idx;
+
+	parent = NULL;
+	for (node = &gfn_tree->rb_node; *node; ) {
+		struct kvm_memory_slot *tmp;
+
+		tmp = container_of(*node, struct kvm_memory_slot, gfn_node[idx]);
+		parent = *node;
+		if (slot->base_gfn < tmp->base_gfn)
+			node = &(*node)->rb_left;
+		else if (slot->base_gfn > tmp->base_gfn)
+			node = &(*node)->rb_right;
+		else
+			BUG();
+	}
+
+	rb_link_node(&slot->gfn_node[idx], parent, node);
+	rb_insert_color(&slot->gfn_node[idx], gfn_tree);
+}
+
+static void kvm_erase_gfn_node(struct kvm_memslots *slots,
+			       struct kvm_memory_slot *slot)
+{
+	rb_erase(&slot->gfn_node[slots->node_idx], &slots->gfn_tree);
+}
+
+static void kvm_replace_gfn_node(struct kvm_memslots *slots,
+				 struct kvm_memory_slot *old,
+				 struct kvm_memory_slot *new)
+{
+	int idx = slots->node_idx;
+
+	WARN_ON_ONCE(old->base_gfn != new->base_gfn);
+
+	rb_replace_node(&old->gfn_node[idx], &new->gfn_node[idx],
+			&slots->gfn_tree);
+}
+
+/*
+ * Replace @old with @new in the inactive memslots.
+ *
+ * With NULL @old this simply adds @new.
+ * With NULL @new this simply removes @old.
+ *
+ * If @new is non-NULL its hva_node[slots_idx] range has to be set
+ * appropriately.
+ */
+static void kvm_replace_memslot(struct kvm *kvm,
 				struct kvm_memory_slot *old,
 				struct kvm_memory_slot *new)
 {
-	/*
-	 * Remove the old memslot from the hash list and interval tree, copying
-	 * the node data would corrupt the structures.
-	 */
+	int as_id = kvm_memslots_get_as_id(old, new);
+	struct kvm_memslots *slots = kvm_get_inactive_memslots(kvm, as_id);
+	int idx = slots->node_idx;
+
 	if (old) {
-		hash_del(&old->id_node);
-		interval_tree_remove(&old->hva_node, &slots->hva_tree);
+		hash_del(&old->id_node[idx]);
+		interval_tree_remove(&old->hva_node[idx], &slots->hva_tree);
 
-		if (!new)
+		if ((long)old == atomic_long_read(&slots->last_used_slot))
+			atomic_long_set(&slots->last_used_slot, (long)new);
+
+		if (!new) {
+			kvm_erase_gfn_node(slots, old);
 			return;
+		}
 	}
 
 	/*
-	 * Copy the source *data*, not the pointer, to the destination. If
-	 * @old is NULL, initialize @new's hva range.
+	 * Initialize @new's hva range. Do this even when replacing an @old
+	 * slot, kvm_copy_memslot() deliberately does not touch node data.
	 */
-	if (old) {
-		*new = *old;
-	} else if (new) {
-		new->hva_node.start = new->userspace_addr;
-		new->hva_node.last = new->userspace_addr +
-				     (new->npages << PAGE_SHIFT) - 1;
-	}
-
-	/* (Re)Add the new memslot. */
-	hash_add(slots->id_hash, &new->id_node, new->id);
-	interval_tree_insert(&new->hva_node, &slots->hva_tree);
-}
-
-static void kvm_shift_memslot(struct kvm_memslots *slots, int dst, int src)
-{
-	struct kvm_memory_slot *mslots = slots->memslots;
-
-	kvm_replace_memslot(slots, &mslots[src], &mslots[dst]);
-}
-
-/*
- * Delete a memslot by decrementing the number of used slots and shifting all
- * other entries in the array forward one spot.
- * @memslot is a detached dummy struct with just .id and .as_id filled.
- */
-static inline void kvm_memslot_delete(struct kvm_memslots *slots,
-				      struct kvm_memory_slot *memslot)
-{
-	struct kvm_memory_slot *mslots = slots->memslots;
-	struct kvm_memory_slot *oldslot = id_to_memslot(slots, memslot->id);
-	int i;
-
-	if (WARN_ON(!oldslot))
-		return;
-
-	slots->used_slots--;
-
-	if (atomic_read(&slots->last_used_slot) >= slots->used_slots)
-		atomic_set(&slots->last_used_slot, 0);
+	new->hva_node[idx].start = new->userspace_addr;
+	new->hva_node[idx].last = new->userspace_addr +
+				  (new->npages << PAGE_SHIFT) - 1;
 
 	/*
-	 * Remove the to-be-deleted memslot from the list/tree _before_ shifting
-	 * the trailing memslots forward, its data will be overwritten.
-	 * Defer the (somewhat pointless) copying of the memslot until after
-	 * the last slot has been shifted to avoid overwriting said last slot.
+	 * (Re)Add the new memslot. There is no O(1) interval_tree_replace(),
+	 * hva_node needs to be swapped with remove+insert even though hva can't
+	 * change when replacing an existing slot.
	 */
-	kvm_replace_memslot(slots, oldslot, NULL);
-
-	for (i = oldslot - mslots; i < slots->used_slots; i++)
-		kvm_shift_memslot(slots, i, i + 1);
-	mslots[i] = *memslot;
-}
-
-/*
- * "Insert" a new memslot by incrementing the number of used slots. Returns
- * the new slot's initial index into the memslots array.
- */
-static inline int kvm_memslot_insert_back(struct kvm_memslots *slots)
-{
-	return slots->used_slots++;
-}
-
-/*
- * Move a changed memslot backwards in the array by shifting existing slots
- * with a higher GFN toward the front of the array. Note, the changed memslot
- * itself is not preserved in the array, i.e. not swapped at this time, only
- * its new index into the array is tracked. Returns the changed memslot's
- * current index into the memslots array.
- * The memslot at the returned index will not be in @slots->hva_tree or
- * @slots->id_hash by then.
- * @memslot is a detached struct with desired final data of the changed slot.
- */
-static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
-					    struct kvm_memory_slot *memslot)
-{
-	struct kvm_memory_slot *mslots = slots->memslots;
-	struct kvm_memory_slot *oldslot = id_to_memslot(slots, memslot->id);
-	int i;
-
-	if (!oldslot || !slots->used_slots)
-		return -1;
+	hash_add(slots->id_hash, &new->id_node[idx], new->id);
+	interval_tree_insert(&new->hva_node[idx], &slots->hva_tree);
 
 	/*
-	 * Delete the slot from the hash table and interval tree before sorting
-	 * the remaining slots, the slot's data may be overwritten when copying
-	 * slots as part of the sorting proccess.  update_memslots() will
-	 * unconditionally rewrite and re-add the entire slot.
+	 * If the memslot gfn is unchanged, rb_replace_node() can be used to
+	 * switch the node in the gfn tree instead of removing the old and
+	 * inserting the new as two separate operations. Replacement is a
+	 * single O(1) operation versus two O(log(n)) operations for
+	 * remove+insert.
	 */
-	kvm_replace_memslot(slots, oldslot, NULL);
-
-	/*
-	 * Move the target memslot backward in the array by shifting existing
-	 * memslots with a higher GFN (than the target memslot) towards the
-	 * front of the array.
-	 */
-	for (i = oldslot - mslots; i < slots->used_slots - 1; i++) {
-		if (memslot->base_gfn > mslots[i + 1].base_gfn)
-			break;
-
-		WARN_ON_ONCE(memslot->base_gfn == mslots[i + 1].base_gfn);
-
-		kvm_shift_memslot(slots, i, i + 1);
-	}
-	return i;
-}
-
-/*
- * Move a changed memslot forwards in the array by shifting existing slots with
- * a lower GFN toward the back of the array. Note, the changed memslot itself
- * is not preserved in the array, i.e. not swapped at this time, only its new
- * index into the array is tracked. Returns the changed memslot's final index
- * into the memslots array.
- * The memslot at the returned index will not be in @slots->hva_tree or
- * @slots->id_hash by then.
- * @memslot is a detached struct with desired final data of the new or
- * changed slot.
- * Assumes that the memslot at @start index is not in @slots->hva_tree or
- * @slots->id_hash.
- */
-static inline int kvm_memslot_move_forward(struct kvm_memslots *slots,
-					   struct kvm_memory_slot *memslot,
-					   int start)
-{
-	struct kvm_memory_slot *mslots = slots->memslots;
-	int i;
-
-	for (i = start; i > 0; i--) {
-		if (memslot->base_gfn < mslots[i - 1].base_gfn)
-			break;
-
-		WARN_ON_ONCE(memslot->base_gfn == mslots[i - 1].base_gfn);
-
-		kvm_shift_memslot(slots, i, i - 1);
-	}
-	return i;
-}
-
-/*
- * Re-sort memslots based on their GFN to account for an added, deleted, or
- * moved memslot. Sorting memslots by GFN allows using a binary search during
- * memslot lookup.
- *
- * IMPORTANT: Slots are sorted from highest GFN to lowest GFN! I.e. the entry
- * at memslots[0] has the highest GFN.
- *
- * The sorting algorithm takes advantage of having initially sorted memslots
- * and knowing the position of the changed memslot. Sorting is also optimized
- * by not swapping the updated memslot and instead only shifting other memslots
- * and tracking the new index for the update memslot. Only once its final
- * index is known is the updated memslot copied into its position in the array.
- *
- * - When deleting a memslot, the deleted memslot simply needs to be moved to
- *   the end of the array.
- *
- * - When creating a memslot, the algorithm "inserts" the new memslot at the
- *   end of the array and then it forward to its correct location.
- *
- * - When moving a memslot, the algorithm first moves the updated memslot
- *   backward to handle the scenario where the memslot's GFN was changed to a
- *   lower value.  update_memslots() then falls through and runs the same flow
- *   as creating a memslot to move the memslot forward to handle the scenario
- *   where its GFN was changed to a higher value.
- *
- * Note, slots are sorted from highest->lowest instead of lowest->highest for
- * historical reasons.  Originally, invalid memslots where denoted by having
- * GFN=0, thus sorting from highest->lowest naturally sorted invalid memslots
- * to the end of the array.  The current algorithm uses dedicated logic to
- * delete a memslot and thus does not rely on invalid memslots having GFN=0.
- *
- * The other historical motiviation for highest->lowest was to improve the
- * performance of memslot lookup.  KVM originally used a linear search starting
- * at memslots[0].  On x86, the largest memslot usually has one of the highest,
- * if not *the* highest, GFN, as the bulk of the guest's RAM is located in a
- * single memslot above the 4gb boundary.  As the largest memslot is also the
- * most likely to be referenced, sorting it to the front of the array was
- * advantageous.  The current binary search starts from the middle of the array
- * and uses an LRU pointer to improve performance for all memslots and GFNs.
- *
- * @memslot is a detached struct, not a part of the current or new memslot
- * array.
- */
-static void update_memslots(struct kvm_memslots *slots,
-			    struct kvm_memory_slot *memslot,
-			    enum kvm_mr_change change)
-{
-	int i;
-
-	if (change == KVM_MR_DELETE) {
-		kvm_memslot_delete(slots, memslot);
+	if (old && old->base_gfn == new->base_gfn) {
+		kvm_replace_gfn_node(slots, old, new);
 	} else {
-		if (change == KVM_MR_CREATE)
-			i = kvm_memslot_insert_back(slots);
-		else
-			i = kvm_memslot_move_backward(slots, memslot);
-		i = kvm_memslot_move_forward(slots, memslot, i);
-
-		if (WARN_ON_ONCE(i < 0))
-			return;
-
-		/*
-		 * Copy the memslot to its new position in memslots and update
-		 * its index accordingly.
-		 */
-		slots->memslots[i] = *memslot;
-		kvm_replace_memslot(slots, NULL, &slots->memslots[i]);
+		if (old)
+			kvm_erase_gfn_node(slots, old);
+		kvm_insert_gfn_node(slots, new);
 	}
 }
 
@@ -1511,11 +1416,12 @@ static int check_memory_region_flags(const struct kvm_userspace_memory_region *m
 	return 0;
 }
 
-static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
-						 int as_id, struct kvm_memslots *slots)
+static void kvm_swap_active_memslots(struct kvm *kvm, int as_id)
 {
-	struct kvm_memslots *old_memslots = __kvm_memslots(kvm, as_id);
-	u64 gen = old_memslots->generation;
+	struct kvm_memslots *slots = kvm_get_inactive_memslots(kvm, as_id);
+
+	/* Grab the generation from the active memslots. */
+	u64 gen = __kvm_memslots(kvm, as_id)->generation;
 
 	WARN_ON(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS);
 	slots->generation = gen | KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS;
@@ -1566,58 +1472,6 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 	kvm_arch_memslots_updated(kvm, gen);
 
 	slots->generation = gen;
-
-	return old_memslots;
-}
-
-static size_t kvm_memslots_size(int slots)
-{
-	return sizeof(struct kvm_memslots) +
-	       (sizeof(struct kvm_memory_slot) * slots);
-}
-
-/*
- * Note, at a minimum, the current number of used slots must be allocated, even
- * when deleting a memslot, as we need a complete duplicate of the memslots for
- * use when invalidating a memslot prior to deleting/moving the memslot.
- */
-static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
-					     enum kvm_mr_change change)
-{
-	struct kvm_memslots *slots;
-	size_t new_size;
-	struct kvm_memory_slot *memslot;
-
-	if (change == KVM_MR_CREATE)
-		new_size = kvm_memslots_size(old->used_slots + 1);
-	else
-		new_size = kvm_memslots_size(old->used_slots);
-
-	slots = kvzalloc(new_size, GFP_KERNEL_ACCOUNT);
-	if (unlikely(!slots))
-		return NULL;
-
-	memcpy(slots, old, kvm_memslots_size(old->used_slots));
-
-	slots->hva_tree = RB_ROOT_CACHED;
-	hash_init(slots->id_hash);
-	kvm_for_each_memslot(memslot, slots) {
-		interval_tree_insert(&memslot->hva_node, &slots->hva_tree);
-		hash_add(slots->id_hash, &memslot->id_node, memslot->id);
-	}
-
-	return slots;
-}
-
-static void kvm_copy_memslots_arch(struct kvm_memslots *to,
-				   struct kvm_memslots *from)
-{
-	int i;
-
-	WARN_ON_ONCE(to->used_slots != from->used_slots);
-
-	for (i = 0; i < from->used_slots; i++)
-		to->memslots[i].arch = from->memslots[i].arch;
 }
 
 static int kvm_prepare_memory_region(struct kvm *kvm,
@@ -1672,31 +1526,214 @@ static void kvm_commit_memory_region(struct kvm *kvm,
 
 	kvm_arch_commit_memory_region(kvm, old, new, change);
 
On DELETE, free the whole thing, - * otherwise free the dirty bitmap as needed (the below effectively - * checks both the flags and whether a ring buffer is being used). - */ - if (change == KVM_MR_DELETE) + switch (change) { + case KVM_MR_CREATE: + /* Nothing more to do. */ + break; + case KVM_MR_DELETE: + /* Free the old memslot and all its metadata. */ kvm_free_memslot(kvm, old); - else if (old->dirty_bitmap && !new->dirty_bitmap) - kvm_destroy_dirty_bitmap(old); + break; + case KVM_MR_MOVE: + case KVM_MR_FLAGS_ONLY: + /* + * Free the dirty bitmap as needed; the below check encompasses + * both the flags and whether a ring buffer is being used) + */ + if (old->dirty_bitmap && !new->dirty_bitmap) + kvm_destroy_dirty_bitmap(old); + + /* + * The final quirk. Free the detached, old slot, but only its + * memory, not any metadata. Metadata, including arch specific + * data, may be reused by @new. + */ + kfree(old); + break; + default: + BUG(); + } +} + +/* + * Activate @new, which must be installed in the inactive slots by the caller, + * by swapping the active slots and then propagating @new to @old once @old is + * unreachable and can be safely modified. + * + * With NULL @old this simply adds @new to @active (while swapping the sets). + * With NULL @new this simply removes @old from @active and frees it + * (while also swapping the sets). + */ +static void kvm_activate_memslot(struct kvm *kvm, + struct kvm_memory_slot *old, + struct kvm_memory_slot *new) +{ + int as_id = kvm_memslots_get_as_id(old, new); + + kvm_swap_active_memslots(kvm, as_id); + + /* Propagate the new memslot to the now inactive memslots. */ + kvm_replace_memslot(kvm, old, new); +} + +static void kvm_copy_memslot(struct kvm_memory_slot *dest, + const struct kvm_memory_slot *src) +{ + dest->base_gfn = src->base_gfn; + dest->npages = src->npages; + dest->dirty_bitmap = src->dirty_bitmap; + dest->arch = src->arch; + dest->userspace_addr = src->userspace_addr; + dest->flags = src->flags; + dest->id = src->id; + dest->as_id = src->as_id; +} + +static void kvm_invalidate_memslot(struct kvm *kvm, + struct kvm_memory_slot *old, + struct kvm_memory_slot *working_slot) +{ + /* + * Mark the current slot INVALID. As with all memslot modifications, + * this must be done on an unreachable slot to avoid modifying the + * current slot in the active tree. + */ + kvm_copy_memslot(working_slot, old); + working_slot->flags |= KVM_MEMSLOT_INVALID; + kvm_replace_memslot(kvm, old, working_slot); + + /* + * Activate the slot that is now marked INVALID, but don't propagate + * the slot to the now inactive slots. The slot is either going to be + * deleted or recreated as a new slot. + */ + kvm_swap_active_memslots(kvm, old->as_id); + + /* + * From this point no new shadow pages pointing to a deleted, or moved, + * memslot will be created. Validation of sp->gfn happens in: + * - gfn_to_hva (kvm_read_guest, gfn_to_pfn) + * - kvm_is_visible_gfn (mmu_check_root) + */ + kvm_arch_flush_shadow_memslot(kvm, old); + + /* Was released by kvm_swap_active_memslots, reacquire. */ + mutex_lock(&kvm->slots_arch_lock); + + /* + * Copy the arch-specific field of the newly-installed slot back to the + * old slot as the arch data could have changed between releasing + * slots_arch_lock in install_new_memslots() and re-acquiring the lock + * above. Writers are required to retrieve memslots *after* acquiring + * slots_arch_lock, thus the active slot's data is guaranteed to be fresh. 
+ */ + old->arch = working_slot->arch; +} + +static void kvm_create_memslot(struct kvm *kvm, + const struct kvm_memory_slot *new, + struct kvm_memory_slot *working) +{ + /* + * Add the new memslot to the inactive set as a copy of the + * new memslot data provided by userspace. + */ + kvm_copy_memslot(working, new); + kvm_replace_memslot(kvm, NULL, working); + kvm_activate_memslot(kvm, NULL, working); +} + +static void kvm_delete_memslot(struct kvm *kvm, + struct kvm_memory_slot *old, + struct kvm_memory_slot *invalid_slot) +{ + /* + * Remove the old memslot (in the inactive memslots) by passing NULL as + * the "new" slot. + */ + kvm_replace_memslot(kvm, old, NULL); + + /* And do the same for the invalid version in the active slot. */ + kvm_activate_memslot(kvm, invalid_slot, NULL); + + /* Free the invalid slot, the caller will clean up the old slot. */ + kfree(invalid_slot); +} + +static struct kvm_memory_slot *kvm_move_memslot(struct kvm *kvm, + struct kvm_memory_slot *old, + const struct kvm_memory_slot *new, + struct kvm_memory_slot *invalid_slot) +{ + struct kvm_memslots *slots = kvm_get_inactive_memslots(kvm, old->as_id); + + /* + * The memslot's gfn is changing, remove it from the inactive tree, it + * will be re-added with its updated gfn. Because its range is + * changing, an in-place replace is not possible. + */ + kvm_erase_gfn_node(slots, old); + + /* + * The old slot is now fully disconnected, reuse its memory for the + * persistent copy of "new". + */ + kvm_copy_memslot(old, new); + + /* Re-add to the gfn tree with the updated gfn */ + kvm_insert_gfn_node(slots, old); + + /* Replace the current INVALID slot with the updated memslot. */ + kvm_activate_memslot(kvm, invalid_slot, old); + + /* + * Clear the INVALID flag so that the invalid_slot is now a perfect + * copy of the old slot. Return it for cleanup in the caller. + */ + WARN_ON_ONCE(!(invalid_slot->flags & KVM_MEMSLOT_INVALID)); + invalid_slot->flags &= ~KVM_MEMSLOT_INVALID; + return invalid_slot; +} + +static void kvm_update_flags_memslot(struct kvm *kvm, + struct kvm_memory_slot *old, + const struct kvm_memory_slot *new, + struct kvm_memory_slot *working_slot) +{ + /* + * Similar to the MOVE case, but the slot doesn't need to be zapped as + * an intermediate step. Instead, the old memslot is simply replaced + * with a new, updated copy in both memslot sets. + */ + kvm_copy_memslot(working_slot, new); + kvm_replace_memslot(kvm, old, working_slot); + kvm_activate_memslot(kvm, old, working_slot); } static int kvm_set_memslot(struct kvm *kvm, + struct kvm_memory_slot *old, struct kvm_memory_slot *new, enum kvm_mr_change change) { - struct kvm_memory_slot *slot, old; - struct kvm_memslots *slots; + struct kvm_memory_slot *working; int r; /* - * Released in install_new_memslots. + * Modifications are done on an unreachable slot. Any changes are then + * (eventually) propagated to both the active and inactive slots. This + * allocation would ideally be on-demand (in helpers), but is done here + * to avoid having to handle failure after kvm_prepare_memory_region(). + */ + working = kzalloc(sizeof(*working), GFP_KERNEL_ACCOUNT); + if (!working) + return -ENOMEM; + + /* + * Released in kvm_swap_active_memslots. * * Must be held from before the current memslots are copied until * after the new memslots are installed with rcu_assign_pointer, - * then released before the synchronize srcu in install_new_memslots. + * then released before the synchronize srcu in kvm_swap_active_memslots. 
* * When modifying memslots outside of the slots_lock, must be held * before reading the pointer to the current memslots until after all @@ -1707,87 +1744,60 @@ static int kvm_set_memslot(struct kvm *kvm, */ mutex_lock(&kvm->slots_arch_lock); - slots = kvm_dup_memslots(__kvm_memslots(kvm, new->as_id), change); - if (!slots) { - mutex_unlock(&kvm->slots_arch_lock); - return -ENOMEM; - } - - if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) { - /* - * Note, the INVALID flag needs to be in the appropriate entry - * in the freshly allocated memslots, not in @old or @new. - */ - slot = id_to_memslot(slots, new->id); - slot->flags |= KVM_MEMSLOT_INVALID; - - /* - * We can re-use the old memslots, the only difference from the - * newly installed memslots is the invalid flag, which will get - * dropped by update_memslots anyway. We'll also revert to the - * old memslots if preparing the new memory region fails. - */ - slots = install_new_memslots(kvm, new->as_id, slots); - - /* From this point no new shadow pages pointing to a deleted, - * or moved, memslot will be created. - * - * validation of sp->gfn happens in: - * - gfn_to_hva (kvm_read_guest, gfn_to_pfn) - * - kvm_is_visible_gfn (mmu_check_root) - */ - kvm_arch_flush_shadow_memslot(kvm, slot); - - /* Released in install_new_memslots. */ - mutex_lock(&kvm->slots_arch_lock); - - /* - * The arch-specific fields of the now-active memslots could - * have been modified between releasing slots_arch_lock in - * install_new_memslots and re-acquiring slots_arch_lock above. - * Copy them to the inactive memslots. Arch code is required - * to retrieve memslots *after* acquiring slots_arch_lock, thus - * the active memslots are guaranteed to be fresh. - */ - kvm_copy_memslots_arch(slots, __kvm_memslots(kvm, new->as_id)); - } - /* - * Make a full copy of the old memslot, the pointer will become stale - * when the memslots are re-sorted by update_memslots(), and the old - * memslot needs to be referenced after calling update_memslots(), e.g. - * to free its resources and for arch specific behavior. This needs to - * happen *after* (re)acquiring slots_arch_lock. + * Invalidate the old slot if it's being deleted or moved. This is + * done prior to actually deleting/moving the memslot to allow vCPUs to + * continue running by ensuring there are no mappings or shadow pages + * for the memslot when it is deleted/moved. Without pre-invalidation + * (and without a lock), a window would exist between effecting the + * delete/move and committing the changes in arch code where KVM or a + * guest could access a non-existent memslot. */ - slot = id_to_memslot(slots, new->id); - if (slot) { - old = *slot; - } else { - WARN_ON_ONCE(change != KVM_MR_CREATE); - memset(&old, 0, sizeof(old)); - old.id = new->id; - old.as_id = new->as_id; - } - - r = kvm_prepare_memory_region(kvm, &old, new, change); - if (r) - goto out_slots; - - update_memslots(slots, new, change); - slots = install_new_memslots(kvm, new->as_id, slots); - - kvm_commit_memory_region(kvm, &old, new, change); - - kvfree(slots); - return 0; - -out_slots: if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) - slots = install_new_memslots(kvm, new->as_id, slots); + kvm_invalidate_memslot(kvm, old, working); + + r = kvm_prepare_memory_region(kvm, old, new, change); + if (r) { + /* + * For DELETE/MOVE, revert the above INVALID change. No + * modifications required since the original slot was preserved + * in the inactive slots. Changing the active memslots also + * release slots_arch_lock. 
+		 */
+		if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
+			kvm_activate_memslot(kvm, working, old);
+		else
+			mutex_unlock(&kvm->slots_arch_lock);
+		kfree(working);
+		return r;
+	}
+
+	/*
+	 * For DELETE and MOVE, the working slot is now active as the INVALID
+	 * version of the old slot.  MOVE is particularly special as it reuses
+	 * the old slot and returns a copy of the old slot (in working_slot).
+	 * For CREATE, there is no old slot.  For DELETE and FLAGS_ONLY, the
+	 * old slot is detached but otherwise preserved.
+	 */
+	if (change == KVM_MR_CREATE)
+		kvm_create_memslot(kvm, new, working);
+	else if (change == KVM_MR_DELETE)
+		kvm_delete_memslot(kvm, old, working);
+	else if (change == KVM_MR_MOVE)
+		old = kvm_move_memslot(kvm, old, new, working);
+	else if (change == KVM_MR_FLAGS_ONLY)
+		kvm_update_flags_memslot(kvm, old, new, working);
	else
-		mutex_unlock(&kvm->slots_arch_lock);
-	kvfree(slots);
-	return r;
+		BUG();
+
+	/*
+	 * No need to refresh new->arch, changes after dropping slots_arch_lock
+	 * will directly hit the final, active memslot.  Architectures are
+	 * responsible for knowing that new->arch may be stale.
+	 */
+	kvm_commit_memory_region(kvm, old, new, change);
+
+	return 0;
 }

 /*
@@ -1848,7 +1858,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		new.id = id;
 		new.as_id = as_id;

-		return kvm_set_memslot(kvm, &new, KVM_MR_DELETE);
+		return kvm_set_memslot(kvm, old, &new, KVM_MR_DELETE);
 	}

 	new.as_id = as_id;
@@ -1885,8 +1895,10 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	}

 	if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
+		int bkt;
+
 		/* Check for overlaps */
-		kvm_for_each_memslot(tmp, __kvm_memslots(kvm, as_id)) {
+		kvm_for_each_memslot(tmp, bkt, __kvm_memslots(kvm, as_id)) {
 			if (tmp->id == id)
 				continue;
 			if (!((new.base_gfn + new.npages <= tmp->base_gfn) ||
@@ -1895,7 +1907,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		}
 	}

-	return kvm_set_memslot(kvm, &new, change);
+	return kvm_set_memslot(kvm, old, &new, change);
 }
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);

@@ -2200,21 +2212,30 @@ EXPORT_SYMBOL_GPL(gfn_to_memslot);
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
 	struct kvm_memslots *slots = kvm_vcpu_memslots(vcpu);
+	u64 gen = slots->generation;
 	struct kvm_memory_slot *slot;
-	int slot_index;

-	slot = try_get_memslot(slots, vcpu->last_used_slot, gfn);
+	/*
+	 * This also protects against using a memslot from a different address space,
+	 * since different address spaces have different generation numbers.
+	 */
+	if (unlikely(gen != vcpu->last_used_slot_gen)) {
+		vcpu->last_used_slot = NULL;
+		vcpu->last_used_slot_gen = gen;
+	}
+
+	slot = try_get_memslot(vcpu->last_used_slot, gfn);
 	if (slot)
 		return slot;

 	/*
 	 * Fall back to searching all memslots. We purposely use
 	 * search_memslots() instead of __gfn_to_memslot() to avoid
-	 * thrashing the VM-wide last_used_index in kvm_memslots.
+	 * thrashing the VM-wide last_used_slot in kvm_memslots.
	 */
-	slot = search_memslots(slots, gfn, &slot_index, false);
+	slot = search_memslots(slots, gfn, false);
 	if (slot) {
-		vcpu->last_used_slot = slot_index;
+		vcpu->last_used_slot = slot;
 		return slot;
 	}
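For context on the hunk above: the vCPU-local cache now stores a slot pointer plus the generation it was valid for, and try_get_memslot() validates the cached pointer against the requested gfn. The real helper lives in include/linux/kvm_host.h; the sketch below is only an assumed-equivalent illustration of its semantics, not the series' code (hence the _sketch suffix).

/*
 * Illustrative sketch (not from the series): return @slot iff it is
 * non-NULL and contains @gfn, mirroring the assumed semantics of the
 * try_get_memslot() helper used above.
 */
static inline struct kvm_memory_slot *
try_get_memslot_sketch(struct kvm_memory_slot *slot, gfn_t gfn)
{
	if (!slot)
		return NULL;

	/* Memslots cover the half-open gfn range [base_gfn, base_gfn + npages). */
	if (gfn >= slot->base_gfn && gfn < slot->base_gfn + slot->npages)
		return slot;

	return NULL;
}

A stale pointer is never dereferenced across a memslot update because the generation check above clears vcpu->last_used_slot before the lookup.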
From patchwork Thu Nov 4 00:25:28 2021
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:28 +0000
Message-Id: <20211104002531.1176691-28-seanjc@google.com>
Subject: [PATCH v5.5 27/30] KVM: Optimize gfn lookup in kvm_zap_gfn_range()

From: Maciej S. Szmigiero

Introduce a memslots gfn upper bound operation and use it to optimize kvm_zap_gfn_range(). This way this handler can do a quick lookup for intersecting gfns and won't have to do a linear scan of the whole memslot set.

Signed-off-by: Maciej S.
Szmigiero Not-signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 11 +++++-- include/linux/kvm_host.h | 69 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 78 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 09ff0ccaa203..14e41278c069 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5714,16 +5714,20 @@ static bool __kvm_zap_rmaps(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) { const struct kvm_memory_slot *memslot; struct kvm_memslots *slots; + struct rb_node *node; bool flush = false; gfn_t start, end; - int i, bkt; + int i, idx; if (!kvm_memslots_have_rmaps(kvm)) return flush; for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { slots = __kvm_memslots(kvm, i); - kvm_for_each_memslot(memslot, bkt, slots) { + idx = slots->node_idx; + + kvm_for_each_memslot_in_gfn_range(node, slots, gfn_start, gfn_end) { + memslot = container_of(node, struct kvm_memory_slot, gfn_node[idx]); start = max(gfn_start, memslot->base_gfn); end = min(gfn_end, memslot->base_gfn + memslot->npages); if (start >= end) @@ -5747,6 +5751,9 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) bool flush; int i; + if (WARN_ON_ONCE(gfn_end <= gfn_start)) + return; + write_lock(&kvm->mmu_lock); kvm_inc_notifier_count(kvm, gfn_start, gfn_end); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 6888f3c2e04b..810a5b958697 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -832,6 +832,75 @@ struct kvm_memory_slot *id_to_memslot(struct kvm_memslots *slots, int id) return NULL; } +static inline +struct rb_node *kvm_memslots_gfn_upper_bound(struct kvm_memslots *slots, gfn_t gfn) +{ + int idx = slots->node_idx; + struct rb_node *node, *result = NULL; + + for (node = slots->gfn_tree.rb_node; node; ) { + struct kvm_memory_slot *slot; + + slot = container_of(node, struct kvm_memory_slot, gfn_node[idx]); + if (gfn < slot->base_gfn) { + result = node; + node = node->rb_left; + } else + node = node->rb_right; + } + + return result; +} + +static inline +struct rb_node *kvm_for_each_in_gfn_first(struct kvm_memslots *slots, gfn_t start) +{ + struct rb_node *node; + + /* + * Find the slot with the lowest gfn that can possibly intersect with + * the range, so we'll ideally have slot start <= range start + */ + node = kvm_memslots_gfn_upper_bound(slots, start); + if (node) { + struct rb_node *pnode; + + /* + * A NULL previous node means that the very first slot + * already has a higher start gfn. + * In this case slot start > range start. 
+		 */
+		pnode = rb_prev(node);
+		if (pnode)
+			node = pnode;
+	} else {
+		/* a NULL node below means no slots */
+		node = rb_last(&slots->gfn_tree);
+	}
+
+	return node;
+}
+
+static inline
+bool kvm_for_each_in_gfn_no_more(struct kvm_memslots *slots, struct rb_node *node, gfn_t end)
+{
+	struct kvm_memory_slot *memslot;
+
+	memslot = container_of(node, struct kvm_memory_slot, gfn_node[slots->node_idx]);
+
+	/*
+	 * If this slot starts beyond or at the end of the range so does
+	 * every next one
+	 */
+	return memslot->base_gfn >= end;
+}
+
+/* Iterate over each memslot *possibly* intersecting [start, end) range */
+#define kvm_for_each_memslot_in_gfn_range(node, slots, start, end)	\
+	for (node = kvm_for_each_in_gfn_first(slots, start);		\
+	     node && !kvm_for_each_in_gfn_no_more(slots, node, end);	\
+	     node = rb_next(node))					\
+
 /*
  * KVM_SET_USER_MEMORY_REGION ioctl allows the following operations:
  * - create a new memory slot
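To see why the iterator starts at the gfn upper bound and then steps back one node, here is a small standalone userspace model of the same technique; the slot layout is hypothetical and a sorted array stands in for the kernel's rb-tree:

#include <stdio.h>

struct slot { unsigned long base, npages; };

/* Sorted by base gfn; a hypothetical stand-in for the gfn tree. */
static struct slot slots[] = { {0x000, 0x100}, {0x400, 0x200}, {0x800, 0x100} };
#define NSLOTS (sizeof(slots) / sizeof(slots[0]))

/* Index of the first slot with base > gfn (the "upper bound"), or NSLOTS. */
static size_t upper_bound(unsigned long gfn)
{
	size_t lo = 0, hi = NSLOTS;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (gfn < slots[mid].base)
			hi = mid;
		else
			lo = mid + 1;
	}
	return lo;
}

int main(void)
{
	unsigned long start = 0x450;
	size_t i = upper_bound(start);

	/* Step back one slot in case the previous slot spans @start. */
	if (i > 0)
		i--;
	printf("iteration starts at slot base 0x%lx\n", slots[i].base);	/* 0x400 */
	return 0;
}

The step back is what catches a slot that begins before the range but extends into it; kvm_for_each_in_gfn_no_more() then terminates the walk once a slot starts at or past the range's end.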
From patchwork Thu Nov 4 00:25:29 2021
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:29 +0000
Message-Id: <20211104002531.1176691-29-seanjc@google.com>
Subject: [PATCH v5.5 28/30] KVM: Optimize overlapping memslots check

From: Maciej S. Szmigiero

Do a quick lookup for possibly overlapping gfns when creating or moving a memslot instead of performing a linear scan of the whole memslot set.

Signed-off-by: Maciej S. Szmigiero
[sean: tweaked params to avoid churn in future cleanup]
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 46 +++++++++++++++++++++++++++++++--------------
 1 file changed, 32 insertions(+), 14 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d095e01838bf..d22e40225703 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1800,6 +1800,29 @@ static int kvm_set_memslot(struct kvm *kvm,
 	return 0;
 }

+static bool kvm_check_memslot_overlap(struct kvm_memslots *slots, int id,
+				      gfn_t start, gfn_t end)
+{
+	int idx = slots->node_idx;
+	struct rb_node *node;
+
+	kvm_for_each_memslot_in_gfn_range(node, slots, start, end) {
+		struct kvm_memory_slot *cslot;
+		gfn_t cend;
+
+		cslot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
+		cend = cslot->base_gfn + cslot->npages;
+		if (cslot->id == id)
+			continue;
+
+		/* kvm_for_each_in_gfn_no_more() guarantees that cslot->base_gfn < end */
+		if (cend > start)
+			return true;
+	}
+
+	return false;
+}
+
 /*
  * Allocate some memory and give it an address in the guest physical address
  * space.
@@ -1811,8 +1834,9 @@ static int kvm_set_memslot(struct kvm *kvm,
 int __kvm_set_memory_region(struct kvm *kvm,
 			    const struct kvm_userspace_memory_region *mem)
 {
-	struct kvm_memory_slot *old, *tmp;
+	struct kvm_memory_slot *old;
 	struct kvm_memory_slot new;
+	struct kvm_memslots *slots;
 	enum kvm_mr_change change;
 	int as_id, id;
 	int r;
@@ -1841,11 +1865,13 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
 		return -EINVAL;

+	slots = __kvm_memslots(kvm, as_id);
+
 	/*
 	 * Note, the old memslot (and the pointer itself!) may be invalidated
 	 * and/or destroyed by kvm_set_memslot().
 	 */
-	old = id_to_memslot(__kvm_memslots(kvm, as_id), id);
+	old = id_to_memslot(slots, id);

 	if (!mem->memory_size) {
 		if (!old || !old->npages)
@@ -1894,18 +1920,10 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		return 0;
 	}

-	if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
-		int bkt;
-
-		/* Check for overlaps */
-		kvm_for_each_memslot(tmp, bkt, __kvm_memslots(kvm, as_id)) {
-			if (tmp->id == id)
-				continue;
-			if (!((new.base_gfn + new.npages <= tmp->base_gfn) ||
-			      (new.base_gfn >= tmp->base_gfn + tmp->npages)))
-				return -EEXIST;
-		}
-	}
+	if ((change == KVM_MR_CREATE || change == KVM_MR_MOVE) &&
+	    kvm_check_memslot_overlap(slots, id, new.base_gfn,
+				      new.base_gfn + new.npages))
+		return -EEXIST;

 	return kvm_set_memslot(kvm, old, &new, change);
 }
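The predicate kvm_check_memslot_overlap() open-codes is the usual half-open interval overlap test; because the iterator already guarantees cslot->base_gfn < end, only the cend > start half needs an explicit check. A tiny self-contained illustration of the full test, with hypothetical values rather than kernel code:

#include <assert.h>
#include <stdbool.h>

typedef unsigned long gfn_t;

/* [a_start, a_end) and [b_start, b_end) overlap iff each starts before the other ends. */
static bool ranges_overlap(gfn_t a_start, gfn_t a_end, gfn_t b_start, gfn_t b_end)
{
	return a_start < b_end && b_start < a_end;
}

int main(void)
{
	assert(ranges_overlap(0x100, 0x200, 0x1ff, 0x300));	/* one page shared */
	assert(!ranges_overlap(0x100, 0x200, 0x200, 0x300));	/* adjacent, no overlap */
	return 0;
}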
From patchwork Thu Nov 4 00:25:30 2021
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:30 +0000
Message-Id: <20211104002531.1176691-30-seanjc@google.com>
Subject: [PATCH v5.5 29/30] KVM: Wait 'til the bitter end to initialize the "new" memslot
Initialize the "new" memslot in the !DELETE path only after the various sanity checks have passed. This will allow a future commit to allocate @new dynamically without having to copy a memslot, and without having to deal with freeing @new in error paths and in the "nothing to change" path that's hiding in the sanity checks.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Maciej S. Szmigiero
---
 virt/kvm/kvm_main.c | 37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d22e40225703..5cc0b50faa8c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1838,6 +1838,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	struct kvm_memory_slot new;
 	struct kvm_memslots *slots;
 	enum kvm_mr_change change;
+	unsigned long npages;
+	gfn_t base_gfn;
 	int as_id, id;
 	int r;
@@ -1864,6 +1866,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		return -EINVAL;
 	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
 		return -EINVAL;
+	if ((mem->memory_size >> PAGE_SHIFT) > KVM_MEM_MAX_NR_PAGES)
+		return -EINVAL;

 	slots = __kvm_memslots(kvm, as_id);
@@ -1887,15 +1891,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		return kvm_set_memslot(kvm, old, &new, KVM_MR_DELETE);
 	}

-	new.as_id = as_id;
-	new.id = id;
-	new.base_gfn = mem->guest_phys_addr >> PAGE_SHIFT;
-	new.npages = mem->memory_size >> PAGE_SHIFT;
-	new.flags = mem->flags;
-	new.userspace_addr = mem->userspace_addr;
-
-	if (new.npages > KVM_MEM_MAX_NR_PAGES)
-		return -EINVAL;
+	base_gfn = (mem->guest_phys_addr >> PAGE_SHIFT);
+	npages = (mem->memory_size >> PAGE_SHIFT);

 	if (!old || !old->npages) {
 		change = KVM_MR_CREATE;
@@ -1904,27 +1901,33 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		 * To simplify KVM internals, the total number of pages across
 		 * all memslots must fit in an unsigned long.
 		 */
-		if ((kvm->nr_memslot_pages + new.npages) < kvm->nr_memslot_pages)
+		if ((kvm->nr_memslot_pages + npages) < kvm->nr_memslot_pages)
 			return -EINVAL;
 	} else { /* Modify an existing slot.
 */
-		if ((new.userspace_addr != old->userspace_addr) ||
-		    (new.npages != old->npages) ||
-		    ((new.flags ^ old->flags) & KVM_MEM_READONLY))
+		if ((mem->userspace_addr != old->userspace_addr) ||
+		    (npages != old->npages) ||
+		    ((mem->flags ^ old->flags) & KVM_MEM_READONLY))
 			return -EINVAL;

-		if (new.base_gfn != old->base_gfn)
+		if (base_gfn != old->base_gfn)
 			change = KVM_MR_MOVE;
-		else if (new.flags != old->flags)
+		else if (mem->flags != old->flags)
 			change = KVM_MR_FLAGS_ONLY;
 		else /* Nothing to change. */
 			return 0;
 	}

 	if ((change == KVM_MR_CREATE || change == KVM_MR_MOVE) &&
-	    kvm_check_memslot_overlap(slots, id, new.base_gfn,
-				      new.base_gfn + new.npages))
+	    kvm_check_memslot_overlap(slots, id, base_gfn, base_gfn + npages))
 		return -EEXIST;

+	new.as_id = as_id;
+	new.id = id;
+	new.base_gfn = base_gfn;
+	new.npages = npages;
+	new.flags = mem->flags;
+	new.userspace_addr = mem->userspace_addr;
+
 	return kvm_set_memslot(kvm, old, &new, change);
 }
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);
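One subtlety preserved by this shuffle is the nr_memslot_pages sanity check: with unsigned arithmetic, a sum that wraps around compares less than either operand, so "(total + npages) < total" detects overflow. A standalone sketch of the idiom (not kernel code, just the arithmetic):

#include <assert.h>
#include <limits.h>

/* Returns 1 if total + npages would wrap an unsigned long. */
static int pages_would_overflow(unsigned long total, unsigned long npages)
{
	return total + npages < total;
}

int main(void)
{
	assert(!pages_would_overflow(100, 200));
	assert(pages_would_overflow(ULONG_MAX - 1, 2));
	return 0;
}

The same idiom appears earlier in __kvm_set_memory_region() for the guest_phys_addr + memory_size wraparound check.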
From patchwork Thu Nov 4 00:25:31 2021
From: Sean Christopherson
Date: Thu, 4 Nov 2021 00:25:31 +0000
Message-Id: <20211104002531.1176691-31-seanjc@google.com>
Subject: [PATCH v5.5 30/30] KVM: Dynamically allocate "new" memslots from the get-go

Allocate the "new" memslot for !DELETE memslot updates straight away instead of filling an intermediate on-stack object and forcing kvm_set_memslot() to juggle the allocation and do weird things like reuse the old memslot object in MOVE.

In the MOVE case, this results in an "extra" memslot allocation due to allocating both the "new" slot and the "invalid" slot, but that's a temporary and not-huge allocation, and MOVE is a relatively rare memslot operation.
Regarding MOVE, drop the open-coded management of the gfn tree with a call to kvm_replace_memslot(), which already handles the case where new->base_gfn != old->base_gfn. This is made possible by virtue of not having to copy the "new" memslot data after erasing the old memslot from the gfn tree. Using kvm_replace_memslot(), and more specifically not reusing the old memslot, means the MOVE case now does hva tree and hash list updates, but that's a small price to pay for simplifying the code and making MOVE align with all the other flavors of updates. The "extra" updates are firmly in the noise from a performance perspective, e.g. the "move (in)active area" selftests show a (very, very) slight improvement.

Signed-off-by: Sean Christopherson
Reviewed-by: Maciej S. Szmigiero
---
 virt/kvm/kvm_main.c | 178 +++++++++++++++++++-------------------------
 1 file changed, 77 insertions(+), 101 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5cc0b50faa8c..b413082c081d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1488,23 +1488,25 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
 	 * new and KVM isn't using a ring buffer, allocate and initialize a
 	 * new bitmap.
 	 */
-	if (!(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
-		new->dirty_bitmap = NULL;
-	else if (old->dirty_bitmap)
-		new->dirty_bitmap = old->dirty_bitmap;
-	else if (!kvm->dirty_ring_size) {
-		r = kvm_alloc_dirty_bitmap(new);
-		if (r)
-			return r;
+	if (change != KVM_MR_DELETE) {
+		if (!(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
+			new->dirty_bitmap = NULL;
+		else if (old && old->dirty_bitmap)
+			new->dirty_bitmap = old->dirty_bitmap;
+		else if (!kvm->dirty_ring_size) {
+			r = kvm_alloc_dirty_bitmap(new);
+			if (r)
+				return r;

-		if (kvm_dirty_log_manual_protect_and_init_set(kvm))
-			bitmap_set(new->dirty_bitmap, 0, new->npages);
+			if (kvm_dirty_log_manual_protect_and_init_set(kvm))
+				bitmap_set(new->dirty_bitmap, 0, new->npages);
+		}
 	}

 	r = kvm_arch_prepare_memory_region(kvm, old, new, change);

 	/* Free the bitmap on failure if it was allocated above. */
-	if (r && new->dirty_bitmap && !old->dirty_bitmap)
+	if (r && new && new->dirty_bitmap && old && !old->dirty_bitmap)
 		kvm_destroy_dirty_bitmap(new);

 	return r;
@@ -1591,16 +1593,16 @@ static void kvm_copy_memslot(struct kvm_memory_slot *dest,

 static void kvm_invalidate_memslot(struct kvm *kvm,
 				   struct kvm_memory_slot *old,
-				   struct kvm_memory_slot *working_slot)
+				   struct kvm_memory_slot *invalid_slot)
 {
 	/*
 	 * Mark the current slot INVALID.  As with all memslot modifications,
 	 * this must be done on an unreachable slot to avoid modifying the
 	 * current slot in the active tree.
 	 */
-	kvm_copy_memslot(working_slot, old);
-	working_slot->flags |= KVM_MEMSLOT_INVALID;
-	kvm_replace_memslot(kvm, old, working_slot);
+	kvm_copy_memslot(invalid_slot, old);
+	invalid_slot->flags |= KVM_MEMSLOT_INVALID;
+	kvm_replace_memslot(kvm, old, invalid_slot);

 	/*
 	 * Activate the slot that is now marked INVALID, but don't propagate
@@ -1627,20 +1629,15 @@ static void kvm_invalidate_memslot(struct kvm *kvm,
 	 * above.  Writers are required to retrieve memslots *after* acquiring
 	 * slots_arch_lock, thus the active slot's data is guaranteed to be fresh.
 	 */
-	old->arch = working_slot->arch;
+	old->arch = invalid_slot->arch;
 }

 static void kvm_create_memslot(struct kvm *kvm,
-			       const struct kvm_memory_slot *new,
-			       struct kvm_memory_slot *working)
+			       struct kvm_memory_slot *new)
 {
-	/*
-	 * Add the new memslot to the inactive set as a copy of the
-	 * new memslot data provided by userspace.
- */ - kvm_copy_memslot(working, new); - kvm_replace_memslot(kvm, NULL, working); - kvm_activate_memslot(kvm, NULL, working); + /* Add the new memslot to the inactive set and activate. */ + kvm_replace_memslot(kvm, NULL, new); + kvm_activate_memslot(kvm, NULL, new); } static void kvm_delete_memslot(struct kvm *kvm, @@ -1649,65 +1646,36 @@ static void kvm_delete_memslot(struct kvm *kvm, { /* * Remove the old memslot (in the inactive memslots) by passing NULL as - * the "new" slot. + * the "new" slot, and for the invalid version in the active slots. */ kvm_replace_memslot(kvm, old, NULL); - - /* And do the same for the invalid version in the active slot. */ kvm_activate_memslot(kvm, invalid_slot, NULL); - - /* Free the invalid slot, the caller will clean up the old slot. */ - kfree(invalid_slot); } -static struct kvm_memory_slot *kvm_move_memslot(struct kvm *kvm, - struct kvm_memory_slot *old, - const struct kvm_memory_slot *new, - struct kvm_memory_slot *invalid_slot) +static void kvm_move_memslot(struct kvm *kvm, + struct kvm_memory_slot *old, + struct kvm_memory_slot *new, + struct kvm_memory_slot *invalid_slot) { - struct kvm_memslots *slots = kvm_get_inactive_memslots(kvm, old->as_id); - /* - * The memslot's gfn is changing, remove it from the inactive tree, it - * will be re-added with its updated gfn. Because its range is - * changing, an in-place replace is not possible. + * Replace the old memslot in the inactive slots, and then swap slots + * and replace the current INVALID with the new as well. */ - kvm_erase_gfn_node(slots, old); - - /* - * The old slot is now fully disconnected, reuse its memory for the - * persistent copy of "new". - */ - kvm_copy_memslot(old, new); - - /* Re-add to the gfn tree with the updated gfn */ - kvm_insert_gfn_node(slots, old); - - /* Replace the current INVALID slot with the updated memslot. */ - kvm_activate_memslot(kvm, invalid_slot, old); - - /* - * Clear the INVALID flag so that the invalid_slot is now a perfect - * copy of the old slot. Return it for cleanup in the caller. - */ - WARN_ON_ONCE(!(invalid_slot->flags & KVM_MEMSLOT_INVALID)); - invalid_slot->flags &= ~KVM_MEMSLOT_INVALID; - return invalid_slot; + kvm_replace_memslot(kvm, old, new); + kvm_activate_memslot(kvm, invalid_slot, new); } static void kvm_update_flags_memslot(struct kvm *kvm, struct kvm_memory_slot *old, - const struct kvm_memory_slot *new, - struct kvm_memory_slot *working_slot) + struct kvm_memory_slot *new) { /* * Similar to the MOVE case, but the slot doesn't need to be zapped as * an intermediate step. Instead, the old memslot is simply replaced * with a new, updated copy in both memslot sets. */ - kvm_copy_memslot(working_slot, new); - kvm_replace_memslot(kvm, old, working_slot); - kvm_activate_memslot(kvm, old, working_slot); + kvm_replace_memslot(kvm, old, new); + kvm_activate_memslot(kvm, old, new); } static int kvm_set_memslot(struct kvm *kvm, @@ -1715,19 +1683,9 @@ static int kvm_set_memslot(struct kvm *kvm, struct kvm_memory_slot *new, enum kvm_mr_change change) { - struct kvm_memory_slot *working; + struct kvm_memory_slot *invalid_slot; int r; - /* - * Modifications are done on an unreachable slot. Any changes are then - * (eventually) propagated to both the active and inactive slots. This - * allocation would ideally be on-demand (in helpers), but is done here - * to avoid having to handle failure after kvm_prepare_memory_region(). 
-	 */
-	working = kzalloc(sizeof(*working), GFP_KERNEL_ACCOUNT);
-	if (!working)
-		return -ENOMEM;
-
	/*
 	 * Released in kvm_swap_active_memslots.
 	 *
@@ -1752,9 +1710,19 @@ static int kvm_set_memslot(struct kvm *kvm,
 	 * (and without a lock), a window would exist between effecting the
 	 * delete/move and committing the changes in arch code where KVM or a
 	 * guest could access a non-existent memslot.
+	 *
+	 * Modifications are done on a temporary, unreachable slot.  The old
+	 * slot needs to be preserved in case a later step fails and the
+	 * invalidation needs to be reverted.
 	 */
-	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
-		kvm_invalidate_memslot(kvm, old, working);
+	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
+		invalid_slot = kzalloc(sizeof(*invalid_slot), GFP_KERNEL_ACCOUNT);
+		if (!invalid_slot) {
+			mutex_unlock(&kvm->slots_arch_lock);
+			return -ENOMEM;
+		}
+		kvm_invalidate_memslot(kvm, old, invalid_slot);
+	}

 	r = kvm_prepare_memory_region(kvm, old, new, change);
 	if (r) {
@@ -1764,11 +1732,12 @@ static int kvm_set_memslot(struct kvm *kvm,
 		 * in the inactive slots.  Changing the active memslots also
 		 * releases slots_arch_lock.
 		 */
-		if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
-			kvm_activate_memslot(kvm, working, old);
-		else
+		if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
+			kvm_activate_memslot(kvm, invalid_slot, old);
+			kfree(invalid_slot);
+		} else {
 			mutex_unlock(&kvm->slots_arch_lock);
-		kfree(working);
+		}
 		return r;
 	}

@@ -1780,16 +1749,20 @@ static int kvm_set_memslot(struct kvm *kvm,
 	 * old slot is detached but otherwise preserved.
 	 */
 	if (change == KVM_MR_CREATE)
-		kvm_create_memslot(kvm, new, working);
+		kvm_create_memslot(kvm, new);
 	else if (change == KVM_MR_DELETE)
-		kvm_delete_memslot(kvm, old, working);
+		kvm_delete_memslot(kvm, old, invalid_slot);
 	else if (change == KVM_MR_MOVE)
-		old = kvm_move_memslot(kvm, old, new, working);
+		kvm_move_memslot(kvm, old, new, invalid_slot);
 	else if (change == KVM_MR_FLAGS_ONLY)
-		kvm_update_flags_memslot(kvm, old, new, working);
+		kvm_update_flags_memslot(kvm, old, new);
 	else
 		BUG();

+	/* Free the temporary INVALID slot used for DELETE and MOVE. */
+	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
+		kfree(invalid_slot);
+
 	/*
 	 * No need to refresh new->arch, changes after dropping slots_arch_lock
 	 * will directly hit the final, active memslot.  Architectures are
@@ -1834,8 +1807,7 @@ static bool kvm_check_memslot_overlap(struct kvm_memslots *slots, int id,
 int __kvm_set_memory_region(struct kvm *kvm,
 			    const struct kvm_userspace_memory_region *mem)
 {
-	struct kvm_memory_slot *old;
-	struct kvm_memory_slot new;
+	struct kvm_memory_slot *old, *new;
 	struct kvm_memslots *slots;
 	enum kvm_mr_change change;
 	unsigned long npages;
@@ -1884,11 +1856,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		if (WARN_ON_ONCE(kvm->nr_memslot_pages < old->npages))
 			return -EIO;

-		memset(&new, 0, sizeof(new));
-		new.id = id;
-		new.as_id = as_id;
-
-		return kvm_set_memslot(kvm, old, &new, KVM_MR_DELETE);
+		return kvm_set_memslot(kvm, old, NULL, KVM_MR_DELETE);
 	}

 	base_gfn = (mem->guest_phys_addr >> PAGE_SHIFT);
@@ -1921,14 +1889,22 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	    kvm_check_memslot_overlap(slots, id, base_gfn, base_gfn + npages))
 		return -EEXIST;

-	new.as_id = as_id;
-	new.id = id;
-	new.base_gfn = base_gfn;
-	new.npages = npages;
-	new.flags = mem->flags;
-	new.userspace_addr = mem->userspace_addr;
+	/* Allocate a slot that will persist in the memslot.
*/ + new = kzalloc(sizeof(*new), GFP_KERNEL_ACCOUNT); + if (!new) + return -ENOMEM; - return kvm_set_memslot(kvm, old, &new, change); + new->as_id = as_id; + new->id = id; + new->base_gfn = base_gfn; + new->npages = npages; + new->flags = mem->flags; + new->userspace_addr = mem->userspace_addr; + + r = kvm_set_memslot(kvm, old, new, change); + if (r) + kfree(new); + return r; } EXPORT_SYMBOL_GPL(__kvm_set_memory_region);
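Stepping back from the series: every kvm_set_memslot() path above is driven from userspace through the KVM_SET_USER_MEMORY_REGION ioctl. For reference, a minimal sketch of a CREATE followed by a DELETE (memory_size = 0); error handling and vCPU setup are omitted, and the slot number, guest physical address, and sizes are arbitrary:

#include <fcntl.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	void *mem = mmap(NULL, 0x10000, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.guest_phys_addr = 0x100000,	/* exercises KVM_MR_CREATE */
		.memory_size = 0x10000,
		.userspace_addr = (unsigned long)mem,
	};

	ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

	region.memory_size = 0;			/* exercises KVM_MR_DELETE */
	ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);
	return 0;
}

Repeating the ioctl with a different guest_phys_addr and a nonzero memory_size would instead take the KVM_MR_MOVE path, and changing only the flags field takes KVM_MR_FLAGS_ONLY.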