From patchwork Thu Sep 21 20:33:18 2023
From: Sean Christopherson <seanjc@google.com>
Date: Thu, 21 Sep 2023 13:33:18 -0700
Subject: [PATCH 01/13] KVM: Assert that mmu_invalidate_in_progress *never* goes negative
Message-ID: <20230921203331.3746712-2-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Move the assertion on the in-progress invalidation count from the
primary MMU's notifier path to KVM's common notification path, i.e.
assert that the count doesn't go negative even when the invalidation is
coming from KVM itself.  Opportunistically convert the assertion to a
KVM_BUG_ON(), i.e. kill only the affected VM, not the entire kernel.
A corrupted count is fatal to the VM, e.g. the non-zero (negative) count
will cause mmu_invalidate_retry() to block any and all attempts to
install new mappings.  But it's far from guaranteed that an end() without
a start() is fatal or even problematic to anything other than the target
VM, e.g. the underlying bug could simply be a duplicate call to end().
And it's much more likely that a missed invalidation, i.e. a potential
use-after-free, would manifest as no notification whatsoever, not an
end() without a start().

Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a83dfef1316e..30708e460568 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -870,6 +870,7 @@ void kvm_mmu_invalidate_end(struct kvm *kvm)
 	 * in conjunction with the smp_rmb in mmu_invalidate_retry().
 	 */
 	kvm->mmu_invalidate_in_progress--;
+	KVM_BUG_ON(kvm->mmu_invalidate_in_progress < 0, kvm);
 
 	/*
 	 * Assert that at least one range must be added between start() and
@@ -906,8 +907,6 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
 	 */
 	if (wake)
 		rcuwait_wake_up(&kvm->mn_memslots_update_rcuwait);
-
-	BUG_ON(kvm->mmu_invalidate_in_progress < 0);
 }
 
 static int kvm_mmu_notifier_clear_flush_young(struct mmu_notifier *mn,

From patchwork Thu Sep 21 20:33:19 2023
From: Sean Christopherson <seanjc@google.com>
Date: Thu, 21 Sep 2023 13:33:19 -0700
Subject: [PATCH 02/13] KVM: Actually truncate the inode when doing PUNCH_HOLE for guest_memfd
Message-ID: <20230921203331.3746712-3-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Restore the call to truncate_inode_pages_range() in guest_memfd's
handling of PUNCH_HOLE that was unintentionally removed in a rebase
gone bad.
Reported-by: Michael Roth
Fixes: 1d46f95498c5 ("KVM: Add KVM_CREATE_GUEST_MEMFD ioctl() for guest-specific backing memory")
Signed-off-by: Sean Christopherson
---
 virt/kvm/guest_mem.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c
index a819367434e9..3c9e83a596fe 100644
--- a/virt/kvm/guest_mem.c
+++ b/virt/kvm/guest_mem.c
@@ -140,10 +140,13 @@ static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 	 */
 	filemap_invalidate_lock(inode->i_mapping);
 
-	list_for_each_entry(gmem, gmem_list, entry) {
+	list_for_each_entry(gmem, gmem_list, entry)
 		kvm_gmem_invalidate_begin(gmem, start, end);
+
+	truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1);
+
+	list_for_each_entry(gmem, gmem_list, entry)
 		kvm_gmem_invalidate_end(gmem, start, end);
-	}
 
 	filemap_invalidate_unlock(inode->i_mapping);
 

From patchwork Thu Sep 21 20:33:20 2023
From: Sean Christopherson <seanjc@google.com>
Date: Thu, 21 Sep 2023 13:33:20 -0700
Subject: [PATCH 03/13] KVM: WARN if *any* MMU invalidation sequence doesn't add a range
Message-ID: <20230921203331.3746712-4-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Tweak the assertion in kvm_mmu_invalidate_end() to unconditionally
require a range to be added between start() and end().  Asserting if and
only if kvm->mmu_invalidate_in_progress is non-zero makes the assertion
all but useless, as it would fire only when there are multiple
invalidations in flight, which is uncommon, and it would also yield a
false negative if one or more sequences, but not all, added a range.

Reported-by: Binbin Wu
Fixes: 145725d1542a ("KVM: Use gfn instead of hva for mmu_notifier_retry")
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 30708e460568..54480655bcce 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -873,11 +873,10 @@ void kvm_mmu_invalidate_end(struct kvm *kvm)
 	KVM_BUG_ON(kvm->mmu_invalidate_in_progress < 0, kvm);
 
 	/*
-	 * Assert that at least one range must be added between start() and
-	 * end().  Not adding a range isn't fatal, but it is a KVM bug.
+	 * Assert that at least one range was added between start() and end().
+	 * Not adding a range isn't fatal, but it is a KVM bug.
 	 */
-	WARN_ON_ONCE(kvm->mmu_invalidate_in_progress &&
-		     kvm->mmu_invalidate_range_start == INVALID_GPA);
+	WARN_ON_ONCE(kvm->mmu_invalidate_range_start == INVALID_GPA);
 }
 
 static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,

From patchwork Thu Sep 21 20:33:21 2023
From: Sean Christopherson <seanjc@google.com>
Date: Thu, 21 Sep 2023 13:33:21 -0700
Subject: [PATCH 04/13] KVM: WARN if there are dangling MMU invalidations at VM destruction
Message-ID: <20230921203331.3746712-5-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Add an assertion that there are no in-progress MMU invalidations when a
VM is being destroyed, with the exception of the scenario where KVM
unregisters its MMU notifier between an .invalidate_range_start() call
and the corresponding .invalidate_range_end().

KVM can't detect unpaired calls from the mmu_notifier due to the above
exception waiver, but the assertion can detect KVM bugs, e.g. such as
the bug that *almost* escaped initial guest_memfd development.

Link: https://lore.kernel.org/all/e397d30c-c6af-e68f-d18e-b4e3739c5389@linux.intel.com
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 54480655bcce..277afeedd670 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1381,9 +1381,16 @@ static void kvm_destroy_vm(struct kvm *kvm)
 	 * No threads can be waiting in kvm_swap_active_memslots() as the
 	 * last reference on KVM has been dropped, but freeing
 	 * memslots would deadlock without this manual intervention.
+	 *
+	 * If the count isn't unbalanced, i.e. KVM did NOT unregister between
+	 * a start() and end(), then there shouldn't be any in-progress
+	 * invalidations.
 	 */
 	WARN_ON(rcuwait_active(&kvm->mn_memslots_update_rcuwait));
-	kvm->mn_active_invalidate_count = 0;
+	if (kvm->mn_active_invalidate_count)
+		kvm->mn_active_invalidate_count = 0;
+	else
+		WARN_ON(kvm->mmu_invalidate_in_progress);
 #else
 	kvm_flush_shadow_all(kvm);
 #endif

From patchwork Thu Sep 21 20:33:22 2023
From: Sean Christopherson <seanjc@google.com>
Date: Thu, 21 Sep 2023 13:33:22 -0700
Subject: [PATCH 05/13] KVM: Fix MMU invalidation bookkeeping in guest_memfd
Message-ID: <20230921203331.3746712-6-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Acquire mmu_lock and do invalidate_{begin,end}() if and only if there is
at least one memslot that overlaps the to-be-invalidated range.  This
fixes a bug where KVM would leave a dangling in-progress invalidation:
the begin() call was unconditional, but the end() was not (it was
performed only if there was an overlapping memslot).

Reported-by: Binbin Wu
Fixes: 1d46f95498c5 ("KVM: Add KVM_CREATE_GUEST_MEMFD ioctl() for guest-specific backing memory")
Signed-off-by: Sean Christopherson
---
 virt/kvm/guest_mem.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c
index 3c9e83a596fe..68528e9cddd7 100644
--- a/virt/kvm/guest_mem.c
+++ b/virt/kvm/guest_mem.c
@@ -88,14 +88,10 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
 static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
 				      pgoff_t end)
 {
+	bool flush = false, found_memslot = false;
 	struct kvm_memory_slot *slot;
 	struct kvm *kvm = gmem->kvm;
 	unsigned long index;
-	bool flush = false;
-
-	KVM_MMU_LOCK(kvm);
-
-	kvm_mmu_invalidate_begin(kvm);
 
 	xa_for_each_range(&gmem->bindings, index, slot, start, end - 1) {
 		pgoff_t pgoff = slot->gmem.pgoff;
@@ -107,13 +103,21 @@ static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
 			.may_block = true,
 		};
 
+		if (!found_memslot) {
+			found_memslot = true;
+
+			KVM_MMU_LOCK(kvm);
+			kvm_mmu_invalidate_begin(kvm);
+		}
+
 		flush |= kvm_mmu_unmap_gfn_range(kvm, &gfn_range);
 	}
 
 	if (flush)
 		kvm_flush_remote_tlbs(kvm);
 
-	KVM_MMU_UNLOCK(kvm);
+	if (found_memslot)
+		KVM_MMU_UNLOCK(kvm);
 }
 
 static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
@@ -121,10 +125,11 @@ static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
 {
 	struct kvm *kvm = gmem->kvm;
 
-	KVM_MMU_LOCK(kvm);
-	if (xa_find(&gmem->bindings, &start, end - 1, XA_PRESENT))
+	if (xa_find(&gmem->bindings, &start, end - 1, XA_PRESENT)) {
+		KVM_MMU_LOCK(kvm);
 		kvm_mmu_invalidate_end(kvm);
-	KVM_MMU_UNLOCK(kvm);
+		KVM_MMU_UNLOCK(kvm);
+	}
 }
 
 static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)

From patchwork Thu Sep 21 20:33:23 2023
From: Sean Christopherson <seanjc@google.com>
Date: Thu, 21 Sep 2023 13:33:23 -0700
Subject: [PATCH 06/13] KVM: Disallow hugepages for incompatible gmem bindings, but let 'em succeed
Message-ID: <20230921203331.3746712-7-seanjc@google.com>
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Remove the restriction that a guest_memfd instance that supports
hugepages can *only* be bound by memslots that are 100% compatible with
hugepage mappings, and instead force KVM to use an order-0 mapping if
the binding isn't compatible with hugepages.
The intent of the draconian binding restriction was purely to simplify the guest_memfd implementation, e.g. to avoid repeatining the existing logic in KVM x86ial for precisely tracking which GFNs support hugepages. But checking that the binding's offset and size is compatible is just as easy to do when KVM wants to create a mapping. And on the other hand, completely rejecting bindings that are incompatible with hugepages makes it practically impossible for userspace to use a single guest_memfd instance for all guest memory, e.g. on x86 it would be impossible to skip the legacy VGA hole while still allowing hugepage mappings for the rest of guest memory. Suggested-by: Michael Roth Link: https://lore.kernel.org/all/20230918163647.m6bjgwusc7ww5tyu@amd.com Signed-off-by: Sean Christopherson --- virt/kvm/guest_mem.c | 54 ++++++++++++++++++++++---------------------- 1 file changed, 27 insertions(+), 27 deletions(-) diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c index 68528e9cddd7..4f3a313f5532 100644 --- a/virt/kvm/guest_mem.c +++ b/virt/kvm/guest_mem.c @@ -434,20 +434,6 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags, return err; } -static bool kvm_gmem_is_valid_size(loff_t size, u64 flags) -{ - if (size < 0 || !PAGE_ALIGNED(size)) - return false; - -#ifdef CONFIG_TRANSPARENT_HUGEPAGE - if ((flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE) && - !IS_ALIGNED(size, HPAGE_PMD_SIZE)) - return false; -#endif - - return true; -} - int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args) { loff_t size = args->size; @@ -460,9 +446,15 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args) if (flags & ~valid_flags) return -EINVAL; - if (!kvm_gmem_is_valid_size(size, flags)) + if (size < 0 || !PAGE_ALIGNED(size)) return -EINVAL; +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + if ((flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE) && + !IS_ALIGNED(size, HPAGE_PMD_SIZE)) + return -EINVAL; +#endif + return __kvm_gmem_create(kvm, size, flags, 
kvm_gmem_mnt); } @@ -470,7 +462,7 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot, unsigned int fd, loff_t offset) { loff_t size = slot->npages << PAGE_SHIFT; - unsigned long start, end, flags; + unsigned long start, end; struct kvm_gmem *gmem; struct inode *inode; struct file *file; @@ -489,16 +481,9 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot, goto err; inode = file_inode(file); - flags = (unsigned long)inode->i_private; - /* - * For simplicity, require the offset into the file and the size of the - * memslot to be aligned to the largest possible page size used to back - * the file (same as the size of the file itself). - */ - if (!kvm_gmem_is_valid_size(offset, flags) || - !kvm_gmem_is_valid_size(size, flags)) - goto err; + if (offset < 0 || !PAGE_ALIGNED(offset)) + return -EINVAL; if (offset + size > i_size_read(inode)) goto err; @@ -599,8 +584,23 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, page = folio_file_page(folio, index); *pfn = page_to_pfn(page); - if (max_order) - *max_order = compound_order(compound_head(page)); + if (!max_order) + goto success; + + *max_order = compound_order(compound_head(page)); + if (!*max_order) + goto success; + + /* + * For simplicity, allow mapping a hugepage if and only if the entire + * binding is compatible, i.e. don't bother supporting mapping interior + * sub-ranges with hugepages (unless userspace comes up with a *really* + * strong use case for needing hugepages within unaligned bindings). 
+	 */
+	if (!IS_ALIGNED(slot->gmem.pgoff, 1ull << *max_order) ||
+	    !IS_ALIGNED(slot->npages, 1ull << *max_order))
+		*max_order = 0;
+success:
 	r = 0;
 
 out_unlock:
From patchwork Thu Sep 21 20:33:24 2023
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:24 -0700
Subject: [PATCH 07/13] KVM: x86/mmu: Track PRIVATE impact on hugepage mappings for all memslots
Message-ID: <20230921203331.3746712-8-seanjc@google.com>

Track the effects of private attributes on potential hugepage mappings if
the VM supports private memory, i.e. even if the target memslot can only
ever be mapped shared.
If userspace configures a chunk of memory as private, KVM must not allow
that memory to be mapped shared regardless of whether or not the *current*
memslot can be mapped private.  E.g. if the guest accesses a private range
using a shared memslot, then KVM must exit to userspace.

Fixes: 5bb0b4e162d1 ("KVM: x86: Disallow hugepages when memory attributes are mixed")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 269d4dc47c98..148931cf9dba 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7314,10 +7314,12 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
 	lockdep_assert_held(&kvm->slots_lock);
 
 	/*
-	 * KVM x86 currently only supports KVM_MEMORY_ATTRIBUTE_PRIVATE, skip
-	 * the slot if the slot will never consume the PRIVATE attribute.
+	 * Calculate which ranges can be mapped with hugepages even if the slot
+	 * can't map memory PRIVATE.  KVM mustn't create a SHARED hugepage over
+	 * a range that has PRIVATE GFNs, and conversely converting a range to
+	 * SHARED may now allow hugepages.
	 */
-	if (!kvm_slot_can_be_private(slot))
+	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
 		return false;
 
 	/*
@@ -7372,7 +7374,7 @@ void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm,
 {
 	int level;
 
-	if (!kvm_slot_can_be_private(slot))
+	if (!kvm_arch_has_private_mem(kvm))
 		return;
 
 	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {

From patchwork Thu Sep 21 20:33:25 2023
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:25 -0700
Subject: [PATCH 08/13] KVM: x86/mmu: Zap shared-only memslots when private attribute changes
Message-ID: <20230921203331.3746712-9-seanjc@google.com>

Zap all relevant memslots,
including shared-only memslots, if the private memory attribute is being
changed.  If userspace converts a range to private, KVM must zap shared
SPTEs to prevent the guest from accessing the memory as shared.  If
userspace converts a range to shared, zapping SPTEs for shared-only
memslots isn't strictly necessary, but doing so ensures that KVM will
install a hugepage mapping if possible, e.g. if a 2MiB range that was
mixed is converted to be 100% shared.

Fixes: dcde045383f3 ("KVM: x86/mmu: Handle page fault for private memory")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 148931cf9dba..aa67d9d6fcf8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7259,10 +7259,17 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
 					struct kvm_gfn_range *range)
 {
 	/*
-	 * KVM x86 currently only supports KVM_MEMORY_ATTRIBUTE_PRIVATE, skip
-	 * the slot if the slot will never consume the PRIVATE attribute.
+	 * Zap SPTEs even if the slot can't be mapped PRIVATE.  KVM x86 only
+	 * supports KVM_MEMORY_ATTRIBUTE_PRIVATE, and so it *seems* like KVM
+	 * can simply ignore such slots.  But if userspace is making memory
+	 * PRIVATE, then KVM must prevent the guest from accessing the memory
+	 * as shared.  And if userspace is making memory SHARED and this point
+	 * is reached, then at least one page within the range was previously
+	 * PRIVATE, i.e. the slot's possible hugepage ranges are changing.
+	 * Zapping SPTEs in this case ensures KVM will reassess whether or not
+	 * a hugepage can be used for affected ranges.
	 */
-	if (!kvm_slot_can_be_private(range->slot))
+	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
 		return false;
 
 	return kvm_mmu_unmap_gfn_range(kvm, range);

From patchwork Thu Sep 21 20:33:26 2023
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:26 -0700
Subject: [PATCH 09/13] KVM: Always add relevant ranges to invalidation set when changing attributes
Message-ID: <20230921203331.3746712-10-seanjc@google.com>

When setting memory attributes, add all affected memslot ranges to the
set of invalidation ranges before calling into arch code.
Even if the change in attributes doesn't strictly require zapping, it's
not at all obvious that letting arch code establish new mappings while
the attributes are in flux is safe and/or desirable.

Unconditionally adding ranges allows KVM to keep its sanity check that at
least one range is added between begin() and end(), e.g. to guard against
a missed add() call, without needing complex code to condition the
begin()/end() on arch behavior.

Fixes: 9a327182447a ("KVM: Introduce per-page memory attributes")
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 277afeedd670..96fc609459e3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2529,6 +2529,25 @@ static __always_inline void kvm_handle_gfn_range(struct kvm *kvm,
 	KVM_MMU_UNLOCK(kvm);
 }
 
+static bool kvm_pre_set_memory_attributes(struct kvm *kvm,
+					  struct kvm_gfn_range *range)
+{
+	/*
+	 * Unconditionally add the range to the invalidation set, regardless of
+	 * whether or not the arch callback actually needs to zap SPTEs.  E.g.
+	 * if KVM supports RWX attributes in the future and the attributes are
+	 * going from R=>RW, zapping isn't strictly necessary.  Unconditionally
+	 * adding the range allows KVM to require that MMU invalidations add at
+	 * least one range between begin() and end(), e.g. allows KVM to detect
+	 * bugs where the add() is missed.  Relaxing the rule *might* be safe,
+	 * but it's not obvious that allowing new mappings while the attributes
+	 * are in flux is desirable or worth the complexity.
+	 */
+	kvm_mmu_invalidate_range_add(kvm, range->start, range->end);
+
+	return kvm_arch_pre_set_memory_attributes(kvm, range);
+}
+
 /* Set @attributes for the gfn range [@start, @end).
 */
 static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
 				     unsigned long attributes)
@@ -2536,7 +2555,7 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
 	struct kvm_mmu_notifier_range pre_set_range = {
 		.start = start,
 		.end = end,
-		.handler = kvm_arch_pre_set_memory_attributes,
+		.handler = kvm_pre_set_memory_attributes,
 		.on_lock = kvm_mmu_invalidate_begin,
 		.flush_on_ret = true,
 		.may_block = true,

From patchwork Thu Sep 21 20:33:27 2023
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:27 -0700
Subject: [PATCH 10/13] KVM: x86/mmu: Drop repeated add() of to-be-invalidated range
Message-ID: <20230921203331.3746712-11-seanjc@google.com>
Use kvm_unmap_gfn_range() instead of kvm_mmu_unmap_gfn_range() when
handling memory attribute ranges now that common KVM adds the target
range to the invalidation set, i.e. calls kvm_mmu_invalidate_range_add()
before invoking the arch callback.

Fixes: dcde045383f3 ("KVM: x86/mmu: Handle page fault for private memory")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index aa67d9d6fcf8..bcb812a7f563 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7272,7 +7272,7 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
 	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
 		return false;
 
-	return kvm_mmu_unmap_gfn_range(kvm, range);
+	return kvm_unmap_gfn_range(kvm, range);
 }
 
 static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,

From patchwork Thu Sep 21 20:33:28 2023
Date: Thu, 21 Sep 2023 13:33:28 -0700
From: Sean Christopherson
Subject: [PATCH 11/13] KVM: selftests: Refactor private mem conversions to prep for punch_hole test
Message-ID: <20230921203331.3746712-12-seanjc@google.com>

Refactor the private memory conversions test to prepare for adding a test
to verify PUNCH_HOLE functionality *without* actually doing a proper
conversion, i.e. without calling KVM_SET_MEMORY_ATTRIBUTES.

Make setting attributes optional, rename the guest code to be more
descriptive, and extract the ranges to a global variable (iterating over
multiple ranges is less interesting for PUNCH_HOLE, but with a common
array it's trivially easy to do so).

Fixes: 90535ca08f76 ("KVM: selftests: Add x86-only selftest for private memory conversions")
Signed-off-by: Sean Christopherson
---
 .../kvm/x86_64/private_mem_conversions_test.c | 51 ++++++++++---------
 1 file changed, 27 insertions(+), 24 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index 50541246d6fd..b80cf7342d0d 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -83,13 +83,14 @@ static void guest_sync_private(uint64_t gpa, uint64_t size, uint8_t pattern)
 }
 
 /* Arbitrary values, KVM doesn't care about the attribute flags.
 */
-#define MAP_GPA_SHARED		BIT(0)
-#define MAP_GPA_DO_FALLOCATE	BIT(1)
+#define MAP_GPA_SET_ATTRIBUTES	BIT(0)
+#define MAP_GPA_SHARED		BIT(1)
+#define MAP_GPA_DO_FALLOCATE	BIT(2)
 
 static void guest_map_mem(uint64_t gpa, uint64_t size, bool map_shared,
 			  bool do_fallocate)
 {
-	uint64_t flags = 0;
+	uint64_t flags = MAP_GPA_SET_ATTRIBUTES;
 
 	if (map_shared)
 		flags |= MAP_GPA_SHARED;
@@ -108,19 +109,19 @@ static void guest_map_private(uint64_t gpa, uint64_t size, bool do_fallocate)
 	guest_map_mem(gpa, size, false, do_fallocate);
 }
 
-static void guest_run_test(uint64_t base_gpa, bool do_fallocate)
+struct {
+	uint64_t offset;
+	uint64_t size;
+} static const test_ranges[] = {
+	GUEST_STAGE(0, PAGE_SIZE),
+	GUEST_STAGE(0, SZ_2M),
+	GUEST_STAGE(PAGE_SIZE, PAGE_SIZE),
+	GUEST_STAGE(PAGE_SIZE, SZ_2M),
+	GUEST_STAGE(SZ_2M, PAGE_SIZE),
+};
+
+static void guest_test_explicit_conversion(uint64_t base_gpa, bool do_fallocate)
 {
-	struct {
-		uint64_t offset;
-		uint64_t size;
-		uint8_t pattern;
-	} stages[] = {
-		GUEST_STAGE(0, PAGE_SIZE),
-		GUEST_STAGE(0, SZ_2M),
-		GUEST_STAGE(PAGE_SIZE, PAGE_SIZE),
-		GUEST_STAGE(PAGE_SIZE, SZ_2M),
-		GUEST_STAGE(SZ_2M, PAGE_SIZE),
-	};
 	const uint8_t init_p = 0xcc;
 	uint64_t j;
 	int i;
@@ -130,9 +131,9 @@ static void guest_run_test(uint64_t base_gpa, bool do_fallocate)
 	guest_sync_shared(base_gpa, PER_CPU_DATA_SIZE, (uint8_t)~init_p, init_p);
 	memcmp_g(base_gpa, init_p, PER_CPU_DATA_SIZE);
 
-	for (i = 0; i < ARRAY_SIZE(stages); i++) {
-		uint64_t gpa = base_gpa + stages[i].offset;
-		uint64_t size = stages[i].size;
+	for (i = 0; i < ARRAY_SIZE(test_ranges); i++) {
+		uint64_t gpa = base_gpa + test_ranges[i].offset;
+		uint64_t size = test_ranges[i].size;
 		uint8_t p1 = 0x11;
 		uint8_t p2 = 0x22;
 		uint8_t p3 = 0x33;
@@ -214,11 +215,11 @@ static void guest_run_test(uint64_t base_gpa, bool do_fallocate)
 static void guest_code(uint64_t base_gpa)
 {
 	/*
-	 * Run everything twice, with and without doing fallocate() on the
-	 * guest_memfd backing when converting between shared and private.
+	 * Run the conversion test twice, with and without doing fallocate() on
+	 * the guest_memfd backing when converting between shared and private.
 	 */
-	guest_run_test(base_gpa, false);
-	guest_run_test(base_gpa, true);
+	guest_test_explicit_conversion(base_gpa, false);
+	guest_test_explicit_conversion(base_gpa, true);
 
 	GUEST_DONE();
 }
@@ -227,6 +228,7 @@ static void handle_exit_hypercall(struct kvm_vcpu *vcpu)
 	struct kvm_run *run = vcpu->run;
 	uint64_t gpa = run->hypercall.args[0];
 	uint64_t size = run->hypercall.args[1] * PAGE_SIZE;
+	bool set_attributes = run->hypercall.args[2] & MAP_GPA_SET_ATTRIBUTES;
 	bool map_shared = run->hypercall.args[2] & MAP_GPA_SHARED;
 	bool do_fallocate = run->hypercall.args[2] & MAP_GPA_DO_FALLOCATE;
 	struct kvm_vm *vm = vcpu->vm;
@@ -238,8 +240,9 @@ static void handle_exit_hypercall(struct kvm_vcpu *vcpu)
 	if (do_fallocate)
 		vm_guest_mem_fallocate(vm, gpa, size, map_shared);
 
-	vm_set_memory_attributes(vm, gpa, size,
-				 map_shared ? 0 : KVM_MEMORY_ATTRIBUTE_PRIVATE);
+	if (set_attributes)
+		vm_set_memory_attributes(vm, gpa, size,
+					 map_shared ? 0 : KVM_MEMORY_ATTRIBUTE_PRIVATE);
 
 	run->hypercall.ret = 0;
 }

From patchwork Thu Sep 21 20:33:29 2023
From: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:29 -0700
Subject: [PATCH 12/13] KVM: selftests: Add a "pure" PUNCH_HOLE on guest_memfd testcase
Message-ID: <20230921203331.3746712-13-seanjc@google.com>

Add a PUNCH_HOLE testcase to the private memory conversions test that
verifies PUNCH_HOLE actually frees memory.  Directly verifying that KVM
frees memory is impractical, if it's even possible, so instead indirectly
verify memory is freed by asserting that the guest reads zeroes after a
PUNCH_HOLE.  E.g.
if KVM zaps SPTEs but doesn't actually punch a hole in the inode, the
subsequent read will still see the previous value.  And obviously punching
a hole shouldn't cause explosions.

Signed-off-by: Sean Christopherson
---
 .../kvm/x86_64/private_mem_conversions_test.c | 61 +++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index b80cf7342d0d..c04e7d61a585 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -212,6 +212,60 @@ static void guest_test_explicit_conversion(uint64_t base_gpa, bool do_fallocate)
 	}
 }
 
+static void guest_punch_hole(uint64_t gpa, uint64_t size)
+{
+	/* "Mapping" memory shared via fallocate() is done via PUNCH_HOLE. */
+	uint64_t flags = MAP_GPA_SHARED | MAP_GPA_DO_FALLOCATE;
+
+	kvm_hypercall_map_gpa_range(gpa, size, flags);
+}
+
+/*
+ * Test that PUNCH_HOLE actually frees memory by punching holes without doing a
+ * proper conversion.  Freeing (PUNCH_HOLE) should zap SPTEs, and reallocating
+ * (subsequent fault) should zero memory.
+ */
+static void guest_test_punch_hole(uint64_t base_gpa, bool precise)
+{
+	const uint8_t init_p = 0xcc;
+	int i;
+
+	/*
+	 * Convert the entire range to private, this testcase is all about
+	 * punching holes in guest_memfd, i.e. shared mappings aren't needed.
+	 */
+	guest_map_private(base_gpa, PER_CPU_DATA_SIZE, false);
+
+	for (i = 0; i < ARRAY_SIZE(test_ranges); i++) {
+		uint64_t gpa = base_gpa + test_ranges[i].offset;
+		uint64_t size = test_ranges[i].size;
+
+		/*
+		 * Free all memory before each iteration, even for the !precise
+		 * case where the memory will be faulted back in.  Freeing and
+		 * reallocating should obviously work, and freeing all memory
+		 * minimizes the probability of cross-testcase influence.
+		 */
+		guest_punch_hole(base_gpa, PER_CPU_DATA_SIZE);
+
+		/* Fault-in and initialize memory, and verify the pattern. */
+		if (precise) {
+			memset((void *)gpa, init_p, size);
+			memcmp_g(gpa, init_p, size);
+		} else {
+			memset((void *)base_gpa, init_p, PER_CPU_DATA_SIZE);
+			memcmp_g(base_gpa, init_p, PER_CPU_DATA_SIZE);
+		}
+
+		/*
+		 * Punch a hole at the target range and verify that reads from
+		 * the guest succeed and return zeroes.
+		 */
+		guest_punch_hole(gpa, size);
+		memcmp_g(gpa, 0, size);
+	}
+}
+
 static void guest_code(uint64_t base_gpa)
 {
 	/*
@@ -220,6 +274,13 @@ static void guest_code(uint64_t base_gpa)
 	 */
 	guest_test_explicit_conversion(base_gpa, false);
 	guest_test_explicit_conversion(base_gpa, true);
+
+	/*
+	 * Run the PUNCH_HOLE test twice too, once with the entire guest_memfd
+	 * faulted in, once with only the target range faulted in.
+	 */
+	guest_test_punch_hole(base_gpa, false);
+	guest_test_punch_hole(base_gpa, true);
 
 	GUEST_DONE();
 }

From patchwork Thu Sep 21 20:33:30 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13394681
Reply-To: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:30 -0700
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Message-ID: <20230921203331.3746712-14-seanjc@google.com>
Subject: [PATCH 13/13] KVM: Rename guest_mem.c to guest_memfd.c
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth, Binbin Wu

Use guest_memfd.c for the KVM_CREATE_GUEST_MEMFD implementation to make
it more obvious that the file holds more than generic "guest memory"
APIs, and to provide a stronger conceptual connection with memfd.c.

Signed-off-by: Sean Christopherson
---
 virt/kvm/Makefile.kvm                   | 2 +-
 virt/kvm/{guest_mem.c => guest_memfd.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename virt/kvm/{guest_mem.c => guest_memfd.c} (100%)

diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index a5a61bbe7f4c..724c89af78af 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -12,4 +12,4 @@ kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
 kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
 kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
 kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
-kvm-$(CONFIG_KVM_PRIVATE_MEM) += $(KVM)/guest_mem.o
+kvm-$(CONFIG_KVM_PRIVATE_MEM) += $(KVM)/guest_memfd.o
diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_memfd.c
similarity index 100%
rename from virt/kvm/guest_mem.c
rename to virt/kvm/guest_memfd.c