From patchwork Thu Feb 29 02:57:52 2024
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 13576443
From: David Stevens
To: Sean Christopherson, Paolo Bonzini
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH v11 1/8] KVM: Assert that a page's refcount is elevated when marking accessed/dirty
Date: Thu, 29 Feb 2024 11:57:52 +0900
Message-ID: <20240229025759.1187910-2-stevensd@google.com>
In-Reply-To: <20240229025759.1187910-1-stevensd@google.com>
References: <20240229025759.1187910-1-stevensd@google.com>

From: Sean Christopherson

Assert that a page's refcount is elevated, i.e. that _something_ holds a reference to the page, when KVM marks a page as accessed and/or dirty. KVM typically doesn't hold a reference to pages that are mapped into the guest, e.g. to allow page migration, compaction, swap, etc., and instead relies on mmu_notifiers to react to changes in the primary MMU.

Incorrect handling of mmu_notifier events (or similar mechanisms) can result in KVM keeping a mapping beyond the lifetime of the backing page, i.e. can (and often does) result in use-after-free. Yelling if KVM marks a freed page as accessed/dirty doesn't prevent badness, as KVM usually only does A/D updates when unmapping memory from the guest, i.e. the assertion fires well after an underlying bug has occurred, but yelling does help detect, triage, and debug use-after-free bugs.

Note, the assertion must use page_count(), NOT page_ref_count()! For hugepages, the returned struct page may be a tail page and thus not have its own refcount.

Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 10bfc88a69f7..c5e4bf7c48f9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3204,6 +3204,19 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
 static bool kvm_is_ad_tracked_page(struct page *page)
 {
+	/*
+	 * Assert that KVM isn't attempting to mark a freed page as Accessed or
+	 * Dirty, i.e. that KVM's MMU doesn't have a use-after-free bug. KVM
+	 * (typically) doesn't pin pages that are mapped in KVM's MMU, and
+	 * instead relies on mmu_notifiers to know when a mapping needs to be
+	 * zapped/invalidated. Unmapping from KVM's MMU must happen _before_
+	 * KVM returns from its mmu_notifier, i.e. the page should have an
+	 * elevated refcount at this point even though KVM doesn't hold a
+	 * reference of its own.
+	 */
+	if (WARN_ON_ONCE(!page_count(page)))
+		return false;
+
 	/*
 	 * Per page-flags.h, pages tagged PG_reserved "should in general not be
 	 * touched (e.g. set dirty) except by its owner".
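A simplified sketch of the page_count() vs. page_ref_count() distinction called out above. The helper bodies are illustrative approximations of the mm helpers (which go through folios upstream), the names are made up for this sketch, and a kernel build context is assumed.

#include <linux/mm.h>

/*
 * A raw per-page read sees this page's own _refcount, which is 0 for the
 * tail pages of a compound page (e.g. a hugepage). Resolving to the head
 * page first sees the refcount that actually protects the allocation,
 * which is why the assertion must use the page_count() flavor.
 */
static inline int sketch_page_ref_count(struct page *page)
{
	return atomic_read(&page->_refcount);		/* 0 for tail pages */
}

static inline int sketch_page_count(struct page *page)
{
	return atomic_read(&compound_head(page)->_refcount);
}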
From patchwork Thu Feb 29 02:57:53 2024
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 13576444
From: David Stevens
To: Sean Christopherson, Paolo Bonzini
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v11 2/8] KVM: Relax BUG_ON argument validation
Date: Thu, 29 Feb 2024 11:57:53 +0900
Message-ID: <20240229025759.1187910-3-stevensd@google.com>
In-Reply-To: <20240229025759.1187910-1-stevensd@google.com>
References: <20240229025759.1187910-1-stevensd@google.com>

From: David Stevens

hva_to_pfn() includes a check that KVM isn't trying to do an async page fault in a situation where it can't sleep. Downgrade this check from a BUG_ON() to a WARN_ON_ONCE(), since DoS'ing the guest (at worst) is better than bringing down the host.

Suggested-by: Sean Christopherson
Signed-off-by: David Stevens
---
 virt/kvm/kvm_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c5e4bf7c48f9..6f37d56fb2fc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2979,7 +2979,7 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
 	int npages, r;
 
 	/* we can do it either atomically or asynchronously, not both */
-	BUG_ON(atomic && async);
+	WARN_ON_ONCE(atomic && async);
 
 	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
 		return pfn;
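A minimal sketch of the trade-off being made here, assuming kernel context; this is an illustration of the general BUG_ON()-vs-WARN_ON_ONCE() pattern, not additional KVM code.

/*
 * Sketch: BUG_ON() panics the host on a violated invariant; WARN_ON_ONCE()
 * logs a single stack trace and lets execution continue, so the blast
 * radius is limited to the (possibly misbehaving) guest request.
 */
static void example_check(bool atomic, bool async)
{
	/* Old behavior: BUG_ON(atomic && async);  -- brings down the host. */

	/* New behavior: report the bug once and carry on. */
	WARN_ON_ONCE(atomic && async);
}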
From patchwork Thu Feb 29 02:57:54 2024
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 13576445

From: David Stevens
To: Sean Christopherson, Paolo Bonzini
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v11 3/8] KVM: mmu: Introduce kvm_follow_pfn()
Date: Thu, 29 Feb 2024 11:57:54 +0900
Message-ID: <20240229025759.1187910-4-stevensd@google.com>
In-Reply-To: <20240229025759.1187910-1-stevensd@google.com>
References: <20240229025759.1187910-1-stevensd@google.com>

From: David Stevens

Introduce kvm_follow_pfn(), which will replace __gfn_to_pfn_memslot(). This initial implementation is just a refactor of the existing API which uses a single structure for passing the arguments.
The arguments are further refactored as follows:

 - The write_fault and interruptible boolean flags and the in parameter part of async are replaced by setting FOLL_WRITE, FOLL_INTERRUPTIBLE, and FOLL_NOWAIT respectively in a new flags argument.
 - The out parameter portion of the async parameter is now a return value.
 - The writable in/out parameter is split into a separate try_map_writable in parameter and a writable out parameter.
 - All other parameters are the same.

Upcoming changes will add the ability to get a pfn without needing to take a ref to the underlying page.

Signed-off-by: David Stevens
Reviewed-by: Maxim Levitsky
---
 include/linux/kvm_host.h | 18 ++++
 virt/kvm/kvm_main.c | 191 +++++++++++++++++++++------------
 virt/kvm/kvm_mm.h | 3 +-
 virt/kvm/pfncache.c | 10 +-
 4 files changed, 131 insertions(+), 91 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 7e7fd25b09b3..290db5133c36 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -97,6 +97,7 @@ #define KVM_PFN_ERR_HWPOISON (KVM_PFN_ERR_MASK + 1) #define KVM_PFN_ERR_RO_FAULT (KVM_PFN_ERR_MASK + 2) #define KVM_PFN_ERR_SIGPENDING (KVM_PFN_ERR_MASK + 3) +#define KVM_PFN_ERR_NEEDS_IO (KVM_PFN_ERR_MASK + 4) /* * error pfns indicate that the gfn is in slot but faild to @@ -1209,6 +1210,23 @@ unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn, void kvm_release_page_clean(struct page *page); void kvm_release_page_dirty(struct page *page); +struct kvm_follow_pfn { + const struct kvm_memory_slot *slot; + gfn_t gfn; + /* FOLL_* flags modifying lookup behavior. */ + unsigned int flags; + /* Whether this function can sleep. */ + bool atomic; + /* Try to create a writable mapping even for a read fault. */ + bool try_map_writable; + + /* Outputs of kvm_follow_pfn */ + hva_t hva; + bool writable; +}; + +kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp); + kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn); kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 6f37d56fb2fc..575756c9c5b0 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2791,8 +2791,7 @@ static inline int check_user_page_hwpoison(unsigned long addr) * true indicates success, otherwise false is returned. It's also the * only part that runs if we can in atomic context. */ -static bool hva_to_pfn_fast(unsigned long addr, bool write_fault, - bool *writable, kvm_pfn_t *pfn) +static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn) { struct page *page[1]; @@ -2801,14 +2800,12 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault, * or the caller allows to map a writable pfn for a read fault * request. */ - if (!(write_fault || writable)) + if (!((kfp->flags & FOLL_WRITE) || kfp->try_map_writable)) return false; - if (get_user_page_fast_only(addr, FOLL_WRITE, page)) { + if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, page)) { *pfn = page_to_pfn(page[0]); - - if (writable) - *writable = true; + kfp->writable = true; return true; } @@ -2819,8 +2816,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault, * The slow path to get the pfn of the specified host virtual address, * 1 indicates success, -errno is returned if error is detected.
*/ -static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault, - bool interruptible, bool *writable, kvm_pfn_t *pfn) +static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn) { /* * When a VCPU accesses a page that is not mapped into the secondary @@ -2833,32 +2829,24 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault, * Note that get_user_page_fast_only() and FOLL_WRITE for now * implicitly honor NUMA hinting faults and don't need this flag. */ - unsigned int flags = FOLL_HWPOISON | FOLL_HONOR_NUMA_FAULT; + unsigned int flags = FOLL_HWPOISON | FOLL_HONOR_NUMA_FAULT | kfp->flags; struct page *page; int npages; might_sleep(); - if (writable) - *writable = write_fault; - - if (write_fault) - flags |= FOLL_WRITE; - if (async) - flags |= FOLL_NOWAIT; - if (interruptible) - flags |= FOLL_INTERRUPTIBLE; - - npages = get_user_pages_unlocked(addr, 1, &page, flags); + npages = get_user_pages_unlocked(kfp->hva, 1, &page, flags); if (npages != 1) return npages; - /* map read fault as writable if possible */ - if (unlikely(!write_fault) && writable) { + if (kfp->flags & FOLL_WRITE) { + kfp->writable = true; + } else if (kfp->try_map_writable) { struct page *wpage; - if (get_user_page_fast_only(addr, FOLL_WRITE, &wpage)) { - *writable = true; + /* map read fault as writable if possible */ + if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &wpage)) { + kfp->writable = true; put_page(page); page = wpage; } @@ -2889,23 +2877,23 @@ static int kvm_try_get_pfn(kvm_pfn_t pfn) } static int hva_to_pfn_remapped(struct vm_area_struct *vma, - unsigned long addr, bool write_fault, - bool *writable, kvm_pfn_t *p_pfn) + struct kvm_follow_pfn *kfp, kvm_pfn_t *p_pfn) { kvm_pfn_t pfn; pte_t *ptep; pte_t pte; spinlock_t *ptl; + bool write_fault = kfp->flags & FOLL_WRITE; int r; - r = follow_pte(vma->vm_mm, addr, &ptep, &ptl); + r = follow_pte(vma->vm_mm, kfp->hva, &ptep, &ptl); if (r) { /* * get_user_pages fails for VM_IO and VM_PFNMAP vmas and does * not call the fault handler, so do it here. */ bool unlocked = false; - r = fixup_user_fault(current->mm, addr, + r = fixup_user_fault(current->mm, kfp->hva, (write_fault ? FAULT_FLAG_WRITE : 0), &unlocked); if (unlocked) @@ -2913,7 +2901,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma, if (r) return r; - r = follow_pte(vma->vm_mm, addr, &ptep, &ptl); + r = follow_pte(vma->vm_mm, kfp->hva, &ptep, &ptl); if (r) return r; } @@ -2925,8 +2913,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma, goto out; } - if (writable) - *writable = pte_write(pte); + kfp->writable = pte_write(pte); pfn = pte_pfn(pte); /* @@ -2957,38 +2944,28 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma, } /* - * Pin guest page in memory and return its pfn. - * @addr: host virtual address which maps memory to the guest - * @atomic: whether this function can sleep - * @interruptible: whether the process can be interrupted by non-fatal signals - * @async: whether this function need to wait IO complete if the - * host page is not in the memory - * @write_fault: whether we should get a writable host page - * @writable: whether it allows to map a writable host page for !@write_fault - * - * The function will map a writable host page for these two cases: - * 1): @write_fault = true - * 2): @write_fault = false && @writable, @writable will tell the caller - * whether the mapping is writable. + * Convert a hva to a pfn. 
+ * @kfp: args struct for the conversion */ -kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible, - bool *async, bool write_fault, bool *writable) +kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp) { struct vm_area_struct *vma; kvm_pfn_t pfn; int npages, r; - /* we can do it either atomically or asynchronously, not both */ - WARN_ON_ONCE(atomic && async); + /* + * FOLL_NOWAIT is used for async page faults, which don't make sense + * in an atomic context where the caller can't do async resolution. + */ + WARN_ON_ONCE(kfp->atomic && (kfp->flags & FOLL_NOWAIT)); - if (hva_to_pfn_fast(addr, write_fault, writable, &pfn)) + if (hva_to_pfn_fast(kfp, &pfn)) return pfn; - if (atomic) + if (kfp->atomic) return KVM_PFN_ERR_FAULT; - npages = hva_to_pfn_slow(addr, async, write_fault, interruptible, - writable, &pfn); + npages = hva_to_pfn_slow(kfp, &pfn); if (npages == 1) return pfn; if (npages == -EINTR) @@ -2996,83 +2973,123 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible, mmap_read_lock(current->mm); if (npages == -EHWPOISON || - (!async && check_user_page_hwpoison(addr))) { + (!(kfp->flags & FOLL_NOWAIT) && check_user_page_hwpoison(kfp->hva))) { pfn = KVM_PFN_ERR_HWPOISON; goto exit; } retry: - vma = vma_lookup(current->mm, addr); + vma = vma_lookup(current->mm, kfp->hva); if (vma == NULL) pfn = KVM_PFN_ERR_FAULT; else if (vma->vm_flags & (VM_IO | VM_PFNMAP)) { - r = hva_to_pfn_remapped(vma, addr, write_fault, writable, &pfn); + r = hva_to_pfn_remapped(vma, kfp, &pfn); if (r == -EAGAIN) goto retry; if (r < 0) pfn = KVM_PFN_ERR_FAULT; } else { - if (async && vma_is_valid(vma, write_fault)) - *async = true; - pfn = KVM_PFN_ERR_FAULT; + if ((kfp->flags & FOLL_NOWAIT) && + vma_is_valid(vma, kfp->flags & FOLL_WRITE)) + pfn = KVM_PFN_ERR_NEEDS_IO; + else + pfn = KVM_PFN_ERR_FAULT; } exit: mmap_read_unlock(current->mm); return pfn; } -kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, - bool atomic, bool interruptible, bool *async, - bool write_fault, bool *writable, hva_t *hva) +kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp) { - unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault); + kfp->writable = false; + kfp->hva = __gfn_to_hva_many(kfp->slot, kfp->gfn, NULL, + kfp->flags & FOLL_WRITE); - if (hva) - *hva = addr; - - if (addr == KVM_HVA_ERR_RO_BAD) { - if (writable) - *writable = false; + if (kfp->hva == KVM_HVA_ERR_RO_BAD) return KVM_PFN_ERR_RO_FAULT; - } - if (kvm_is_error_hva(addr)) { - if (writable) - *writable = false; + if (kvm_is_error_hva(kfp->hva)) return KVM_PFN_NOSLOT; - } - /* Do not map writable pfn in the readonly memslot. 
*/ - if (writable && memslot_is_readonly(slot)) { - *writable = false; - writable = NULL; - } + if (memslot_is_readonly(kfp->slot)) + kfp->try_map_writable = false; + + return hva_to_pfn(kfp); +} +EXPORT_SYMBOL_GPL(kvm_follow_pfn); + +kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, + bool atomic, bool interruptible, bool *async, + bool write_fault, bool *writable, hva_t *hva) +{ + kvm_pfn_t pfn; + struct kvm_follow_pfn kfp = { + .slot = slot, + .gfn = gfn, + .flags = 0, + .atomic = atomic, + .try_map_writable = !!writable, + }; + + if (write_fault) + kfp.flags |= FOLL_WRITE; + if (async) + kfp.flags |= FOLL_NOWAIT; + if (interruptible) + kfp.flags |= FOLL_INTERRUPTIBLE; - return hva_to_pfn(addr, atomic, interruptible, async, write_fault, - writable); + pfn = kvm_follow_pfn(&kfp); + if (pfn == KVM_PFN_ERR_NEEDS_IO) { + *async = true; + pfn = KVM_PFN_ERR_FAULT; + } + if (hva) + *hva = kfp.hva; + if (writable) + *writable = kfp.writable; + return pfn; } EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot); kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable) { - return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false, - NULL, write_fault, writable, NULL); + kvm_pfn_t pfn; + struct kvm_follow_pfn kfp = { + .slot = gfn_to_memslot(kvm, gfn), + .gfn = gfn, + .flags = write_fault ? FOLL_WRITE : 0, + .try_map_writable = !!writable, + }; + pfn = kvm_follow_pfn(&kfp); + if (writable) + *writable = kfp.writable; + return pfn; } EXPORT_SYMBOL_GPL(gfn_to_pfn_prot); kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, false, false, NULL, true, - NULL, NULL); + struct kvm_follow_pfn kfp = { + .slot = slot, + .gfn = gfn, + .flags = FOLL_WRITE, + }; + return kvm_follow_pfn(&kfp); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot); kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, true, false, NULL, true, - NULL, NULL); + struct kvm_follow_pfn kfp = { + .slot = slot, + .gfn = gfn, + .flags = FOLL_WRITE, + .atomic = true, + }; + return kvm_follow_pfn(&kfp); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic); diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h index ecefc7ec51af..9ba61fbb727c 100644 --- a/virt/kvm/kvm_mm.h +++ b/virt/kvm/kvm_mm.h @@ -20,8 +20,7 @@ #define KVM_MMU_UNLOCK(kvm) spin_unlock(&(kvm)->mmu_lock) #endif /* KVM_HAVE_MMU_RWLOCK */ -kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible, - bool *async, bool write_fault, bool *writable); +kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *foll); #ifdef CONFIG_HAVE_KVM_PFNCACHE void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 2d6aba677830..1fb21c2ced5d 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -144,6 +144,12 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT; void *new_khva = NULL; unsigned long mmu_seq; + struct kvm_follow_pfn kfp = { + .slot = gpc->memslot, + .gfn = gpa_to_gfn(gpc->gpa), + .flags = FOLL_WRITE, + .hva = gpc->uhva, + }; lockdep_assert_held(&gpc->refresh_lock); @@ -182,8 +188,8 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) cond_resched(); } - /* We always request a writeable mapping */ - new_pfn = hva_to_pfn(gpc->uhva, false, false, NULL, true, NULL); + /* We always request a writable mapping */ + new_pfn = hva_to_pfn(&kfp); if (is_error_noslot_pfn(new_pfn)) goto 
out_error;
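To make the shape of the new API concrete, here is a hypothetical caller written against kvm_follow_pfn() as it stands after this patch (before later patches in the series add FOLL_GET and the refcounted_page output). The wrapper name is made up for illustration; the struct fields and flags come from the diff above, and it mirrors how gfn_to_pfn_prot() is reimplemented in this patch.

static kvm_pfn_t example_gfn_to_writable_pfn(struct kvm *kvm, gfn_t gfn,
					     bool *writable)
{
	struct kvm_follow_pfn kfp = {
		.slot = gfn_to_memslot(kvm, gfn),
		.gfn = gfn,
		.flags = FOLL_WRITE,		/* this is a write access */
		.try_map_writable = false,
	};
	kvm_pfn_t pfn = kvm_follow_pfn(&kfp);

	/* Outputs are reported through the struct rather than out-pointers. */
	if (writable)
		*writable = kfp.writable;
	return pfn;	/* may be a KVM_PFN_ERR_* value on failure */
}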
From patchwork Thu Feb 29 02:57:55 2024
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 13576446

From: David Stevens
To: Sean Christopherson, Paolo Bonzini
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v11 4/8] KVM: mmu: Improve handling of non-refcounted pfns
Date: Thu, 29 Feb 2024 11:57:55 +0900
Message-ID: <20240229025759.1187910-5-stevensd@google.com>
In-Reply-To: <20240229025759.1187910-1-stevensd@google.com>
References: <20240229025759.1187910-1-stevensd@google.com>

From: David Stevens

KVM's handling of non-refcounted pfns has two problems:

 - pfns without struct pages can be accessed without the protection of a mmu notifier. This is unsafe because KVM cannot monitor or control the lifespan of such pfns, so it may continue to access the pfns after they are freed.
 - struct pages without refcounting (e.g. tail pages of non-compound higher order pages) cannot be used at all, as gfn_to_pfn does not provide enough information for callers to be able to avoid underflowing the refcount.

This patch extends the kvm_follow_pfn() API to properly handle these cases:

 - First, it adds FOLL_GET to the list of supported flags, to indicate whether or not the caller actually wants to take a refcount.
 - Second, it adds a guarded_by_mmu_notifier parameter that is used to avoid returning non-refcounted pages when the caller cannot safely use them.
 - Third, it adds an is_refcounted_page output parameter so that callers can tell whether or not a pfn has a struct page that needs to be passed to put_page.

Since callers need to be updated on a case-by-case basis to pay attention to is_refcounted_page, the new behavior of returning non-refcounted pages is opt-in via the allow_non_refcounted_struct_page parameter. Once all callers have been updated, this parameter should be removed.

The fact that non-refcounted pfns can no longer be accessed without mmu notifier protection by default is a breaking change. This patch provides a module parameter that system admins can use to re-enable the previous unsafe behavior when userspace is trusted not to migrate/free/etc non-refcounted pfns that are mapped into the guest. There is no timeline for updating everything in KVM to use mmu notifiers to alleviate the need for this module parameter.

Signed-off-by: David Stevens
---
 include/linux/kvm_host.h | 29 +++++++++++
 virt/kvm/kvm_main.c | 104 +++++++++++++++++++++++++--------------
 virt/kvm/pfncache.c | 3 +-
 3 files changed, 99 insertions(+), 37 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 290db5133c36..66516088bb0a 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1219,10 +1219,39 @@ struct kvm_follow_pfn { bool atomic; /* Try to create a writable mapping even for a read fault.
*/ bool try_map_writable; + /* + * Usage of the returned pfn will be guared by a mmu notifier. If + * FOLL_GET is not set, this must be true. + */ + bool guarded_by_mmu_notifier; + /* + * When false, do not return pfns for non-refcounted struct pages. + * + * This allows callers to continue to rely on the legacy behavior + * where pfs returned by gfn_to_pfn can be safely passed to + * kvm_release_pfn without worrying about corrupting the refcount of + * non-refcounted pages. + * + * Callers that opt into non-refcount struct pages need to track + * whether or not the returned pages are refcounted and avoid touching + * them when they are not. Some architectures may not have enough + * free space in PTEs to do this. + */ + bool allow_non_refcounted_struct_page; /* Outputs of kvm_follow_pfn */ hva_t hva; bool writable; + /* + * Non-NULL if the returned pfn is for a page with a valid refcount, + * NULL if the returned pfn has no struct page or if the struct page is + * not being refcounted (e.g. tail pages of non-compound higher order + * allocations from IO/PFNMAP mappings). + * + * NOTE: This will still be set if FOLL_GET is not specified, but the + * returned page will not have an elevated refcount. + */ + struct page *refcounted_page; }; kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 575756c9c5b0..984bcf8511e7 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -96,6 +96,13 @@ unsigned int halt_poll_ns_shrink; module_param(halt_poll_ns_shrink, uint, 0644); EXPORT_SYMBOL_GPL(halt_poll_ns_shrink); +/* + * Allow non-refcounted struct pages and non-struct page memory to + * be mapped without MMU notifier protection. + */ +static bool allow_unsafe_mappings; +module_param(allow_unsafe_mappings, bool, 0444); + /* * Ordering of locks: * @@ -2786,6 +2793,24 @@ static inline int check_user_page_hwpoison(unsigned long addr) return rc == -EHWPOISON; } +static kvm_pfn_t kvm_follow_refcounted_pfn(struct kvm_follow_pfn *kfp, + struct page *page) +{ + kvm_pfn_t pfn = page_to_pfn(page); + + /* + * FIXME: Ideally, KVM wouldn't pass FOLL_GET to gup() when the caller + * doesn't want to grab a reference, but gup() doesn't support getting + * just the pfn, i.e. FOLL_GET is effectively mandatory. If that ever + * changes, drop this and simply don't pass FOLL_GET to gup(). + */ + if (!(kfp->flags & FOLL_GET)) + put_page(page); + + kfp->refcounted_page = page; + return pfn; +} + /* * The fast path to get the writable pfn which will be stored in @pfn, * true indicates success, otherwise false is returned. 
It's also the @@ -2804,7 +2829,7 @@ static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn) return false; if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, page)) { - *pfn = page_to_pfn(page[0]); + *pfn = kvm_follow_refcounted_pfn(kfp, page[0]); kfp->writable = true; return true; } @@ -2851,7 +2876,7 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn) page = wpage; } } - *pfn = page_to_pfn(page); + *pfn = kvm_follow_refcounted_pfn(kfp, page); return npages; } @@ -2866,16 +2891,6 @@ static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault) return true; } -static int kvm_try_get_pfn(kvm_pfn_t pfn) -{ - struct page *page = kvm_pfn_to_refcounted_page(pfn); - - if (!page) - return 1; - - return get_page_unless_zero(page); -} - static int hva_to_pfn_remapped(struct vm_area_struct *vma, struct kvm_follow_pfn *kfp, kvm_pfn_t *p_pfn) { @@ -2884,6 +2899,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma, pte_t pte; spinlock_t *ptl; bool write_fault = kfp->flags & FOLL_WRITE; + struct page *page; int r; r = follow_pte(vma->vm_mm, kfp->hva, &ptep, &ptl); @@ -2908,37 +2924,40 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma, pte = ptep_get(ptep); + kfp->writable = pte_write(pte); + pfn = pte_pfn(pte); + + page = kvm_pfn_to_refcounted_page(pfn); + if (write_fault && !pte_write(pte)) { pfn = KVM_PFN_ERR_RO_FAULT; goto out; } - kfp->writable = pte_write(pte); - pfn = pte_pfn(pte); + if (!page) + goto out; /* - * Get a reference here because callers of *hva_to_pfn* and - * *gfn_to_pfn* ultimately call kvm_release_pfn_clean on the - * returned pfn. This is only needed if the VMA has VM_MIXEDMAP - * set, but the kvm_try_get_pfn/kvm_release_pfn_clean pair will - * simply do nothing for reserved pfns. - * - * Whoever called remap_pfn_range is also going to call e.g. - * unmap_mapping_range before the underlying pages are freed, - * causing a call to our MMU notifier. - * - * Certain IO or PFNMAP mappings can be backed with valid - * struct pages, but be allocated without refcounting e.g., - * tail pages of non-compound higher order allocations, which - * would then underflow the refcount when the caller does the - * required put_page. Don't allow those pages here. + * IO or PFNMAP mappings can be backed with valid struct pages but be + * allocated without refcounting. We need to detect that to make sure we + * only pass refcounted pages to kvm_follow_refcounted_pfn. 
*/ - if (!kvm_try_get_pfn(pfn)) - r = -EFAULT; + if (get_page_unless_zero(page)) + WARN_ON_ONCE(kvm_follow_refcounted_pfn(kfp, page) != pfn); out: pte_unmap_unlock(ptep, ptl); - *p_pfn = pfn; + + if (page && !kfp->refcounted_page && + !kfp->allow_non_refcounted_struct_page) { + r = -EFAULT; + } else if (!kfp->refcounted_page && + !kfp->guarded_by_mmu_notifier && + !allow_unsafe_mappings) { + r = -EFAULT; + } else { + *p_pfn = pfn; + } return r; } @@ -3004,6 +3023,11 @@ kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp) kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp) { kfp->writable = false; + kfp->refcounted_page = NULL; + + if (WARN_ON_ONCE(!(kfp->flags & FOLL_GET) && !kfp->guarded_by_mmu_notifier)) + return KVM_PFN_ERR_FAULT; + kfp->hva = __gfn_to_hva_many(kfp->slot, kfp->gfn, NULL, kfp->flags & FOLL_WRITE); @@ -3028,9 +3052,10 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, struct kvm_follow_pfn kfp = { .slot = slot, .gfn = gfn, - .flags = 0, + .flags = FOLL_GET, .atomic = atomic, .try_map_writable = !!writable, + .allow_non_refcounted_struct_page = false, }; if (write_fault) @@ -3060,8 +3085,9 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, struct kvm_follow_pfn kfp = { .slot = gfn_to_memslot(kvm, gfn), .gfn = gfn, - .flags = write_fault ? FOLL_WRITE : 0, + .flags = FOLL_GET | (write_fault ? FOLL_WRITE : 0), .try_map_writable = !!writable, + .allow_non_refcounted_struct_page = false, }; pfn = kvm_follow_pfn(&kfp); if (writable) @@ -3075,7 +3101,8 @@ kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn) struct kvm_follow_pfn kfp = { .slot = slot, .gfn = gfn, - .flags = FOLL_WRITE, + .flags = FOLL_GET | FOLL_WRITE, + .allow_non_refcounted_struct_page = false, }; return kvm_follow_pfn(&kfp); } @@ -3086,8 +3113,13 @@ kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gf struct kvm_follow_pfn kfp = { .slot = slot, .gfn = gfn, - .flags = FOLL_WRITE, + .flags = FOLL_GET | FOLL_WRITE, .atomic = true, + /* + * Setting atomic means __kvm_follow_pfn will never make it + * to hva_to_pfn_remapped, so this is vacuously true. 
+ */ + .allow_non_refcounted_struct_page = true, }; return kvm_follow_pfn(&kfp); } diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 1fb21c2ced5d..6e82062ea203 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -147,8 +147,9 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) struct kvm_follow_pfn kfp = { .slot = gpc->memslot, .gfn = gpa_to_gfn(gpc->gpa), - .flags = FOLL_WRITE, + .flags = FOLL_GET | FOLL_WRITE, .hva = gpc->uhva, + .allow_non_refcounted_struct_page = false, }; lockdep_assert_held(&gpc->refresh_lock);
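To summarize the new contract in code form, here is a hypothetical caller that opts in to allow_non_refcounted_struct_page and follows the rule described in the commit message above: drop a reference only when refcounted_page is set. This is a sketch for illustration, not a function added by the series; the name and the access placeholder are made up.

static void example_touch_gfn(struct kvm *kvm, gfn_t gfn)
{
	struct kvm_follow_pfn kfp = {
		.slot = gfn_to_memslot(kvm, gfn),
		.gfn = gfn,
		.flags = FOLL_GET | FOLL_WRITE,
		.allow_non_refcounted_struct_page = true,
	};
	kvm_pfn_t pfn = kvm_follow_pfn(&kfp);

	if (is_error_noslot_pfn(pfn))
		return;

	/* ... access the pfn under whatever protection applies ... */

	/* Only a refcounted page took a reference that must be returned. */
	if (kfp.refcounted_page)
		kvm_release_page_dirty(kfp.refcounted_page);
}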
From patchwork Thu Feb 29 02:57:56 2024
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 13576447

From: David Stevens
To: Sean Christopherson, Paolo Bonzini
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v11 5/8] KVM: Migrate kvm_vcpu_map() to kvm_follow_pfn()
Date: Thu, 29 Feb 2024 11:57:56 +0900
Message-ID: <20240229025759.1187910-6-stevensd@google.com>
In-Reply-To: <20240229025759.1187910-1-stevensd@google.com>
References: <20240229025759.1187910-1-stevensd@google.com>

From: David Stevens

Migrate kvm_vcpu_map() to kvm_follow_pfn(). Track is_refcounted_page so that kvm_vcpu_unmap() knows whether or not it needs to release the page.
Signed-off-by: David Stevens --- include/linux/kvm_host.h | 2 +- virt/kvm/kvm_main.c | 24 ++++++++++++++---------- 2 files changed, 15 insertions(+), 11 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 66516088bb0a..59dc9fbafc08 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -295,6 +295,7 @@ struct kvm_host_map { void *hva; kvm_pfn_t pfn; kvm_pfn_t gfn; + bool is_refcounted_page; }; /* @@ -1270,7 +1271,6 @@ void kvm_release_pfn_dirty(kvm_pfn_t pfn); void kvm_set_pfn_dirty(kvm_pfn_t pfn); void kvm_set_pfn_accessed(kvm_pfn_t pfn); -void kvm_release_pfn(kvm_pfn_t pfn, bool dirty); int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset, int len); int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 984bcf8511e7..17bf9fd6774e 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -3184,24 +3184,22 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn) } EXPORT_SYMBOL_GPL(gfn_to_page); -void kvm_release_pfn(kvm_pfn_t pfn, bool dirty) -{ - if (dirty) - kvm_release_pfn_dirty(pfn); - else - kvm_release_pfn_clean(pfn); -} - int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map) { kvm_pfn_t pfn; void *hva = NULL; struct page *page = KVM_UNMAPPED_PAGE; + struct kvm_follow_pfn kfp = { + .slot = gfn_to_memslot(vcpu->kvm, gfn), + .gfn = gfn, + .flags = FOLL_GET | FOLL_WRITE, + .allow_non_refcounted_struct_page = true, + }; if (!map) return -EINVAL; - pfn = gfn_to_pfn(vcpu->kvm, gfn); + pfn = kvm_follow_pfn(&kfp); if (is_error_noslot_pfn(pfn)) return -EINVAL; @@ -3221,6 +3219,7 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map) map->hva = hva; map->pfn = pfn; map->gfn = gfn; + map->is_refcounted_page = !!kfp.refcounted_page; return 0; } @@ -3244,7 +3243,12 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty) if (dirty) kvm_vcpu_mark_page_dirty(vcpu, map->gfn); - kvm_release_pfn(map->pfn, dirty); + if (map->is_refcounted_page) { + if (dirty) + kvm_release_page_dirty(map->page); + else + kvm_release_page_clean(map->page); + } map->hva = NULL; map->page = NULL;
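A usage sketch for the migrated map/unmap pair: the kvm_vcpu_map()/kvm_vcpu_unmap() signatures are taken from the diff above, while the wrapper function itself is hypothetical. After this patch, whether a reference is dropped at unmap time is driven by map->is_refcounted_page rather than by a pfn-to-page lookup.

static int example_write_byte(struct kvm_vcpu *vcpu, gfn_t gfn, u8 val)
{
	struct kvm_host_map map;

	if (kvm_vcpu_map(vcpu, gfn, &map))
		return -EFAULT;

	*(u8 *)map.hva = val;		/* access the mapped guest page */

	/*
	 * 'true' marks the page dirty; the page itself is released only if
	 * kvm_vcpu_map() recorded it as refcounted.
	 */
	kvm_vcpu_unmap(vcpu, &map, true);
	return 0;
}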
From patchwork Thu Feb 29 02:57:57 2024
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 13576448

From: David Stevens
To: Sean Christopherson, Paolo Bonzini
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v11 6/8] KVM: x86: Migrate to kvm_follow_pfn()
Date: Thu, 29 Feb 2024 11:57:57 +0900
Message-ID: <20240229025759.1187910-7-stevensd@google.com>
In-Reply-To: <20240229025759.1187910-1-stevensd@google.com>
References: <20240229025759.1187910-1-stevensd@google.com>

From: David Stevens

Migrate functions which need to be able to map non-refcounted struct pages to kvm_follow_pfn().
These functions are kvm_faultin_pfn() and reexecute_instruction(). The
former requires replacing the async in/out parameter with the FOLL_NOWAIT
flag and the KVM_PFN_ERR_NEEDS_IO return value (actually handling
non-refcounted pages is complicated, so that is left to a follow-up). The
latter is a straightforward refactor.

APIC-related callers do not need to migrate because KVM controls the
memslot, so it will always be regular memory. Prefetch-related callers do
not need to be migrated because atomic gfn_to_pfn() calls can never make
it to hva_to_pfn_remapped().

Signed-off-by: David Stevens
Reviewed-by: Maxim Levitsky
---
 arch/x86/kvm/mmu/mmu.c | 43 ++++++++++++++++++++++++++++++++----------
 arch/x86/kvm/x86.c     | 11 +++++++++--
 virt/kvm/kvm_main.c    | 11 ++++-------
 3 files changed, 46 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2d6cdeab1f8a..bbeb0f6783d7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4331,7 +4331,14 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
-	bool async;
+	struct kvm_follow_pfn kfp = {
+		.slot = slot,
+		.gfn = fault->gfn,
+		.flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
+		.try_map_writable = true,
+		.guarded_by_mmu_notifier = true,
+		.allow_non_refcounted_struct_page = false,
+	};
 
 	/*
 	 * Retry the page fault if the gfn hit a memslot that is being deleted
@@ -4368,12 +4375,20 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (fault->is_private)
 		return kvm_faultin_pfn_private(vcpu, fault);
 
-	async = false;
-	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
-					  fault->write, &fault->map_writable,
-					  &fault->hva);
-	if (!async)
-		return RET_PF_CONTINUE; /* *pfn has correct page already */
+	kfp.flags |= FOLL_NOWAIT;
+	fault->pfn = kvm_follow_pfn(&kfp);
+
+	if (!is_error_noslot_pfn(fault->pfn))
+		goto success;
+
+	/*
+	 * If kvm_follow_pfn() failed because I/O is needed to fault in the
+	 * page, then either set up an asynchronous #PF to do the I/O, or if
+	 * doing an async #PF isn't possible, retry kvm_follow_pfn() with
+	 * I/O allowed. All other failures are fatal, i.e. retrying won't help.
+	 */
+	if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
+		return RET_PF_CONTINUE;
 
 	if (!fault->prefetch && kvm_can_do_async_pf(vcpu)) {
 		trace_kvm_try_async_get_page(fault->addr, fault->gfn);
@@ -4391,9 +4406,17 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * to wait for IO.  Note, gup always bails if it is unable to quickly
 	 * get a page and a fatal signal, i.e. SIGKILL, is pending.
	 */
-	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, true, NULL,
-					  fault->write, &fault->map_writable,
-					  &fault->hva);
+	kfp.flags |= FOLL_INTERRUPTIBLE;
+	kfp.flags &= ~FOLL_NOWAIT;
+	fault->pfn = kvm_follow_pfn(&kfp);
+
+	if (!is_error_noslot_pfn(fault->pfn))
+		goto success;
+
+	return RET_PF_CONTINUE;
+success:
+	fault->hva = kfp.hva;
+	fault->map_writable = kfp.writable;
 	return RET_PF_CONTINUE;
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 363b1c080205..f4a20e9bc7a6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8747,6 +8747,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 {
 	gpa_t gpa = cr2_or_gpa;
 	kvm_pfn_t pfn;
+	struct kvm_follow_pfn kfp;
 
 	if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF))
 		return false;
@@ -8776,7 +8777,13 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	 * retry instruction -> write #PF -> emulation fail -> retry
 	 * instruction -> ...
 	 */
-	pfn = gfn_to_pfn(vcpu->kvm, gpa_to_gfn(gpa));
+	kfp = (struct kvm_follow_pfn) {
+		.slot = gfn_to_memslot(vcpu->kvm, gpa_to_gfn(gpa)),
+		.gfn = gpa_to_gfn(gpa),
+		.flags = FOLL_GET | FOLL_WRITE,
+		.allow_non_refcounted_struct_page = true,
+	};
+	pfn = kvm_follow_pfn(&kfp);
 
 	/*
 	 * If the instruction failed on the error pfn, it can not be fixed,
@@ -8785,7 +8792,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	if (is_error_noslot_pfn(pfn))
 		return false;
 
-	kvm_release_pfn_clean(pfn);
+	kvm_release_page_clean(kfp.refcounted_page);
 
 	/* The instructions are well-emulated on direct mmu. */
 	if (vcpu->arch.mmu->root_role.direct) {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 17bf9fd6774e..24e2269339cb 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3293,6 +3293,9 @@ void kvm_release_page_clean(struct page *page)
 {
 	WARN_ON(is_error_page(page));
 
+	if (!page)
+		return;
+
 	kvm_set_page_accessed(page);
 	put_page(page);
 }
@@ -3300,16 +3303,10 @@ EXPORT_SYMBOL_GPL(kvm_release_page_clean);
 
 void kvm_release_pfn_clean(kvm_pfn_t pfn)
 {
-	struct page *page;
-
 	if (is_error_noslot_pfn(pfn))
 		return;
 
-	page = kvm_pfn_to_refcounted_page(pfn);
-	if (!page)
-		return;
-
-	kvm_release_page_clean(page);
+	kvm_release_page_clean(kvm_pfn_to_refcounted_page(pfn));
 }
 EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
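The calling convention both converted sites now follow, reduced to a sketch.
The struct fields, kvm_follow_pfn() and the NULL-tolerant
kvm_release_page_clean() are taken from the hunks above; the function name
and its probe-style semantics are invented for illustration.

	/*
	 * Illustrative sketch, not part of the patch: resolve a gfn with
	 * kvm_follow_pfn() and drop the reference that FOLL_GET took.
	 * kfp.refcounted_page is NULL when the pfn is not backed by a
	 * refcounted struct page, which kvm_release_page_clean() now
	 * tolerates.
	 */
	static bool example_gfn_is_mapped_writable(struct kvm_vcpu *vcpu, gfn_t gfn)
	{
		struct kvm_follow_pfn kfp = {
			.slot = gfn_to_memslot(vcpu->kvm, gfn),
			.gfn = gfn,
			.flags = FOLL_GET | FOLL_WRITE,
			.allow_non_refcounted_struct_page = true,
		};
		kvm_pfn_t pfn = kvm_follow_pfn(&kfp);

		if (is_error_noslot_pfn(pfn))
			return false;

		kvm_release_page_clean(kfp.refcounted_page);
		return true;
	}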
From patchwork Thu Feb 29 02:57:58 2024
From: David Stevens
To: Sean Christopherson, Paolo Bonzini
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, David Stevens
Subject: [PATCH v11 7/8] KVM: x86/mmu: Track if sptes refer to refcounted pages
Date: Thu, 29 Feb 2024 11:57:58 +0900
Message-ID: <20240229025759.1187910-8-stevensd@google.com>
In-Reply-To: <20240229025759.1187910-1-stevensd@google.com>
References: <20240229025759.1187910-1-stevensd@google.com>
From: David Stevens

Use one of the unused bits in EPT sptes to track whether or not an spte
refers to a struct page that has a valid refcount, in preparation for
adding support for mapping non-refcounted pages into guests. The new bit
is used to avoid triggering a page_count() == 0 warning and to avoid
touching A/D bits of unknown usage.

Non-EPT sptes don't have any free bits to use, so this tracking is not
possible when TDP is disabled or on 32-bit x86.

Signed-off-by: David Stevens
---
 arch/x86/kvm/mmu/mmu.c         | 47 ++++++++++++++++++++--------------
 arch/x86/kvm/mmu/paging_tmpl.h |  5 ++--
 arch/x86/kvm/mmu/spte.c        |  5 +++-
 arch/x86/kvm/mmu/spte.h        | 16 +++++++++++-
 arch/x86/kvm/mmu/tdp_mmu.c     | 21 ++++++++-------
 include/linux/kvm_host.h       |  3 +++
 virt/kvm/kvm_main.c            |  6 +++--
 7 files changed, 69 insertions(+), 34 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bbeb0f6783d7..4936a8c5829b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -541,12 +541,14 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
 
 	if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte)) {
 		flush = true;
-		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
+		if (is_refcounted_page_spte(old_spte))
+			kvm_set_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));
 	}
 
 	if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)) {
 		flush = true;
-		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
+		if (is_refcounted_page_spte(old_spte))
+			kvm_set_page_dirty(pfn_to_page(spte_to_pfn(old_spte)));
 	}
 
 	return flush;
@@ -578,20 +580,23 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 
 	pfn = spte_to_pfn(old_spte);
 
-	/*
-	 * KVM doesn't hold a reference to any pages mapped into the guest, and
-	 * instead uses the mmu_notifier to ensure that KVM unmaps any pages
-	 * before they are reclaimed.  Sanity check that, if the pfn is backed
-	 * by a refcounted page, the refcount is elevated.
-	 */
-	page = kvm_pfn_to_refcounted_page(pfn);
-	WARN_ON_ONCE(page && !page_count(page));
+	if (is_refcounted_page_spte(old_spte)) {
+		/*
+		 * KVM doesn't hold a reference to any pages mapped into the
+		 * guest, and instead uses the mmu_notifier to ensure that KVM
+		 * unmaps any pages before they are reclaimed.  Sanity check
+		 * that, if the pfn is backed by a refcounted page, the
+		 * refcount is elevated.
+		 */
+		page = kvm_pfn_to_refcounted_page(pfn);
+		WARN_ON_ONCE(!page || !page_count(page));
 
-	if (is_accessed_spte(old_spte))
-		kvm_set_pfn_accessed(pfn);
+		if (is_accessed_spte(old_spte))
+			kvm_set_page_accessed(pfn_to_page(pfn));
 
-	if (is_dirty_spte(old_spte))
-		kvm_set_pfn_dirty(pfn);
+		if (is_dirty_spte(old_spte))
+			kvm_set_page_dirty(pfn_to_page(pfn));
+	}
 
 	return old_spte;
 }
@@ -627,8 +632,8 @@ static bool mmu_spte_age(u64 *sptep)
 	 * Capture the dirty status of the page, so that it doesn't get
 	 * lost when the SPTE is marked for access tracking.
	 */
-	if (is_writable_pte(spte))
-		kvm_set_pfn_dirty(spte_to_pfn(spte));
+	if (is_writable_pte(spte) && is_refcounted_page_spte(spte))
+		kvm_set_page_dirty(pfn_to_page(spte_to_pfn(spte)));
 
 	spte = mark_spte_for_access_track(spte);
 	mmu_spte_update_no_track(sptep, spte);
@@ -1267,8 +1272,8 @@ static bool spte_wrprot_for_clear_dirty(u64 *sptep)
 {
 	bool was_writable = test_and_clear_bit(PT_WRITABLE_SHIFT,
 					       (unsigned long *)sptep);
-	if (was_writable && !spte_ad_enabled(*sptep))
-		kvm_set_pfn_dirty(spte_to_pfn(*sptep));
+	if (was_writable && !spte_ad_enabled(*sptep) && is_refcounted_page_spte(*sptep))
+		kvm_set_page_dirty(pfn_to_page(spte_to_pfn(*sptep)));
 
 	return was_writable;
 }
@@ -2946,7 +2951,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	}
 
 	wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
-			   true, host_writable, &spte);
+			   true, host_writable, true, &spte);
 
 	if (*sptep == spte) {
 		ret = RET_PF_SPURIOUS;
@@ -5999,6 +6004,10 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 
 #ifdef CONFIG_X86_64
 	tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled;
+
+	/* The SPTE_MMU_PAGE_REFCOUNTED bit is only available with EPT. */
+	if (enable_tdp)
+		shadow_refcounted_mask = SPTE_MMU_PAGE_REFCOUNTED;
 #endif
 	/*
 	 * max_huge_page_level reflects KVM's MMU capabilities irrespective
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 4d4e98fe4f35..c965f77ac4d5 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -902,7 +902,7 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
  */
 static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int i)
 {
-	bool host_writable;
+	bool host_writable, is_refcounted;
 	gpa_t first_pte_gpa;
 	u64 *sptep, spte;
 	struct kvm_memory_slot *slot;
@@ -959,10 +959,11 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
 	sptep = &sp->spt[i];
 	spte = *sptep;
 	host_writable = spte & shadow_host_writable_mask;
+	is_refcounted = is_refcounted_page_spte(spte);
 	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	make_spte(vcpu, sp, slot, pte_access, gfn,
 		  spte_to_pfn(spte), spte, true, false,
-		  host_writable, &spte);
+		  host_writable, is_refcounted, &spte);
 
 	return mmu_spte_update(sptep, spte);
 }
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4a599130e9c9..e4a458b7e185 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -39,6 +39,7 @@ u64 __read_mostly shadow_memtype_mask;
 u64 __read_mostly shadow_me_value;
 u64 __read_mostly shadow_me_mask;
 u64 __read_mostly shadow_acc_track_mask;
+u64 __read_mostly shadow_refcounted_mask;
 
 u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
 u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
@@ -138,7 +139,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       const struct kvm_memory_slot *slot,
 	       unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
 	       u64 old_spte, bool prefetch, bool can_unsync,
-	       bool host_writable, u64 *new_spte)
+	       bool host_writable, bool is_refcounted, u64 *new_spte)
 {
 	int level = sp->role.level;
 	u64 spte = SPTE_MMU_PRESENT_MASK;
@@ -188,6 +189,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 
 	if (level > PG_LEVEL_4K)
 		spte |= PT_PAGE_SIZE_MASK;
+	if (is_refcounted)
+		spte |= shadow_refcounted_mask;
 
 	if (shadow_memtype_mask)
 		spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index a129951c9a88..6bf0069d8db6
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -96,6 +96,13 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
 /* Defined only to keep the above static asserts readable. */
 #undef SHADOW_ACC_TRACK_SAVED_MASK
 
+/*
+ * Indicates that the SPTE refers to a page with a valid refcount.  Only
+ * available for TDP SPTEs, since bits 62:52 are reserved for PAE paging,
+ * including NPT PAE.
+ */
+#define SPTE_MMU_PAGE_REFCOUNTED	BIT_ULL(59)
+
 /*
  * Due to limited space in PTEs, the MMIO generation is a 19 bit subset of
  * the memslots generation and is derived as follows:
@@ -345,6 +352,13 @@ static inline bool is_dirty_spte(u64 spte)
 	return dirty_mask ? spte & dirty_mask : spte & PT_WRITABLE_MASK;
 }
 
+extern u64 __read_mostly shadow_refcounted_mask;
+
+static inline bool is_refcounted_page_spte(u64 spte)
+{
+	return !shadow_refcounted_mask || (spte & shadow_refcounted_mask);
+}
+
 static inline u64 get_rsvd_bits(struct rsvd_bits_validate *rsvd_check, u64 pte,
 				int level)
 {
@@ -475,7 +489,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       const struct kvm_memory_slot *slot,
 	       unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
 	       u64 old_spte, bool prefetch, bool can_unsync,
-	       bool host_writable, u64 *new_spte);
+	       bool host_writable, bool is_refcounted, u64 *new_spte);
 u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte,
 			      union kvm_mmu_page_role role, int index);
 u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6ae19b4ee5b1..ee497fb78d90 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -414,6 +414,7 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	bool was_leaf = was_present && is_last_spte(old_spte, level);
 	bool is_leaf = is_present && is_last_spte(new_spte, level);
 	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
+	bool is_refcounted = is_refcounted_page_spte(old_spte);
 
 	WARN_ON_ONCE(level > PT64_ROOT_MAX_LEVEL);
 	WARN_ON_ONCE(level < PG_LEVEL_4K);
@@ -478,9 +479,9 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	if (is_leaf != was_leaf)
 		kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1);
-	if (was_leaf && is_dirty_spte(old_spte) &&
+	if (was_leaf && is_dirty_spte(old_spte) && is_refcounted &&
 	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
-		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
+		kvm_set_page_dirty(pfn_to_page(spte_to_pfn(old_spte)));
 
 	/*
 	 * Recursively handle child PTs if the change removed a subtree from
@@ -492,9 +493,9 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	    (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
 		handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
 
-	if (was_leaf && is_accessed_spte(old_spte) &&
+	if (was_leaf && is_accessed_spte(old_spte) && is_refcounted &&
 	    (!is_present || !is_accessed_spte(new_spte) || pfn_changed))
-		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
+		kvm_set_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));
 }
 
 /*
@@ -956,8 +957,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
 	else
 		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
-				   fault->pfn, iter->old_spte, fault->prefetch, true,
-				   fault->map_writable, &new_spte);
+				   fault->pfn, iter->old_spte, fault->prefetch, true,
+				   fault->map_writable, true, &new_spte);
 
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
@@ -1178,8 +1179,9 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
 		 * Capture the dirty status of the page, so that it doesn't get
 		 * lost when the SPTE is marked for access tracking.
 		 */
-		if (is_writable_pte(iter->old_spte))
-			kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
+		if (is_writable_pte(iter->old_spte) &&
+		    is_refcounted_page_spte(iter->old_spte))
+			kvm_set_page_dirty(pfn_to_page(spte_to_pfn(iter->old_spte)));
 
 		new_spte = mark_spte_for_access_track(iter->old_spte);
 		iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
@@ -1602,7 +1604,8 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 		trace_kvm_tdp_mmu_spte_changed(iter.as_id, iter.gfn, iter.level,
 					       iter.old_spte,
 					       iter.old_spte & ~dbit);
-		kvm_set_pfn_dirty(spte_to_pfn(iter.old_spte));
+		if (is_refcounted_page_spte(iter.old_spte))
+			kvm_set_page_dirty(pfn_to_page(spte_to_pfn(iter.old_spte)));
 	}
 
 	rcu_read_unlock();
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 59dc9fbafc08..d19a418df04b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1211,6 +1211,9 @@ unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn,
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
 
+void kvm_set_page_accessed(struct page *page);
+void kvm_set_page_dirty(struct page *page);
+
 struct kvm_follow_pfn {
 	const struct kvm_memory_slot *slot;
 	gfn_t gfn;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 24e2269339cb..235c92830cdc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3277,17 +3277,19 @@ static bool kvm_is_ad_tracked_page(struct page *page)
 	return !PageReserved(page);
 }
 
-static void kvm_set_page_dirty(struct page *page)
+void kvm_set_page_dirty(struct page *page)
 {
 	if (kvm_is_ad_tracked_page(page))
 		SetPageDirty(page);
 }
+EXPORT_SYMBOL_GPL(kvm_set_page_dirty);
 
-static void kvm_set_page_accessed(struct page *page)
+void kvm_set_page_accessed(struct page *page)
 {
 	if (kvm_is_ad_tracked_page(page))
 		mark_page_accessed(page);
 }
+EXPORT_SYMBOL_GPL(kvm_set_page_accessed);
 
 void kvm_release_page_clean(struct page *page)
 {
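The transformation this patch repeats at every accessed/dirty update site,
shown in isolation. The wrapper below is illustrative only; the helpers are
the ones added here. When shadow_refcounted_mask is zero (TDP disabled,
32-bit x86), is_refcounted_page_spte() returns true for every spte, so
behaviour there is unchanged.

	/*
	 * Illustrative sketch, not part of the patch: how each A/D update
	 * site is converted.  The struct page is only touched when the
	 * SPTE is known to refer to a refcounted page.
	 */
	static void example_mark_old_spte_dirty(u64 old_spte)
	{
		if (is_dirty_spte(old_spte) && is_refcounted_page_spte(old_spte))
			kvm_set_page_dirty(pfn_to_page(spte_to_pfn(old_spte)));
	}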
From patchwork Thu Feb 29 02:57:59 2024
From: David Stevens
To: Sean Christopherson, Paolo Bonzini
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, David Stevens
Subject: [PATCH v11 8/8] KVM: x86/mmu: Handle non-refcounted pages
Date: Thu, 29 Feb 2024 11:57:59 +0900
Message-ID: <20240229025759.1187910-9-stevensd@google.com>
In-Reply-To: <20240229025759.1187910-1-stevensd@google.com>
References: <20240229025759.1187910-1-stevensd@google.com>

From: David Stevens

Handle non-refcounted pages in __kvm_faultin_pfn(). This allows the host
to map memory into the guest that is backed by non-refcounted struct
pages, for example the tail pages of higher-order non-compound pages
allocated by the amdgpu driver via ttm_pool_alloc_page().

Signed-off-by: David Stevens
---
 arch/x86/kvm/mmu/mmu.c          | 24 +++++++++++++++++-------
 arch/x86/kvm/mmu/mmu_internal.h |  2 ++
 arch/x86/kvm/mmu/paging_tmpl.h  |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      |  3 ++-
 include/linux/kvm_host.h        |  6 ++++--
 virt/kvm/guest_memfd.c          |  8 ++++----
 virt/kvm/kvm_main.c             | 10 ++++++++--
 7 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4936a8c5829b..f9046912bb43 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2924,6 +2924,11 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	bool host_writable = !fault || fault->map_writable;
 	bool prefetch = !fault || fault->prefetch;
 	bool write_fault = fault && fault->write;
+	/*
+	 * Prefetching uses gfn_to_page_many_atomic, which never gets
+	 * non-refcounted pages.
+	 */
+	bool is_refcounted = !fault || !!fault->accessed_page;
 
 	if (unlikely(is_noslot_pfn(pfn))) {
 		vcpu->stat.pf_mmio_spte_created++;
@@ -2951,7 +2956,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	}
 
 	wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
-			   true, host_writable, true, &spte);
+			   true, host_writable, is_refcounted, &spte);
 
 	if (*sptep == spte) {
 		ret = RET_PF_SPURIOUS;
@@ -4319,8 +4324,8 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 		return -EFAULT;
 	}
 
-	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
-			     &max_order);
+	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn,
+			     &fault->pfn, &fault->accessed_page, &max_order);
 	if (r) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
 		return r;
@@ -4330,6 +4335,9 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 					     fault->max_level);
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
 
+	/* kvm_gmem_get_pfn takes a refcount, but accessed_page doesn't need it. */
+	put_page(fault->accessed_page);
+
 	return RET_PF_CONTINUE;
 }
 
@@ -4339,10 +4347,10 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	struct kvm_follow_pfn kfp = {
 		.slot = slot,
 		.gfn = fault->gfn,
-		.flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
+		.flags = fault->write ? FOLL_WRITE : 0,
 		.try_map_writable = true,
 		.guarded_by_mmu_notifier = true,
-		.allow_non_refcounted_struct_page = false,
+		.allow_non_refcounted_struct_page = shadow_refcounted_mask,
 	};
 
 	/*
@@ -4359,6 +4367,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		fault->slot = NULL;
 		fault->pfn = KVM_PFN_NOSLOT;
 		fault->map_writable = false;
+		fault->accessed_page = NULL;
 		return RET_PF_CONTINUE;
 	}
 	/*
@@ -4422,6 +4431,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 success:
 	fault->hva = kfp.hva;
 	fault->map_writable = kfp.writable;
+	fault->accessed_page = kfp.refcounted_page;
 	return RET_PF_CONTINUE;
 }
 
@@ -4510,8 +4520,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	r = direct_map(vcpu, fault);
 
 out_unlock:
+	kvm_set_page_accessed(fault->accessed_page);
 	write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
 	return r;
 }
 
@@ -4586,8 +4596,8 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
 	r = kvm_tdp_mmu_map(vcpu, fault);
 
 out_unlock:
+	kvm_set_page_accessed(fault->accessed_page);
 	read_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
 	return r;
 }
 #endif
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 0669a8a668ca..0b05183600af 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -240,6 +240,8 @@ struct kvm_page_fault {
 	kvm_pfn_t pfn;
 	hva_t hva;
 	bool map_writable;
+	/* Does NOT have an elevated refcount */
+	struct page *accessed_page;
 
 	/*
 	 * Indicates the guest is trying to write a gfn that contains one or
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index c965f77ac4d5..b39dce802394 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -847,8 +847,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	r = FNAME(fetch)(vcpu, fault, &walker);
 
 out_unlock:
+	kvm_set_page_accessed(fault->accessed_page);
 	write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
 	return r;
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index ee497fb78d90..0524be7c0796 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -958,7 +958,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	else
 		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
 				   fault->pfn, iter->old_spte, fault->prefetch, true,
-				   fault->map_writable, true, &new_spte);
+				   fault->map_writable, !!fault->accessed_page,
+				   &new_spte);
 
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d19a418df04b..ea34eae6cfa4 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2426,11 +2426,13 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 
 #ifdef CONFIG_KVM_PRIVATE_MEM
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
-		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order);
+		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
+		     int *max_order);
 #else
 static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn,
-				   kvm_pfn_t *pfn, int *max_order)
+				   kvm_pfn_t *pfn, struct page **page,
+				   int *max_order)
 {
 	KVM_BUG_ON(1, kvm);
 	return -EIO;
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 0f4e0cf4f158..dabcca2ecc37 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -483,12 +483,12 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
 }
 
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
-		     gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
+		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
+		     int *max_order)
 {
 	pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
 	struct kvm_gmem *gmem;
 	struct folio *folio;
-	struct page *page;
 	struct file *file;
 	int r;
 
@@ -514,9 +514,9 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		goto out_unlock;
 	}
 
-	page = folio_file_page(folio, index);
+	*page = folio_file_page(folio, index);
 
-	*pfn = page_to_pfn(page);
+	*pfn = page_to_pfn(*page);
 	if (max_order)
 		*max_order = 0;
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 235c92830cdc..1f5d2a1e63a9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3284,11 +3284,17 @@ void kvm_set_page_dirty(struct page *page)
 }
 EXPORT_SYMBOL_GPL(kvm_set_page_dirty);
 
-void kvm_set_page_accessed(struct page *page)
+static void __kvm_set_page_accessed(struct page *page)
 {
 	if (kvm_is_ad_tracked_page(page))
 		mark_page_accessed(page);
 }
+
+void kvm_set_page_accessed(struct page *page)
+{
+	if (page)
+		__kvm_set_page_accessed(page);
+}
 EXPORT_SYMBOL_GPL(kvm_set_page_accessed);
 
 void kvm_release_page_clean(struct page *page)
@@ -3298,7 +3304,7 @@ void kvm_release_page_clean(struct page *page)
 	if (!page)
 		return;
 
-	kvm_set_page_accessed(page);
+	__kvm_set_page_accessed(page);
 	put_page(page);
 }
 EXPORT_SYMBOL_GPL(kvm_release_page_clean);
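A condensed view of the new fault-path contract: no reference is taken on
the faulted-in page, fault->accessed_page is NULL for non-refcounted (or
no-slot) mappings, and the page is marked accessed while mmu_lock is still
held instead of being released afterwards. The function below is an
illustrative sketch stitched together from helpers in this series, not code
from the patch.

	/*
	 * Illustrative sketch, not part of the patch: the shape that the
	 * page fault handlers take after this change.  There is nothing
	 * to put because no reference is held; kvm_set_page_accessed()
	 * simply ignores a NULL accessed_page.
	 */
	static int example_page_fault(struct kvm_vcpu *vcpu,
				      struct kvm_page_fault *fault)
	{
		int r = __kvm_faultin_pfn(vcpu, fault);	/* fills pfn/hva/accessed_page */

		if (r != RET_PF_CONTINUE)
			return r;

		write_lock(&vcpu->kvm->mmu_lock);
		/* ... install the mapping, e.g. direct_map(vcpu, fault) ... */
		kvm_set_page_accessed(fault->accessed_page);
		write_unlock(&vcpu->kvm->mmu_lock);
		return r;
	}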