From patchwork Tue May 26 06:22:06 2020
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11569947
From: John Hubbard <jhubbard@nvidia.com>
To: LKML
Cc: Souptick Joarder, John Hubbard, Ingo Molnar, Borislav Petkov,
    Thomas Gleixner, Paolo Bonzini, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    "H. Peter Anvin", x86@kernel.org, kvm@vger.kernel.org
Subject: [PATCH 1/2] KVM: SVM: fix sev_pin_memory()'s use of get_user_pages_fast()
Date: Mon, 25 May 2020 23:22:06 -0700
Message-ID: <20200526062207.1360225-2-jhubbard@nvidia.com>
In-Reply-To: <20200526062207.1360225-1-jhubbard@nvidia.com>
References: <20200526062207.1360225-1-jhubbard@nvidia.com>

There are two problems in sev_pin_memory():

1) The return value of get_user_pages_fast() is stored in an unsigned
   long, although the declared return type is int. This will not cause
   any symptoms, but it is misleading. Fix this by changing the type of
   npinned to int.

2) The number of pages passed into get_user_pages_fast() is stored in
   an unsigned long, even though get_user_pages_fast() accepts an int.
   This means that it is possible to silently overflow the number of
   pages.

   Fix this by adding a WARN_ON_ONCE() and an early error return. The
   npages variable is left as an unsigned long for convenience in
   checking for overflow.
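(Editorial aside, not part of the patch: the silent overflow described in
item 2) comes from C's implicit conversion when an unsigned long argument is
passed to an int parameter. The stand-alone user-space sketch below uses a
made-up helper, demo_gup(), standing in for get_user_pages_fast()'s int
page-count parameter; the exact wrapped value is implementation-defined.)

    #include <limits.h>
    #include <stdio.h>

    /* Hypothetical stand-in for a callee that takes an int page count. */
    static int demo_gup(int nr_pages)
    {
            return nr_pages;
    }

    int main(void)
    {
            unsigned long npages = (unsigned long)INT_MAX + 2;

            /* The caller asks for ~2^31 pages; the callee sees a truncated,
             * typically negative, count instead of an error. */
            printf("requested %lu, callee saw %d\n", npages, demo_gup(npages));
            return 0;
    }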
Fixes: 89c505809052 ("KVM: SVM: Add support for KVM_SEV_LAUNCH_UPDATE_DATA command")
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Thomas Gleixner
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vitaly Kuznetsov
Cc: Wanpeng Li
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: H. Peter Anvin
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Signed-off-by: John Hubbard
Reviewed-by: Vitaly Kuznetsov
---
 arch/x86/kvm/svm/sev.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 89f7f3aebd31..9693db1af57c 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -313,7 +313,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
 				    int write)
 {
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
-	unsigned long npages, npinned, size;
+	unsigned long npages, size;
+	int npinned;
 	unsigned long locked, lock_limit;
 	struct page **pages;
 	unsigned long first, last;
@@ -333,6 +334,9 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
 		return NULL;
 	}
 
+	if (WARN_ON_ONCE(npages > INT_MAX))
+		return NULL;
+
 	/* Avoid using vmalloc for smaller buffers. */
 	size = npages * sizeof(struct page *);
 	if (size > PAGE_SIZE)
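(Editorial aside: to put the new WARN_ON_ONCE() threshold in perspective,
with 4 KiB pages an int page count stops fitting once a single pinning
request covers 8 TiB of user address space, which is large but not
impossible for a user-supplied length. A quick back-of-the-envelope check,
assuming a 4096-byte page size:)

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned long page_size = 4096; /* typical x86 PAGE_SIZE */

            /* Smallest region whose page count no longer fits in an int. */
            unsigned long limit = ((unsigned long)INT_MAX + 1) * page_size;

            printf("int page counts overflow at %lu bytes (%lu TiB)\n",
                   limit, limit >> 40);
            return 0;
    }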

From patchwork Tue May 26 06:22:07 2020
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11569949
From: John Hubbard <jhubbard@nvidia.com>
To: LKML
Cc: Souptick Joarder, John Hubbard, Ingo Molnar, Borislav Petkov,
    Thomas Gleixner, Paolo Bonzini, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    "H. Peter Anvin", x86@kernel.org, kvm@vger.kernel.org
Subject: [PATCH 2/2] KVM: SVM: convert get_user_pages() --> pin_user_pages()
Date: Mon, 25 May 2020 23:22:07 -0700
Message-ID: <20200526062207.1360225-3-jhubbard@nvidia.com>
In-Reply-To: <20200526062207.1360225-1-jhubbard@nvidia.com>
References: <20200526062207.1360225-1-jhubbard@nvidia.com>

This code was using get_user_pages*(), in a "Case 2" scenario
(DMA/RDMA), using the categorization from [1]. That means that it's
time to convert the get_user_pages*() + put_page() calls to
pin_user_pages*() + unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small
part of fixing a long-standing disconnect between pinning pages and
file systems' use of those pages.

[1] Documentation/core-api/pin_user_pages.rst
[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/

Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Thomas Gleixner
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vitaly Kuznetsov
Cc: Wanpeng Li
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: H. Peter Anvin
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Signed-off-by: John Hubbard
---
 arch/x86/kvm/svm/sev.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 9693db1af57c..a83f2e73bcbb 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -349,7 +349,7 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
 		return NULL;
 
 	/* Pin the user virtual address. */
-	npinned = get_user_pages_fast(uaddr, npages, write ? FOLL_WRITE : 0, pages);
+	npinned = pin_user_pages_fast(uaddr, npages, write ? FOLL_WRITE : 0, pages);
 	if (npinned != npages) {
 		pr_err("SEV: Failure locking %lu pages.\n", npages);
 		goto err;
@@ -362,7 +362,7 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
 
 err:
 	if (npinned > 0)
-		release_pages(pages, npinned);
+		unpin_user_pages(pages, npinned);
 
 	kvfree(pages);
 	return NULL;
@@ -373,7 +373,7 @@ static void sev_unpin_memory(struct kvm *kvm, struct page **pages,
 {
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
 
-	release_pages(pages, npages);
+	unpin_user_pages(pages, npages);
 	kvfree(pages);
 	sev->pages_locked -= npages;
 }
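(Editorial aside, not part of the patch: the pairing rule this conversion
follows is that pages obtained via pin_user_pages*() must be released with
unpin_user_pages*(), never with put_page() or release_pages(). Below is a
simplified, kernel-style sketch of that pattern using a hypothetical helper
demo_pin_region(); allocation, locking, and accounting are omitted.)

    #include <linux/mm.h>
    #include <linux/errno.h>

    /* Hypothetical example: pin a user region, use it, then unpin it. */
    static int demo_pin_region(unsigned long uaddr, int npages,
                               struct page **pages, bool write)
    {
            int npinned;

            npinned = pin_user_pages_fast(uaddr, npages,
                                          write ? FOLL_WRITE : 0, pages);
            if (npinned < 0)
                    return npinned;

            if (npinned != npages) {
                    /* Partial pin: undo with the matching unpin call. */
                    unpin_user_pages(pages, npinned);
                    return -ENOMEM;
            }

            /* ... hand the pinned pages to the device here ... */

            unpin_user_pages(pages, npinned);
            return 0;
    }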