From patchwork Fri Jul 10 19:48:39 2020
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11657343
From: Ralph Campbell
CC: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
 "Bharata B Rao", Shuah Khan, Andrew Morton, Ralph Campbell
Subject: [PATCH v2 1/2] mm/migrate: optimize migrate_vma_setup() for holes
Date: Fri, 10 Jul 2020 12:48:39 -0700
Message-ID: <20200710194840.7602-2-rcampbell@nvidia.com>
In-Reply-To: <20200710194840.7602-1-rcampbell@nvidia.com>
References: <20200710194840.7602-1-rcampbell@nvidia.com>

When migrating system memory to device private memory, if the source
address range is a valid VMA range and there is no memory or a zero page,
the source PFN array is marked as valid but with no PFN.

This lets the device driver allocate private memory and clear it, then
insert the new device private struct page into the CPU's page tables when
migrate_vma_pages() is called. However, migrate_vma_pages() only inserts
the new page if the VMA is an anonymous range, so there is no point in
telling the device driver to allocate device private memory and then not
migrate the page. Instead, mark the source PFN array entries as not
migrating to avoid this overhead.
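For context (not part of the patch itself), this is roughly the driver-side
loop that benefits: a minimal sketch assuming a hypothetical device-private
allocator my_device_alloc_page() and a range of at most 64 pages; the rest
follows the standard migrate_vma_setup()/migrate_vma_pages()/
migrate_vma_finalize() flow.

#include <linux/migrate.h>
#include <linux/mm.h>

static void my_migrate_range(struct vm_area_struct *vma,
			     unsigned long start, unsigned long end)
{
	unsigned long src_pfns[64] = { 0 };
	unsigned long dst_pfns[64] = { 0 };
	struct migrate_vma args = {
		.vma   = vma,
		.start = start,
		.end   = end,
		.src   = src_pfns,
		.dst   = dst_pfns,
	};
	unsigned long i;

	if (migrate_vma_setup(&args))
		return;

	for (i = 0; i < args.npages; i++) {
		struct page *dpage;

		/*
		 * With this patch, holes in non-anonymous VMAs no longer
		 * have MIGRATE_PFN_MIGRATE set, so the driver skips them
		 * here instead of allocating device memory that
		 * migrate_vma_pages() would never insert.
		 */
		if (!(args.src[i] & MIGRATE_PFN_MIGRATE))
			continue;

		dpage = my_device_alloc_page();	/* hypothetical allocator */
		if (!dpage)
			continue;

		lock_page(dpage);
		args.dst[i] = migrate_pfn(page_to_pfn(dpage)) |
			      MIGRATE_PFN_LOCKED;
	}

	migrate_vma_pages(&args);
	/* ... copy source data into the pages that actually migrated ... */
	migrate_vma_finalize(&args);
}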
Signed-off-by: Ralph Campbell
---
 mm/migrate.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index b0125c082549..ec00b7a6ea2a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2205,6 +2205,16 @@ static int migrate_vma_collect_hole(unsigned long start,
 	struct migrate_vma *migrate = walk->private;
 	unsigned long addr;
 
+	/* Only allow populating anonymous memory. */
+	if (!vma_is_anonymous(walk->vma)) {
+		for (addr = start; addr < end; addr += PAGE_SIZE) {
+			migrate->src[migrate->npages] = 0;
+			migrate->dst[migrate->npages] = 0;
+			migrate->npages++;
+		}
+		return 0;
+	}
+
 	for (addr = start; addr < end; addr += PAGE_SIZE) {
 		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
 		migrate->dst[migrate->npages] = 0;
@@ -2297,8 +2307,10 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		pte = *ptep;
 
 		if (pte_none(pte)) {
-			mpfn = MIGRATE_PFN_MIGRATE;
-			migrate->cpages++;
+			if (vma_is_anonymous(vma)) {
+				mpfn = MIGRATE_PFN_MIGRATE;
+				migrate->cpages++;
+			}
 			goto next;
 		}
 

From patchwork Fri Jul 10 19:48:40 2020
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11657347
From: Ralph Campbell
CC: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
 "Bharata B Rao", Shuah Khan, Andrew Morton, Ralph Campbell
Subject: [PATCH v2 2/2] mm/migrate: add migrate-shared test for migrate_vma_*()
Date: Fri, 10 Jul 2020 12:48:40 -0700
Message-ID: <20200710194840.7602-3-rcampbell@nvidia.com>
In-Reply-To: <20200710194840.7602-1-rcampbell@nvidia.com>
References: <20200710194840.7602-1-rcampbell@nvidia.com>

Add a migrate_vma_*() self test for mmap(MAP_SHARED) to verify that
!vma_is_anonymous() ranges won't be migrated.

Signed-off-by: Ralph Campbell
---
 tools/testing/selftests/vm/hmm-tests.c | 35 ++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
index 79db22604019..e83d3ab37697 100644
--- a/tools/testing/selftests/vm/hmm-tests.c
+++ b/tools/testing/selftests/vm/hmm-tests.c
@@ -931,6 +931,41 @@ TEST_F(hmm, migrate_fault)
 	hmm_buffer_free(buffer);
 }
 
+/*
+ * Migrate anonymous shared memory to device private memory.
+ */
+TEST_F(hmm, migrate_shared)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_SHARED | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Migrate memory to device. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_MIGRATE, buffer, npages);
+	ASSERT_EQ(ret, -ENOENT);
+
+	hmm_buffer_free(buffer);
+}
+
 /*
  * Try to migrate various memory types to device private memory.
  */