From patchwork Mon Jun 7 07:58:53 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 12302775
From: Alistair Popple <apopple@nvidia.com>
Subject: [PATCH v10 08/10] mm: Selftests for exclusive device memory
Date: Mon, 7 Jun 2021 17:58:53 +1000
Message-ID: <20210607075855.5084-9-apopple@nvidia.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210607075855.5084-1-apopple@nvidia.com>
References: <20210607075855.5084-1-apopple@nvidia.com>
Add selftests for the new exclusive device memory functionality: the
test_hmm driver gains HMM_DMIRROR_EXCLUSIVE and
HMM_DMIRROR_CHECK_EXCLUSIVE ioctls, and hmm-tests gains cases covering
basic exclusive faulting, interaction with mprotect() and
copy-on-write.

Signed-off-by: Alistair Popple
Acked-by: Jason Gunthorpe
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
---
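A note for reviewers unfamiliar with the dmirror UAPI (illustration
only, not part of the patch): the flow the two new ioctls enable looks
roughly like this from userspace. Error handling is trimmed; it assumes
the test_hmm module is loaded so that a /dev/hmm_dmirror0 node exists,
as the selftest fixture below expects, and it borrows struct
hmm_dmirror_cmd and the ioctl numbers from lib/test_hmm_uapi.h.

#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#include "test_hmm_uapi.h"	/* private UAPI header from lib/ in the kernel tree */

int main(void)
{
	unsigned long npages = 4;
	unsigned long size = npages * sysconf(_SC_PAGESIZE);
	struct hmm_dmirror_cmd cmd = { 0 };
	void *buf;
	int fd;

	fd = open("/dev/hmm_dmirror0", O_RDWR);	/* node created by test_hmm */
	if (fd < 0)
		return 1;

	/* Ordinary anonymous memory the "device" will get exclusive access to. */
	buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	cmd.addr = (uintptr_t)buf;		/* range to make exclusive */
	cmd.ptr = (uintptr_t)malloc(size);	/* device-side read lands here */
	cmd.npages = npages;

	/* Mark the pages for exclusive (atomic) device access. */
	if (ioctl(fd, HMM_DMIRROR_EXCLUSIVE, &cmd) < 0)
		return 1;

	/* Any CPU access now faults the pages back and revokes the tag... */
	memset(buf, 0, size);

	/* ...so this succeeds; it fails with EPERM while pages stay exclusive. */
	if (ioctl(fd, HMM_DMIRROR_CHECK_EXCLUSIVE, &cmd) < 0)
		return 1;

	close(fd);
	return 0;
}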
 lib/test_hmm.c                         | 124 +++++++++++++++++++
 lib/test_hmm_uapi.h                    |   2 +
 tools/testing/selftests/vm/hmm-tests.c | 158 +++++++++++++++++++++++++
 3 files changed, 284 insertions(+)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 5c9f5a020c1d..305a9d9e2b4c 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -25,6 +25,7 @@
 #include <linux/swapops.h>
 #include <linux/sched/mm.h>
 #include <linux/platform_device.h>
+#include <linux/rmap.h>
 
 #include "test_hmm_uapi.h"
 
@@ -46,6 +47,7 @@ struct dmirror_bounce {
 	unsigned long		cpages;
 };
 
+#define DPT_XA_TAG_ATOMIC 1UL
 #define DPT_XA_TAG_WRITE 3UL
 
 /*
@@ -619,6 +621,54 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 	}
 }
 
+static int dmirror_check_atomic(struct dmirror *dmirror, unsigned long start,
+				unsigned long end)
+{
+	unsigned long pfn;
+
+	for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++) {
+		void *entry;
+		struct page *page;
+
+		entry = xa_load(&dmirror->pt, pfn);
+		page = xa_untag_pointer(entry);
+		if (xa_pointer_tag(entry) == DPT_XA_TAG_ATOMIC)
+			return -EPERM;
+	}
+
+	return 0;
+}
+
+static int dmirror_atomic_map(unsigned long start, unsigned long end,
+			      struct page **pages, struct dmirror *dmirror)
+{
+	unsigned long pfn, mapped = 0;
+	int i;
+
+	/* Map the migrated pages into the device's page tables. */
+	mutex_lock(&dmirror->mutex);
+
+	for (i = 0, pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++, i++) {
+		void *entry;
+
+		if (!pages[i])
+			continue;
+
+		entry = pages[i];
+		entry = xa_tag_pointer(entry, DPT_XA_TAG_ATOMIC);
+		entry = xa_store(&dmirror->pt, pfn, entry, GFP_ATOMIC);
+		if (xa_is_err(entry)) {
+			mutex_unlock(&dmirror->mutex);
+			return xa_err(entry);
+		}
+
+		mapped++;
+	}
+
+	mutex_unlock(&dmirror->mutex);
+	return mapped;
+}
+
 static int dmirror_migrate_finalize_and_map(struct migrate_vma *args,
 					    struct dmirror *dmirror)
 {
@@ -661,6 +711,71 @@ static int dmirror_migrate_finalize_and_map(struct migrate_vma *args,
 	return 0;
 }
 
+static int dmirror_exclusive(struct dmirror *dmirror,
+			     struct hmm_dmirror_cmd *cmd)
+{
+	unsigned long start, end, addr;
+	unsigned long size = cmd->npages << PAGE_SHIFT;
+	struct mm_struct *mm = dmirror->notifier.mm;
+	struct page *pages[64];
+	struct dmirror_bounce bounce;
+	unsigned long next;
+	int ret;
+
+	start = cmd->addr;
+	end = start + size;
+	if (end < start)
+		return -EINVAL;
+
+	/* Since the mm is for the mirrored process, get a reference first. */
+	if (!mmget_not_zero(mm))
+		return -EINVAL;
+
+	mmap_read_lock(mm);
+	for (addr = start; addr < end; addr = next) {
+		int i, mapped;
+
+		if (end < addr + (ARRAY_SIZE(pages) << PAGE_SHIFT))
+			next = end;
+		else
+			next = addr + (ARRAY_SIZE(pages) << PAGE_SHIFT);
+
+		ret = make_device_exclusive_range(mm, addr, next, pages, NULL);
+		mapped = dmirror_atomic_map(addr, next, pages, dmirror);
+		for (i = 0; i < ret; i++) {
+			if (pages[i]) {
+				unlock_page(pages[i]);
+				put_page(pages[i]);
+			}
+		}
+
+		if (addr + (mapped << PAGE_SHIFT) < next) {
+			mmap_read_unlock(mm);
+			mmput(mm);
+			return -EBUSY;
+		}
+	}
+	mmap_read_unlock(mm);
+	mmput(mm);
+
+	/* Return the migrated data for verification. */
+	ret = dmirror_bounce_init(&bounce, start, size);
+	if (ret)
+		return ret;
+	mutex_lock(&dmirror->mutex);
+	ret = dmirror_do_read(dmirror, start, end, &bounce);
+	mutex_unlock(&dmirror->mutex);
+	if (ret == 0) {
+		if (copy_to_user(u64_to_user_ptr(cmd->ptr), bounce.ptr,
+				 bounce.size))
+			ret = -EFAULT;
+	}
+
+	cmd->cpages = bounce.cpages;
+	dmirror_bounce_fini(&bounce);
+	return ret;
+}
+
 static int dmirror_migrate(struct dmirror *dmirror,
 			   struct hmm_dmirror_cmd *cmd)
 {
@@ -949,6 +1064,15 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
 		ret = dmirror_migrate(dmirror, &cmd);
 		break;
 
+	case HMM_DMIRROR_EXCLUSIVE:
+		ret = dmirror_exclusive(dmirror, &cmd);
+		break;
+
+	case HMM_DMIRROR_CHECK_EXCLUSIVE:
+		ret = dmirror_check_atomic(dmirror, cmd.addr,
+					cmd.addr + (cmd.npages << PAGE_SHIFT));
+		break;
+
 	case HMM_DMIRROR_SNAPSHOT:
 		ret = dmirror_snapshot(dmirror, &cmd);
 		break;
diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h
index 670b4ef2a5b6..f14dea5dcd06 100644
--- a/lib/test_hmm_uapi.h
+++ b/lib/test_hmm_uapi.h
@@ -33,6 +33,8 @@ struct hmm_dmirror_cmd {
 #define HMM_DMIRROR_WRITE		_IOWR('H', 0x01, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_MIGRATE		_IOWR('H', 0x02, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_SNAPSHOT		_IOWR('H', 0x03, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_EXCLUSIVE		_IOWR('H', 0x04, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_CHECK_EXCLUSIVE	_IOWR('H', 0x05, struct hmm_dmirror_cmd)
 
 /*
  * Values returned in hmm_dmirror_cmd.ptr for HMM_DMIRROR_SNAPSHOT.
diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
index 5d1ac691b9f4..864f126ffd78 100644
--- a/tools/testing/selftests/vm/hmm-tests.c
+++ b/tools/testing/selftests/vm/hmm-tests.c
@@ -1485,4 +1485,162 @@ TEST_F(hmm2, double_map)
 	hmm_buffer_free(buffer);
 }
 
+/*
+ * Basic check of exclusive faulting.
+ */
+TEST_F(hmm, exclusive)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	int *ptr;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Map memory exclusively for device access. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_EXCLUSIVE, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	/* Fault pages back to system memory and check them. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i]++, i);
+
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i+1);
+
+	/* Check atomic access revoked */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_CHECK_EXCLUSIVE, buffer, npages);
+	ASSERT_EQ(ret, 0);
+
+	hmm_buffer_free(buffer);
+}
+
+TEST_F(hmm, exclusive_mprotect)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	int *ptr;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Map memory exclusively for device access. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_EXCLUSIVE, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	ret = mprotect(buffer->ptr, size, PROT_READ);
+	ASSERT_EQ(ret, 0);
+
+	/* Simulate a device writing system memory. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_WRITE, buffer, npages);
+	ASSERT_EQ(ret, -EPERM);
+
+	hmm_buffer_free(buffer);
+}
+
+/*
+ * Check copy-on-write works.
+ */
+TEST_F(hmm, exclusive_cow)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	int *ptr;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Map memory exclusively for device access. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_EXCLUSIVE, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	fork();
+
+	/* Fault pages back to system memory and check them. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i]++, i);
+
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i+1);
+
+	hmm_buffer_free(buffer);
+}
+
 TEST_HARNESS_MAIN
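
A closing note for context (again illustration, not part of the patch):
the kernel-side pattern these tests exercise, and that a real driver
would follow before performing device atomics on system memory, reduces
to the sketch below. dev_map_page() is a hypothetical stand-in for the
driver's own device page-table update; as in dmirror_exclusive() above,
a real caller also chunks larger ranges to the size of its pages[]
array and verifies that every page was actually mapped.

/* Sketch only: make a (small) range exclusive to the device. */
static int dev_take_exclusive(struct mm_struct *mm, unsigned long start,
			      unsigned long end)
{
	struct page *pages[64];
	int i, ret;

	/* Assumes end - start covers at most ARRAY_SIZE(pages) pages. */
	mmap_read_lock(mm);
	ret = make_device_exclusive_range(mm, start, end, pages, NULL);
	for (i = 0; i < ret; i++) {
		if (!pages[i])
			continue;		/* could not be made exclusive */
		dev_map_page(pages[i]);		/* hypothetical device PTE update */
		unlock_page(pages[i]);		/* pages come back locked... */
		put_page(pages[i]);		/* ...and with a reference held */
	}
	mmap_read_unlock(mm);
	return ret;
}

To run the new tests, build the vm selftests (make -C
tools/testing/selftests TARGETS=vm) and load the test_hmm module first;
the test_hmm.sh helper in the same directory can handle the module
setup.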