From patchwork Mon Oct 9 06:42:26 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13412983
Date: Sun, 8 Oct 2023 23:42:26 -0700
In-Reply-To: <20231009064230.2952396-1-surenb@google.com>
References: <20231009064230.2952396-1-surenb@google.com>
Message-ID: <20231009064230.2952396-2-surenb@google.com>
Subject: [PATCH v3 1/3] mm/rmap: support move to different root anon_vma in
 folio_move_anon_rmap()
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, shuah@kernel.org,
 aarcange@redhat.com, lokeshgidra@google.com, peterx@redhat.com,
 david@redhat.com, hughd@google.com, mhocko@suse.com,
 axelrasmussen@google.com, rppt@kernel.org, willy@infradead.org,
 Liam.Howlett@oracle.com, jannh@google.com, zhangpeng362@huawei.com,
 bgeffon@google.com, kaleshsingh@google.com, ngeoffray@google.com,
 jdduke@google.com, surenb@google.com, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, kernel-team@android.com
From: Andrea Arcangeli

Until now, folio_move_anon_rmap() was only used to move a folio to a
different anon_vma after fork(), whereby the root anon_vma stayed
unchanged. For that, it was sufficient to hold the folio lock when
calling folio_move_anon_rmap().

However, we want to make use of folio_move_anon_rmap() to move folios
between VMAs that have a different root anon_vma. As folio_referenced()
performs an RMAP walk without holding the folio lock but only the
anon_vma lock in read mode, holding the folio lock alone is
insufficient.

When moving to an anon_vma with a different root anon_vma, we'll have to
hold both the folio lock and the anon_vma lock in write mode.
Consequently, whenever folio_lock_anon_vma_read() succeeds in
read-locking the anon_vma, it has to re-check whether the mapping was
changed in the meantime; if it was, it has to retry.

Note that folio_move_anon_rmap() must only be called if the anon page is
exclusive to a process, and must not be called on KSM folios.

This is a preparation for UFFDIO_MOVE, which will hold the folio lock,
the anon_vma lock in write mode, and the mmap_lock in read mode.
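For illustration only (not part of this patch): a minimal sketch of the
lock ordering a mover such as UFFDIO_MOVE has to follow before re-homing
an exclusive anon folio under a different root anon_vma. The helper name
is hypothetical and error handling is omitted; the real implementation
comes with the UFFDIO_MOVE patch later in this series.

	/* Hypothetical helper sketching the locking described above. */
	static void move_exclusive_folio(struct folio *folio,
					 struct vm_area_struct *dst_vma,
					 unsigned long dst_addr)
	{
		struct anon_vma *src_anon_vma;

		folio_lock(folio);			/* excludes rmap walks that take the folio lock */
		src_anon_vma = folio_get_anon_vma(folio);
		anon_vma_lock_write(src_anon_vma);	/* excludes folio_referenced()-style walks */

		folio_move_anon_rmap(folio, dst_vma);	/* retarget folio->mapping to dst_vma's anon_vma */
		WRITE_ONCE(folio->index, linear_page_index(dst_vma, dst_addr));

		anon_vma_unlock_write(src_anon_vma);
		put_anon_vma(src_anon_vma);
		folio_unlock(folio);
	}

Readers of folio->mapping that only hold the anon_vma lock (such as
folio_referenced()) therefore have to re-validate folio->mapping after
acquiring it, which is exactly what this patch adds to
folio_lock_anon_vma_read().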
Signed-off-by: Andrea Arcangeli
Signed-off-by: Suren Baghdasaryan
---
 mm/rmap.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index c1f11c9dbe61..f9ddc50269d2 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -542,7 +542,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	struct anon_vma *root_anon_vma;
 	unsigned long anon_mapping;
 
+retry:
 	rcu_read_lock();
+retry_under_rcu:
 	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
 	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
 		goto out;
@@ -552,6 +554,16 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
 	root_anon_vma = READ_ONCE(anon_vma->root);
 	if (down_read_trylock(&root_anon_vma->rwsem)) {
+		/*
+		 * folio_move_anon_rmap() might have changed the anon_vma as we
+		 * might not hold the folio lock here.
+		 */
+		if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
+			     anon_mapping)) {
+			up_read(&root_anon_vma->rwsem);
+			goto retry_under_rcu;
+		}
+
 		/*
 		 * If the folio is still mapped, then this anon_vma is still
 		 * its anon_vma, and holding the mutex ensures that it will
@@ -586,6 +598,18 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	rcu_read_unlock();
 	anon_vma_lock_read(anon_vma);
 
+	/*
+	 * folio_move_anon_rmap() might have changed the anon_vma as we might
+	 * not hold the folio lock here.
+	 */
+	if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
+		     anon_mapping)) {
+		anon_vma_unlock_read(anon_vma);
+		put_anon_vma(anon_vma);
+		anon_vma = NULL;
+		goto retry;
+	}
+
 	if (atomic_dec_and_test(&anon_vma->refcount)) {
 		/*
 		 * Oops, we held the last refcount, release the lock

From patchwork Mon Oct 9 06:42:27 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13412984
Date: Sun, 8 Oct 2023 23:42:27 -0700
In-Reply-To: <20231009064230.2952396-1-surenb@google.com>
References: <20231009064230.2952396-1-surenb@google.com>
Message-ID: <20231009064230.2952396-3-surenb@google.com>
Subject: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, shuah@kernel.org,
 aarcange@redhat.com, lokeshgidra@google.com, peterx@redhat.com,
 david@redhat.com, hughd@google.com, mhocko@suse.com,
 axelrasmussen@google.com, rppt@kernel.org, willy@infradead.org,
 Liam.Howlett@oracle.com, jannh@google.com, zhangpeng362@huawei.com,
 bgeffon@google.com, kaleshsingh@google.com, ngeoffray@google.com,
 jdduke@google.com, surenb@google.com, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, kernel-team@android.com

From: Andrea Arcangeli

Implement the uABI of the UFFDIO_MOVE ioctl.

UFFDIO_COPY performs ~20% better than UFFDIO_MOVE when the application
needs pages to be allocated [1]. However, with UFFDIO_MOVE, if pages are
available (in userspace) for recycling, as is usually the case in heap
compaction algorithms, then we can avoid the page allocation and memcpy
(done by UFFDIO_COPY). Also, since the pages are recycled in userspace,
we avoid the need to release (via madvise) the pages back to the kernel
[2].

We see over a 40% reduction (on a Google Pixel 6 device) in the
compacting thread's completion time by using UFFDIO_MOVE vs. UFFDIO_COPY.
This was measured using a benchmark that emulates a heap compaction
implementation using userfaultfd (to allow concurrent accesses by
application threads). More details of the use case are explained in [2].

Furthermore, UFFDIO_MOVE enables moving swapped-out pages without
touching them within the same vma. Today this can only be done by
mremap, which however forces splitting the vma.

[1] https://lore.kernel.org/all/1425575884-2574-1-git-send-email-aarcange@redhat.com/
[2] https://lore.kernel.org/linux-mm/CA+EESO4uO84SSnBhArH4HvLNhaUQ5nZKNKXqxRCyjniNVjp0Aw@mail.gmail.com/

Update for the ioctl_userfaultfd(2) manpage:

   UFFDIO_MOVE
       (Since Linux xxx)  Move a contiguous memory chunk into the
       userfault registered range and optionally wake up the blocked
       thread. The source and destination addresses and the number of
       bytes to move are specified by the src, dst, and len fields of
       the uffdio_move structure pointed to by argp:

           struct uffdio_move {
               __u64 dst;    /* Destination of move */
               __u64 src;    /* Source of move */
               __u64 len;    /* Number of bytes to move */
               __u64 mode;   /* Flags controlling behavior of move */
               __s64 move;   /* Number of bytes moved, or negated error */
           };

       The following value may be bitwise ORed in mode to change the
       behavior of the UFFDIO_MOVE operation:

       UFFDIO_MOVE_MODE_DONTWAKE
              Do not wake up the thread that waits for page-fault
              resolution.

       UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES
              Allow holes in the source virtual range that is being
              moved. When not specified, the holes will result in an
              ENOENT error. When specified, the holes will be accounted
              as successfully moved memory. This is mostly useful to
              move hugepage aligned virtual regions without knowing if
              there are transparent hugepages in the regions or not,
              but preventing the risk of having to split the hugepage
              during the operation.

       The move field is used by the kernel to return the number of
       bytes that were actually moved, or an error (a negated errno-
       style value). If the value returned in move doesn't match the
       value that was specified in len, the operation fails with the
       error EAGAIN. The move field is output-only; it is not read by
       the UFFDIO_MOVE operation.

       The operation may fail for various reasons. Usually, remapping
       of pages that are not exclusive to the given process fails; once
       KSM has deduplicated pages or fork() has COW-shared pages with
       child processes, they are no longer exclusive. Further, the
       kernel might only perform lightweight checks for detecting
       whether the pages are exclusive, and return -EBUSY in case that
       check fails. To make the operation more likely to succeed, KSM
       should be disabled, fork() should be avoided or MADV_DONTFORK
       should be configured for the source VMA before fork().

       This ioctl(2) operation returns 0 on success. In this case, the
       entire area was moved. On error, -1 is returned and errno is set
       to indicate the error. Possible errors include:

       EAGAIN The number of bytes moved (i.e., the value returned in
              the move field) does not equal the value that was
              specified in the len field.

       EINVAL Either dst or len was not a multiple of the system page
              size, or the range specified by src and len or dst and
              len was invalid.

       EINVAL An invalid bit was specified in the mode field.

       ENOENT The source virtual memory range has unmapped holes and
              UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES is not set.

       EEXIST The destination virtual memory range is fully or
              partially mapped.

       EBUSY  The pages in the source virtual memory range are not
              exclusive to the process.
The kernel might only perform lightweight checks for detecting whether the pages are exclusive. To make the operation more likely to succeed, KSM should be disabled, fork() should be avoided or MADV_DONTFORK should be configured for the source virtual memory area before fork(). ENOMEM Allocating memory needed for the operation failed. ESRCH The faulting process has exited at the time of a UFFDIO_MOVE operation. Signed-off-by: Andrea Arcangeli Signed-off-by: Suren Baghdasaryan --- Documentation/admin-guide/mm/userfaultfd.rst | 3 + fs/userfaultfd.c | 63 ++ include/linux/rmap.h | 5 + include/linux/userfaultfd_k.h | 12 + include/uapi/linux/userfaultfd.h | 29 +- mm/huge_memory.c | 138 +++++ mm/khugepaged.c | 3 + mm/rmap.c | 6 + mm/userfaultfd.c | 602 +++++++++++++++++++ 9 files changed, 860 insertions(+), 1 deletion(-) diff --git a/Documentation/admin-guide/mm/userfaultfd.rst b/Documentation/admin-guide/mm/userfaultfd.rst index 203e26da5f92..e5cc8848dcb3 100644 --- a/Documentation/admin-guide/mm/userfaultfd.rst +++ b/Documentation/admin-guide/mm/userfaultfd.rst @@ -113,6 +113,9 @@ events, except page fault notifications, may be generated: areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating support for shmem virtual memory areas. +- ``UFFD_FEATURE_MOVE`` indicates that the kernel supports moving an + existing page contents from userspace. + The userland application should set the feature flags it intends to use when invoking the ``UFFDIO_API`` ioctl, to request that those features be enabled if supported. diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index a7c6ef764e63..ac52e0f99a69 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -2039,6 +2039,66 @@ static inline unsigned int uffd_ctx_features(__u64 user_features) return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED; } +static int userfaultfd_remap(struct userfaultfd_ctx *ctx, + unsigned long arg) +{ + __s64 ret; + struct uffdio_move uffdio_move; + struct uffdio_move __user *user_uffdio_move; + struct userfaultfd_wake_range range; + + user_uffdio_move = (struct uffdio_move __user *) arg; + + ret = -EAGAIN; + if (atomic_read(&ctx->mmap_changing)) + goto out; + + ret = -EFAULT; + if (copy_from_user(&uffdio_move, user_uffdio_move, + /* don't copy "remap" last field */ + sizeof(uffdio_move)-sizeof(__s64))) + goto out; + + ret = validate_range(ctx->mm, uffdio_move.dst, uffdio_move.len); + if (ret) + goto out; + + ret = validate_range(current->mm, uffdio_move.src, uffdio_move.len); + if (ret) + goto out; + + ret = -EINVAL; + if (uffdio_move.mode & ~(UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES| + UFFDIO_MOVE_MODE_DONTWAKE)) + goto out; + + if (mmget_not_zero(ctx->mm)) { + ret = remap_pages(ctx->mm, current->mm, + uffdio_move.dst, uffdio_move.src, + uffdio_move.len, uffdio_move.mode); + mmput(ctx->mm); + } else { + return -ESRCH; + } + + if (unlikely(put_user(ret, &user_uffdio_move->move))) + return -EFAULT; + if (ret < 0) + goto out; + + /* len == 0 would wake all */ + BUG_ON(!ret); + range.len = ret; + if (!(uffdio_move.mode & UFFDIO_MOVE_MODE_DONTWAKE)) { + range.start = uffdio_move.dst; + wake_userfault(ctx, &range); + } + ret = range.len == uffdio_move.len ? 
0 : -EAGAIN; + +out: + return ret; +} + /* * userland asks for a certain API version and we return which bits * and ioctl commands are implemented in this kernel for such API @@ -2131,6 +2191,9 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd, case UFFDIO_ZEROPAGE: ret = userfaultfd_zeropage(ctx, arg); break; + case UFFDIO_MOVE: + ret = userfaultfd_remap(ctx, arg); + break; case UFFDIO_WRITEPROTECT: ret = userfaultfd_writeprotect(ctx, arg); break; diff --git a/include/linux/rmap.h b/include/linux/rmap.h index b26fe858fd44..8034eda972e5 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -121,6 +121,11 @@ static inline void anon_vma_lock_write(struct anon_vma *anon_vma) down_write(&anon_vma->root->rwsem); } +static inline int anon_vma_trylock_write(struct anon_vma *anon_vma) +{ + return down_write_trylock(&anon_vma->root->rwsem); +} + static inline void anon_vma_unlock_write(struct anon_vma *anon_vma) { up_write(&anon_vma->root->rwsem); diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h index f2dc19f40d05..ce8d20b57e8c 100644 --- a/include/linux/userfaultfd_k.h +++ b/include/linux/userfaultfd_k.h @@ -93,6 +93,18 @@ extern int mwriteprotect_range(struct mm_struct *dst_mm, extern long uffd_wp_range(struct vm_area_struct *vma, unsigned long start, unsigned long len, bool enable_wp); +/* remap_pages */ +void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2); +void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2); +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm, + unsigned long dst_start, unsigned long src_start, + unsigned long len, __u64 flags); +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval, + struct vm_area_struct *dst_vma, + struct vm_area_struct *src_vma, + unsigned long dst_addr, unsigned long src_addr); + /* mm helpers */ static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma, struct vm_userfaultfd_ctx vm_ctx) diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h index 0dbc81015018..2841e4ea8f2c 100644 --- a/include/uapi/linux/userfaultfd.h +++ b/include/uapi/linux/userfaultfd.h @@ -41,7 +41,8 @@ UFFD_FEATURE_WP_HUGETLBFS_SHMEM | \ UFFD_FEATURE_WP_UNPOPULATED | \ UFFD_FEATURE_POISON | \ - UFFD_FEATURE_WP_ASYNC) + UFFD_FEATURE_WP_ASYNC | \ + UFFD_FEATURE_MOVE) #define UFFD_API_IOCTLS \ ((__u64)1 << _UFFDIO_REGISTER | \ (__u64)1 << _UFFDIO_UNREGISTER | \ @@ -50,6 +51,7 @@ ((__u64)1 << _UFFDIO_WAKE | \ (__u64)1 << _UFFDIO_COPY | \ (__u64)1 << _UFFDIO_ZEROPAGE | \ + (__u64)1 << _UFFDIO_MOVE | \ (__u64)1 << _UFFDIO_WRITEPROTECT | \ (__u64)1 << _UFFDIO_CONTINUE | \ (__u64)1 << _UFFDIO_POISON) @@ -73,6 +75,7 @@ #define _UFFDIO_WAKE (0x02) #define _UFFDIO_COPY (0x03) #define _UFFDIO_ZEROPAGE (0x04) +#define _UFFDIO_MOVE (0x05) #define _UFFDIO_WRITEPROTECT (0x06) #define _UFFDIO_CONTINUE (0x07) #define _UFFDIO_POISON (0x08) @@ -92,6 +95,8 @@ struct uffdio_copy) #define UFFDIO_ZEROPAGE _IOWR(UFFDIO, _UFFDIO_ZEROPAGE, \ struct uffdio_zeropage) +#define UFFDIO_MOVE _IOWR(UFFDIO, _UFFDIO_MOVE, \ + struct uffdio_move) #define UFFDIO_WRITEPROTECT _IOWR(UFFDIO, _UFFDIO_WRITEPROTECT, \ struct uffdio_writeprotect) #define UFFDIO_CONTINUE _IOWR(UFFDIO, _UFFDIO_CONTINUE, \ @@ -222,6 +227,9 @@ struct uffdio_api { * asynchronous mode is supported in which the write fault is * automatically resolved and write-protection is un-set. * It implies UFFD_FEATURE_WP_UNPOPULATED. 
+ * + * UFFD_FEATURE_MOVE indicates that the kernel supports moving an + * existing page contents from userspace. */ #define UFFD_FEATURE_PAGEFAULT_FLAG_WP (1<<0) #define UFFD_FEATURE_EVENT_FORK (1<<1) @@ -239,6 +247,7 @@ struct uffdio_api { #define UFFD_FEATURE_WP_UNPOPULATED (1<<13) #define UFFD_FEATURE_POISON (1<<14) #define UFFD_FEATURE_WP_ASYNC (1<<15) +#define UFFD_FEATURE_MOVE (1<<16) __u64 features; __u64 ioctls; @@ -347,6 +356,24 @@ struct uffdio_poison { __s64 updated; }; +struct uffdio_move { + __u64 dst; + __u64 src; + __u64 len; + /* + * Especially if used to atomically remove memory from the + * address space the wake on the dst range is not needed. + */ +#define UFFDIO_MOVE_MODE_DONTWAKE ((__u64)1<<0) +#define UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES ((__u64)1<<1) + __u64 mode; + /* + * "move" is written by the ioctl and must be at the end: the + * copy_from_user will not read the last 8 bytes. + */ + __s64 move; +}; + /* * Flags for the userfaultfd(2) system call itself. */ diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 9656be95a542..6fac5c3d66e6 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2086,6 +2086,144 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, return ret; } +#ifdef CONFIG_USERFAULTFD +/* + * The PT lock for src_pmd and the mmap_lock for reading are held by + * the caller, but it must return after releasing the + * page_table_lock. Just move the page from src_pmd to dst_pmd if possible. + * Return zero if succeeded in moving the page, -EAGAIN if it needs to be + * repeated by the caller, or other errors in case of failure. + */ +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval, + struct vm_area_struct *dst_vma, + struct vm_area_struct *src_vma, + unsigned long dst_addr, unsigned long src_addr) +{ + pmd_t _dst_pmd, src_pmdval; + struct page *src_page; + struct folio *src_folio; + struct anon_vma *src_anon_vma; + spinlock_t *src_ptl, *dst_ptl; + pgtable_t src_pgtable, dst_pgtable; + struct mmu_notifier_range range; + int err = 0; + + src_pmdval = *src_pmd; + src_ptl = pmd_lockptr(src_mm, src_pmd); + + lockdep_assert_held(src_ptl); + mmap_assert_locked(src_mm); + mmap_assert_locked(dst_mm); + + BUG_ON(!pmd_none(dst_pmdval)); + BUG_ON(src_addr & ~HPAGE_PMD_MASK); + BUG_ON(dst_addr & ~HPAGE_PMD_MASK); + + if (!pmd_trans_huge(src_pmdval)) { + spin_unlock(src_ptl); + if (is_pmd_migration_entry(src_pmdval)) { + pmd_migration_entry_wait(src_mm, &src_pmdval); + return -EAGAIN; + } + return -ENOENT; + } + + src_page = pmd_page(src_pmdval); + if (unlikely(!PageAnonExclusive(src_page))) { + spin_unlock(src_ptl); + return -EBUSY; + } + + src_folio = page_folio(src_page); + folio_get(src_folio); + spin_unlock(src_ptl); + + /* preallocate dst_pgtable if needed */ + if (dst_mm != src_mm) { + dst_pgtable = pte_alloc_one(dst_mm); + if (unlikely(!dst_pgtable)) { + err = -ENOMEM; + goto put_folio; + } + } else { + dst_pgtable = NULL; + } + + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr, + src_addr + HPAGE_PMD_SIZE); + mmu_notifier_invalidate_range_start(&range); + + folio_lock(src_folio); + + /* + * split_huge_page walks the anon_vma chain without the page + * lock. Serialize against it with the anon_vma lock, the page + * lock is not enough. 
+ */ + src_anon_vma = folio_get_anon_vma(src_folio); + if (!src_anon_vma) { + err = -EAGAIN; + goto unlock_folio; + } + anon_vma_lock_write(src_anon_vma); + + dst_ptl = pmd_lockptr(dst_mm, dst_pmd); + double_pt_lock(src_ptl, dst_ptl); + if (unlikely(!pmd_same(*src_pmd, src_pmdval) || + !pmd_same(*dst_pmd, dst_pmdval))) { + double_pt_unlock(src_ptl, dst_ptl); + err = -EAGAIN; + goto put_anon_vma; + } + if (!PageAnonExclusive(&src_folio->page)) { + double_pt_unlock(src_ptl, dst_ptl); + err = -EBUSY; + goto put_anon_vma; + } + + BUG_ON(!folio_test_head(src_folio)); + BUG_ON(!folio_test_anon(src_folio)); + + folio_move_anon_rmap(src_folio, dst_vma); + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr)); + + src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd); + _dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot); + _dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma); + set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd); + + src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd); + if (dst_pgtable) { + pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable); + pte_free(src_mm, src_pgtable); + dst_pgtable = NULL; + + mm_inc_nr_ptes(dst_mm); + mm_dec_nr_ptes(src_mm); + add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR); + add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR); + } else { + pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable); + } + double_pt_unlock(src_ptl, dst_ptl); + +put_anon_vma: + anon_vma_unlock_write(src_anon_vma); + put_anon_vma(src_anon_vma); +unlock_folio: + /* unblock rmap walks */ + folio_unlock(src_folio); + mmu_notifier_invalidate_range_end(&range); + if (dst_pgtable) + pte_free(dst_mm, dst_pgtable); +put_folio: + folio_put(src_folio); + + return err; +} +#endif /* CONFIG_USERFAULTFD */ + /* * Returns page table lock pointer if a given pmd maps a thp, NULL otherwise. * diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 2b5c0321d96b..0c1ee7172852 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1136,6 +1136,9 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address, * Prevent all access to pagetables with the exception of * gup_fast later handled by the ptep_clear_flush and the VM * handled by the anon_vma lock + PG_lock. + * + * UFFDIO_MOVE is prevented to race as well thanks to the + * mmap_lock. */ mmap_write_lock(mm); result = hugepage_vma_revalidate(mm, address, true, &vma, cc); diff --git a/mm/rmap.c b/mm/rmap.c index f9ddc50269d2..a5919cac9a08 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -490,6 +490,12 @@ void __init anon_vma_init(void) * page_remove_rmap() that the anon_vma pointer from page->mapping is valid * if there is a mapcount, we can dereference the anon_vma after observing * those. + * + * NOTE: the caller should normally hold folio lock when calling this. If + * not, the caller needs to double check the anon_vma didn't change after + * taking the anon_vma lock for either read or write (UFFDIO_MOVE can modify it + * concurrently without folio lock protection). See folio_lock_anon_vma_read() + * which has already covered that, and comment above remap_pages(). 
*/ struct anon_vma *folio_get_anon_vma(struct folio *folio) { diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index 96d9eae5c7cc..45ce1a8b8ab9 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -842,3 +842,605 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, mmap_read_unlock(dst_mm); return err; } + + +void double_pt_lock(spinlock_t *ptl1, + spinlock_t *ptl2) + __acquires(ptl1) + __acquires(ptl2) +{ + spinlock_t *ptl_tmp; + + if (ptl1 > ptl2) { + /* exchange ptl1 and ptl2 */ + ptl_tmp = ptl1; + ptl1 = ptl2; + ptl2 = ptl_tmp; + } + /* lock in virtual address order to avoid lock inversion */ + spin_lock(ptl1); + if (ptl1 != ptl2) + spin_lock_nested(ptl2, SINGLE_DEPTH_NESTING); + else + __acquire(ptl2); +} + +void double_pt_unlock(spinlock_t *ptl1, + spinlock_t *ptl2) + __releases(ptl1) + __releases(ptl2) +{ + spin_unlock(ptl1); + if (ptl1 != ptl2) + spin_unlock(ptl2); + else + __release(ptl2); +} + + +static int remap_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, + struct vm_area_struct *dst_vma, + struct vm_area_struct *src_vma, + unsigned long dst_addr, unsigned long src_addr, + pte_t *dst_pte, pte_t *src_pte, + pte_t orig_dst_pte, pte_t orig_src_pte, + spinlock_t *dst_ptl, spinlock_t *src_ptl, + struct folio *src_folio) +{ + double_pt_lock(dst_ptl, src_ptl); + + if (!pte_same(*src_pte, orig_src_pte) || + !pte_same(*dst_pte, orig_dst_pte)) { + double_pt_unlock(dst_ptl, src_ptl); + return -EAGAIN; + } + if (folio_test_large(src_folio) || + !PageAnonExclusive(&src_folio->page)) { + double_pt_unlock(dst_ptl, src_ptl); + return -EBUSY; + } + + BUG_ON(!folio_test_anon(src_folio)); + + folio_move_anon_rmap(src_folio, dst_vma); + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr)); + + orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte); + orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot); + orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma); + + set_pte_at(dst_mm, dst_addr, dst_pte, orig_dst_pte); + + if (dst_mm != src_mm) { + inc_mm_counter(dst_mm, MM_ANONPAGES); + dec_mm_counter(src_mm, MM_ANONPAGES); + } + + double_pt_unlock(dst_ptl, src_ptl); + + return 0; +} + +static int remap_swap_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, + unsigned long dst_addr, unsigned long src_addr, + pte_t *dst_pte, pte_t *src_pte, + pte_t orig_dst_pte, pte_t orig_src_pte, + spinlock_t *dst_ptl, spinlock_t *src_ptl) +{ + if (!pte_swp_exclusive(orig_src_pte)) + return -EBUSY; + + double_pt_lock(dst_ptl, src_ptl); + + if (!pte_same(*src_pte, orig_src_pte) || + !pte_same(*dst_pte, orig_dst_pte)) { + double_pt_unlock(dst_ptl, src_ptl); + return -EAGAIN; + } + + orig_src_pte = ptep_get_and_clear(src_mm, src_addr, src_pte); + set_pte_at(dst_mm, dst_addr, dst_pte, orig_src_pte); + + if (dst_mm != src_mm) { + inc_mm_counter(dst_mm, MM_SWAPENTS); + dec_mm_counter(src_mm, MM_SWAPENTS); + } + + double_pt_unlock(dst_ptl, src_ptl); + + return 0; +} + +/* + * The mmap_lock for reading is held by the caller. Just move the page + * from src_pmd to dst_pmd if possible, and return true if succeeded + * in moving the page. 
+ */ +static int remap_pages_pte(struct mm_struct *dst_mm, + struct mm_struct *src_mm, + pmd_t *dst_pmd, + pmd_t *src_pmd, + struct vm_area_struct *dst_vma, + struct vm_area_struct *src_vma, + unsigned long dst_addr, + unsigned long src_addr, + __u64 mode) +{ + swp_entry_t entry; + pte_t orig_src_pte, orig_dst_pte; + pte_t src_folio_pte; + spinlock_t *src_ptl, *dst_ptl; + pte_t *src_pte = NULL; + pte_t *dst_pte = NULL; + + struct folio *src_folio = NULL; + struct anon_vma *src_anon_vma = NULL; + struct mmu_notifier_range range; + int err = 0; + + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, + src_addr, src_addr + PAGE_SIZE); + mmu_notifier_invalidate_range_start(&range); +retry: + dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl); + + /* Retry if a huge pmd materialized from under us */ + if (unlikely(!dst_pte)) { + err = -EAGAIN; + goto out; + } + + src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl); + + /* + * We held the mmap_lock for reading so MADV_DONTNEED + * can zap transparent huge pages under us, or the + * transparent huge page fault can establish new + * transparent huge pages under us. + */ + if (unlikely(!src_pte)) { + err = -EAGAIN; + goto out; + } + + BUG_ON(pmd_none(*dst_pmd)); + BUG_ON(pmd_none(*src_pmd)); + BUG_ON(pmd_trans_huge(*dst_pmd)); + BUG_ON(pmd_trans_huge(*src_pmd)); + + spin_lock(dst_ptl); + orig_dst_pte = *dst_pte; + spin_unlock(dst_ptl); + if (!pte_none(orig_dst_pte)) { + err = -EEXIST; + goto out; + } + + spin_lock(src_ptl); + orig_src_pte = *src_pte; + spin_unlock(src_ptl); + if (pte_none(orig_src_pte)) { + if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES)) + err = -ENOENT; + else /* nothing to do to remap a hole */ + err = 0; + goto out; + } + + /* If PTE changed after we locked the folio them start over */ + if (src_folio && unlikely(!pte_same(src_folio_pte, orig_src_pte))) { + err = -EAGAIN; + goto out; + } + + if (pte_present(orig_src_pte)) { + /* + * Pin and lock both source folio and anon_vma. Since we are in + * RCU read section, we can't block, so on contention have to + * unmap the ptes, obtain the lock and retry. + */ + if (!src_folio) { + struct folio *folio; + + /* + * Pin the page while holding the lock to be sure the + * page isn't freed under us + */ + spin_lock(src_ptl); + if (!pte_same(orig_src_pte, *src_pte)) { + spin_unlock(src_ptl); + err = -EAGAIN; + goto out; + } + + folio = vm_normal_folio(src_vma, src_addr, orig_src_pte); + if (!folio || folio_test_large(folio) || + !PageAnonExclusive(&folio->page)) { + spin_unlock(src_ptl); + err = -EBUSY; + goto out; + } + + folio_get(folio); + src_folio = folio; + src_folio_pte = orig_src_pte; + spin_unlock(src_ptl); + + if (!folio_trylock(src_folio)) { + pte_unmap(&orig_src_pte); + pte_unmap(&orig_dst_pte); + src_pte = dst_pte = NULL; + /* now we can block and wait */ + folio_lock(src_folio); + goto retry; + } + } + + if (!src_anon_vma) { + /* + * folio_referenced walks the anon_vma chain + * without the folio lock. Serialize against it with + * the anon_vma lock, the folio lock is not enough. 
+ */ + src_anon_vma = folio_get_anon_vma(src_folio); + if (!src_anon_vma) { + /* page was unmapped from under us */ + err = -EAGAIN; + goto out; + } + if (!anon_vma_trylock_write(src_anon_vma)) { + pte_unmap(&orig_src_pte); + pte_unmap(&orig_dst_pte); + src_pte = dst_pte = NULL; + /* now we can block and wait */ + anon_vma_lock_write(src_anon_vma); + goto retry; + } + } + + err = remap_present_pte(dst_mm, src_mm, dst_vma, src_vma, + dst_addr, src_addr, dst_pte, src_pte, + orig_dst_pte, orig_src_pte, + dst_ptl, src_ptl, src_folio); + } else { + entry = pte_to_swp_entry(orig_src_pte); + if (non_swap_entry(entry)) { + if (is_migration_entry(entry)) { + pte_unmap(&orig_src_pte); + pte_unmap(&orig_dst_pte); + src_pte = dst_pte = NULL; + migration_entry_wait(src_mm, src_pmd, + src_addr); + err = -EAGAIN; + } else + err = -EFAULT; + goto out; + } + + err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr, + dst_pte, src_pte, + orig_dst_pte, orig_src_pte, + dst_ptl, src_ptl); + } + +out: + if (src_anon_vma) { + anon_vma_unlock_write(src_anon_vma); + put_anon_vma(src_anon_vma); + } + if (src_folio) { + folio_unlock(src_folio); + folio_put(src_folio); + } + if (dst_pte) + pte_unmap(dst_pte); + if (src_pte) + pte_unmap(src_pte); + mmu_notifier_invalidate_range_end(&range); + + return err; +} + +static int validate_remap_areas(struct vm_area_struct *src_vma, + struct vm_area_struct *dst_vma) +{ + /* Only allow remapping if both have the same access and protection */ + if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) || + pgprot_val(src_vma->vm_page_prot) != pgprot_val(dst_vma->vm_page_prot)) + return -EINVAL; + + /* Only allow remapping if both are mlocked or both aren't */ + if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED)) + return -EINVAL; + + if (!(src_vma->vm_flags & VM_WRITE) || !(dst_vma->vm_flags & VM_WRITE)) + return -EINVAL; + + /* + * Be strict and only allow remap_pages if either the src or + * dst range is registered in the userfaultfd to prevent + * userland errors going unnoticed. As far as the VM + * consistency is concerned, it would be perfectly safe to + * remove this check, but there's no useful usage for + * remap_pages ouside of userfaultfd registered ranges. This + * is after all why it is an ioctl belonging to the + * userfaultfd and not a syscall. + * + * Allow both vmas to be registered in the userfaultfd, just + * in case somebody finds a way to make such a case useful. + * Normally only one of the two vmas would be registered in + * the userfaultfd. + */ + if (!dst_vma->vm_userfaultfd_ctx.ctx && + !src_vma->vm_userfaultfd_ctx.ctx) + return -EINVAL; + + /* + * FIXME: only allow remapping across anonymous vmas, + * tmpfs should be added. + */ + if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma)) + return -EINVAL; + + /* + * Ensure the dst_vma has a anon_vma or this page + * would get a NULL anon_vma when moved in the + * dst_vma. + */ + if (unlikely(anon_vma_prepare(dst_vma))) + return -ENOMEM; + + return 0; +} + +/** + * remap_pages - remap arbitrary anonymous pages of an existing vma + * @dst_start: start of the destination virtual memory range + * @src_start: start of the source virtual memory range + * @len: length of the virtual memory range + * + * remap_pages() remaps arbitrary anonymous pages atomically in zero + * copy. It only works on non shared anonymous pages because those can + * be relocated without generating non linear anon_vmas in the rmap + * code. 
+ * + * It provides a zero copy mechanism to handle userspace page faults. + * The source vma pages should have mapcount == 1, which can be + * enforced by using madvise(MADV_DONTFORK) on src vma. + * + * The thread receiving the page during the userland page fault + * will receive the faulting page in the source vma through the network, + * storage or any other I/O device (MADV_DONTFORK in the source vma + * avoids remap_pages() to fail with -EBUSY if the process forks before + * remap_pages() is called), then it will call remap_pages() to map the + * page in the faulting address in the destination vma. + * + * This userfaultfd command works purely via pagetables, so it's the + * most efficient way to move physical non shared anonymous pages + * across different virtual addresses. Unlike mremap()/mmap()/munmap() + * it does not create any new vmas. The mapping in the destination + * address is atomic. + * + * It only works if the vma protection bits are identical from the + * source and destination vma. + * + * It can remap non shared anonymous pages within the same vma too. + * + * If the source virtual memory range has any unmapped holes, or if + * the destination virtual memory range is not a whole unmapped hole, + * remap_pages() will fail respectively with -ENOENT or -EEXIST. This + * provides a very strict behavior to avoid any chance of memory + * corruption going unnoticed if there are userland race conditions. + * Only one thread should resolve the userland page fault at any given + * time for any given faulting address. This means that if two threads + * try to both call remap_pages() on the same destination address at the + * same time, the second thread will get an explicit error from this + * command. + * + * The command retval will return "len" is successful. The command + * however can be interrupted by fatal signals or errors. If + * interrupted it will return the number of bytes successfully + * remapped before the interruption if any, or the negative error if + * none. It will never return zero. Either it will return an error or + * an amount of bytes successfully moved. If the retval reports a + * "short" remap, the remap_pages() command should be repeated by + * userland with src+retval, dst+reval, len-retval if it wants to know + * about the error that interrupted it. + * + * The UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES flag can be specified to + * prevent -ENOENT errors to materialize if there are holes in the + * source virtual range that is being remapped. The holes will be + * accounted as successfully remapped in the retval of the + * command. This is mostly useful to remap hugepage naturally aligned + * virtual regions without knowing if there are transparent hugepage + * in the regions or not, but preventing the risk of having to split + * the hugepmd during the remap. + * + * If there's any rmap walk that is taking the anon_vma locks without + * first obtaining the folio lock (the only current instance is + * folio_referenced), they will have to verify if the folio->mapping + * has changed after taking the anon_vma lock. If it changed they + * should release the lock and retry obtaining a new anon_vma, because + * it means the anon_vma was changed by remap_pages() before the lock + * could be obtained. This is the only additional complexity added to + * the rmap code to provide this anonymous page remapping functionality. 
+ */ +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm, + unsigned long dst_start, unsigned long src_start, + unsigned long len, __u64 mode) +{ + struct vm_area_struct *src_vma, *dst_vma; + unsigned long src_addr, dst_addr; + pmd_t *src_pmd, *dst_pmd; + long err = -EINVAL; + ssize_t moved = 0; + + /* + * Sanitize the command parameters: + */ + BUG_ON(src_start & ~PAGE_MASK); + BUG_ON(dst_start & ~PAGE_MASK); + BUG_ON(len & ~PAGE_MASK); + + /* Does the address range wrap, or is the span zero-sized? */ + BUG_ON(src_start + len <= src_start); + BUG_ON(dst_start + len <= dst_start); + + /* + * Because these are read sempahores there's no risk of lock + * inversion. + */ + mmap_read_lock(dst_mm); + if (dst_mm != src_mm) + mmap_read_lock(src_mm); + + /* + * Make sure the vma is not shared, that the src and dst remap + * ranges are both valid and fully within a single existing + * vma. + */ + src_vma = find_vma(src_mm, src_start); + if (!src_vma || (src_vma->vm_flags & VM_SHARED)) + goto out; + if (src_start < src_vma->vm_start || + src_start + len > src_vma->vm_end) + goto out; + + dst_vma = find_vma(dst_mm, dst_start); + if (!dst_vma || (dst_vma->vm_flags & VM_SHARED)) + goto out; + if (dst_start < dst_vma->vm_start || + dst_start + len > dst_vma->vm_end) + goto out; + + err = validate_remap_areas(src_vma, dst_vma); + if (err) + goto out; + + for (src_addr = src_start, dst_addr = dst_start; + src_addr < src_start + len;) { + spinlock_t *ptl; + pmd_t dst_pmdval; + unsigned long step_size; + + BUG_ON(dst_addr >= dst_start + len); + /* + * Below works because anonymous area would not have a + * transparent huge PUD. If file-backed support is added, + * that case would need to be handled here. + */ + src_pmd = mm_find_pmd(src_mm, src_addr); + if (unlikely(!src_pmd)) { + if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES)) { + err = -ENOENT; + break; + } + src_pmd = mm_alloc_pmd(src_mm, src_addr); + if (unlikely(!src_pmd)) { + err = -ENOMEM; + break; + } + } + dst_pmd = mm_alloc_pmd(dst_mm, dst_addr); + if (unlikely(!dst_pmd)) { + err = -ENOMEM; + break; + } + + dst_pmdval = pmdp_get_lockless(dst_pmd); + /* + * If the dst_pmd is mapped as THP don't override it and just + * be strict. If dst_pmd changes into TPH after this check, the + * remap_pages_huge_pmd() will detect the change and retry + * while remap_pages_pte() will detect the change and fail. + */ + if (unlikely(pmd_trans_huge(dst_pmdval))) { + err = -EEXIST; + break; + } + + ptl = pmd_trans_huge_lock(src_pmd, src_vma); + if (ptl) { + if (pmd_devmap(*src_pmd)) { + spin_unlock(ptl); + err = -ENOENT; + break; + } + + /* + * Check if we can move the pmd without + * splitting it. First check the address + * alignment to be the same in src/dst. These + * checks don't actually need the PT lock but + * it's good to do it here to optimize this + * block away at build time if + * CONFIG_TRANSPARENT_HUGEPAGE is not set. 
+ */ + if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) || + src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) { + spin_unlock(ptl); + split_huge_pmd(src_vma, src_pmd, src_addr); + continue; + } + + err = remap_pages_huge_pmd(dst_mm, src_mm, + dst_pmd, src_pmd, + dst_pmdval, + dst_vma, src_vma, + dst_addr, src_addr); + step_size = HPAGE_PMD_SIZE; + } else { + if (pmd_none(*src_pmd)) { + if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES)) { + err = -ENOENT; + break; + } + if (unlikely(__pte_alloc(src_mm, src_pmd))) { + err = -ENOMEM; + break; + } + } + + if (unlikely(pte_alloc(dst_mm, dst_pmd))) { + err = -ENOMEM; + break; + } + + err = remap_pages_pte(dst_mm, src_mm, + dst_pmd, src_pmd, + dst_vma, src_vma, + dst_addr, src_addr, + mode); + step_size = PAGE_SIZE; + } + + cond_resched(); + + if (fatal_signal_pending(current)) { + /* Do not override an error */ + if (!err || err == -EAGAIN) + err = -EINTR; + break; + } + + if (err) { + if (err == -EAGAIN) + continue; + break; + } + + /* Proceed to the next page */ + dst_addr += step_size; + src_addr += step_size; + moved += step_size; + } + +out: + mmap_read_unlock(dst_mm); + if (dst_mm != src_mm) + mmap_read_unlock(src_mm); + BUG_ON(moved < 0); + BUG_ON(err > 0); + BUG_ON(!moved && !err); + return moved ? moved : err; +} From patchwork Mon Oct 9 06:42:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 13412985 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 26557E95A8E for ; Mon, 9 Oct 2023 06:42:46 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 299466B01A3; Mon, 9 Oct 2023 02:42:45 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1FB116B01A5; Mon, 9 Oct 2023 02:42:45 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 026196B01A8; Mon, 9 Oct 2023 02:42:44 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id E1C536B01A3 for ; Mon, 9 Oct 2023 02:42:44 -0400 (EDT) Received: from smtpin16.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id C1B83B42A5 for ; Mon, 9 Oct 2023 06:42:44 +0000 (UTC) X-FDA: 81324979848.16.EE187E7 Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) by imf14.hostedemail.com (Postfix) with ESMTP id 0B185100020 for ; Mon, 9 Oct 2023 06:42:42 +0000 (UTC) Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=google.com header.s=20230601 header.b=NoRwV03W; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf14.hostedemail.com: domain of 34qAjZQYKCOEVXUHQEJRRJOH.FRPOLQXa-PPNYDFN.RUJ@flex--surenb.bounces.google.com designates 209.85.219.202 as permitted sender) smtp.mailfrom=34qAjZQYKCOEVXUHQEJRRJOH.FRPOLQXa-PPNYDFN.RUJ@flex--surenb.bounces.google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1696833763; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; 
bh=50i/QdEI5D071LaRqB1M3KzWwtbx6nE7wmO2EXtVHvA=; b=Gc5Tj8T8tnOc5/iHoIJgJdZbJaj6j9edQjCt8AiG9EjMfrJohNr6ZxoVD3gR858PBQxRsR TRmvjrH+oUUiG5rL8fN8PIJr7OYogKwOVnbJC21WOTluuRV/0+I41cgVGc28SP0lKIcRMV H9bMhT4qCajLmpkOdwV9j06HEl0Wgzs= ARC-Authentication-Results: i=1; imf14.hostedemail.com; dkim=pass header.d=google.com header.s=20230601 header.b=NoRwV03W; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf14.hostedemail.com: domain of 34qAjZQYKCOEVXUHQEJRRJOH.FRPOLQXa-PPNYDFN.RUJ@flex--surenb.bounces.google.com designates 209.85.219.202 as permitted sender) smtp.mailfrom=34qAjZQYKCOEVXUHQEJRRJOH.FRPOLQXa-PPNYDFN.RUJ@flex--surenb.bounces.google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1696833763; a=rsa-sha256; cv=none; b=tiC9O7vBKk5fX96PQ6ukPST+WQ6gfq4GJsxBskwVph2NDd4RDqf78q56eBCXA8aES/JqiG 0Db1iItL7X0oRDAI2obaHguApq4EDLQA7RrrP8DzPRs9/+1Ofd9SCCKjuq3JjQez1MrTK1 bjqgQDjCFXqUEu0a87FAAmxTBSsaVtw= Received: by mail-yb1-f202.google.com with SMTP id 3f1490d57ef6-d86dac81f8fso5890091276.1 for ; Sun, 08 Oct 2023 23:42:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1696833762; x=1697438562; darn=kvack.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=50i/QdEI5D071LaRqB1M3KzWwtbx6nE7wmO2EXtVHvA=; b=NoRwV03W3cHOMXZuviXkBvhKlDTYM179n8aGUXdBuehN8Q5aRyHdQel60erNMlegEQ DU+PRCFY30SC6Zi5Vf4spZNljGB8qh7A5U4XjzIA/Bg+LvxSx0r0Uk7S0i96Xjc3dPrW OpJ2BjOHE0F256PYEPZD50WY6vHg9cSU4KR23BdTgZHDCHlNS1o+QZqnaXgO6R3dHccQ bIhRs00ZP5+dxlcMHN/QRM8tfW9I4IkjChUofWOyjPg23uk7R/5VsJfslhjvn5diAhZM 4rA420cW8urpTvmo1U6IaSSOnX/8EaLe3C9RaiiGMU8xY8yPSrwPgOtzNNp05tgkdnoo ADAw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1696833762; x=1697438562; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=50i/QdEI5D071LaRqB1M3KzWwtbx6nE7wmO2EXtVHvA=; b=UVNMmZLI1t2FZujfoPcazR470B2WiuBSCk4AlEgKi0btlvB9Chhv5ufZ6razifYYGv SqpnaQyCJkxVH8RpS4iFQq08HvdWAwFuGRElmo8G83UbcWKy0WJCSNQd8uA/LkE/03/b zYC7HOVsNYjext66YTnN76TM7scVz1anl3fviuxaVf5jBzcJD2URxbmUyYe49W+wMmgv chdEK52vTFtCAUcf08oQeWdUpHXSm+xGiwNvUfkqGtXbe5V4+mM0QpjsYEBHOCxgpasj uG/VDr8v6fhqkVRUP37Qqr2or1KoJjaJuba3qHs1lB8TxtcEK3gFEatflhiTW2ZScZ4d ttCA== X-Gm-Message-State: AOJu0YwK7w52mG72AQCdvIUfhTI8ng6GpFWyg14+6oryvUOI0o+mLNLi k8B8ATYtV1vPs1QM96qwYJIdWsBWK0Y= X-Google-Smtp-Source: AGHT+IE3ENVpgSSY9jHiu1FOcPq/GRVi/eysZgrxroKYd6Fm4a1cG8w7FQlArfhqzdMAUqw7rMWUDxE9tZI= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:3e83:601e:dc42:a455]) (user=surenb job=sendgmr) by 2002:a25:68ce:0:b0:d86:56bc:e289 with SMTP id d197-20020a2568ce000000b00d8656bce289mr225181ybc.4.1696833762182; Sun, 08 Oct 2023 23:42:42 -0700 (PDT) Date: Sun, 8 Oct 2023 23:42:28 -0700 In-Reply-To: <20231009064230.2952396-1-surenb@google.com> Mime-Version: 1.0 References: <20231009064230.2952396-1-surenb@google.com> X-Mailer: git-send-email 2.42.0.609.gbb76f46606-goog Message-ID: <20231009064230.2952396-4-surenb@google.com> Subject: [PATCH v3 3/3] selftests/mm: add UFFDIO_MOVE ioctl test From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, shuah@kernel.org, aarcange@redhat.com, lokeshgidra@google.com, peterx@redhat.com, david@redhat.com, hughd@google.com, mhocko@suse.com, axelrasmussen@google.com, rppt@kernel.org, willy@infradead.org, Liam.Howlett@oracle.com, jannh@google.com, 
 zhangpeng362@huawei.com, bgeffon@google.com, kaleshsingh@google.com,
 ngeoffray@google.com, jdduke@google.com, surenb@google.com,
 linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 kernel-team@android.com

Add a test for the new UFFDIO_MOVE ioctl which uses uffd to move the
source buffer into the destination buffer, checking the contents of both
after remapping. After the operation the destination buffer's contents
should match the original source buffer's contents, while the source
buffer should be zeroed.
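
For readers who have not looked at the uAPI patch, a minimal sketch of a
single-page move is shown below. It is illustrative only and not part of
the patch: it assumes the UFFDIO_MOVE uAPI added in patch 2/3 (struct
uffdio_move, UFFDIO_MOVE, UFFD_FEATURE_MOVE) and a userfaultfd already
registered in missing mode over the destination range, with most error
handling elided:

	#include <errno.h>
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/userfaultfd.h>

	/* Move one page from @src to @dst; both must be page aligned. */
	static long move_one_page(int uffd, void *dst, void *src,
				  unsigned long page_size)
	{
		struct uffdio_move mv = {
			.dst	= (uintptr_t)dst,
			.src	= (uintptr_t)src,
			.len	= page_size,
			.mode	= 0,	/* no UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES */
			.move	= 0,
		};

		/* On failure the kernel reports the real error in mv.move. */
		if (ioctl(uffd, UFFDIO_MOVE, &mv) < 0)
			return mv.move ? mv.move : -errno;

		return mv.move;	/* bytes moved; equals page_size on success */
	}

On success the source page ends up mapped at the destination and the
source address becomes a hole, which is why the test expects area_src to
read back as zeroes after the move.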
Signed-off-by: Suren Baghdasaryan
---
 tools/testing/selftests/mm/uffd-common.c     | 41 ++++++++++++-
 tools/testing/selftests/mm/uffd-common.h     |  1 +
 tools/testing/selftests/mm/uffd-unit-tests.c | 62 ++++++++++++++++++++
 3 files changed, 102 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
index 02b89860e193..ecc1244f1c2b 100644
--- a/tools/testing/selftests/mm/uffd-common.c
+++ b/tools/testing/selftests/mm/uffd-common.c
@@ -52,6 +52,13 @@ static int anon_allocate_area(void **alloc_area, bool is_src)
 		*alloc_area = NULL;
 		return -errno;
 	}
+
+	/* Prevent source pages from collapsing into THPs */
+	if (madvise(*alloc_area, nr_pages * page_size, MADV_NOHUGEPAGE)) {
+		*alloc_area = NULL;
+		return -errno;
+	}
+
 	return 0;
 }
 
@@ -484,8 +491,14 @@ void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args)
 	offset = (char *)(unsigned long)msg->arg.pagefault.address - area_dst;
 	offset &= ~(page_size-1);
 
-	if (copy_page(uffd, offset, args->apply_wp))
-		args->missing_faults++;
+	/* UFFD_MOVE is supported for anon non-shared mappings. */
+	if (uffd_test_ops == &anon_uffd_test_ops && !map_shared) {
+		if (move_page(uffd, offset))
+			args->missing_faults++;
+	} else {
+		if (copy_page(uffd, offset, args->apply_wp))
+			args->missing_faults++;
+	}
 }
 
@@ -620,6 +633,30 @@ int copy_page(int ufd, unsigned long offset, bool wp)
 	return __copy_page(ufd, offset, false, wp);
 }
 
+int move_page(int ufd, unsigned long offset)
+{
+	struct uffdio_move uffdio_move;
+
+	if (offset >= nr_pages * page_size)
+		err("unexpected offset %lu\n", offset);
+	uffdio_move.dst = (unsigned long) area_dst + offset;
+	uffdio_move.src = (unsigned long) area_src + offset;
+	uffdio_move.len = page_size;
+	uffdio_move.mode = UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES;
+	uffdio_move.move = 0;
+	if (ioctl(ufd, UFFDIO_MOVE, &uffdio_move)) {
+		/* real retval in uffdio_move.move */
+		if (uffdio_move.move != -EEXIST)
+			err("UFFDIO_MOVE error: %"PRId64,
+			    (int64_t)uffdio_move.move);
+		wake_range(ufd, uffdio_move.dst, page_size);
+	} else if (uffdio_move.move != page_size) {
+		err("UFFDIO_MOVE error: %"PRId64, (int64_t)uffdio_move.move);
+	} else
+		return 1;
+	return 0;
+}
+
 int uffd_open_dev(unsigned int flags)
 {
 	int fd, uffd;
diff --git a/tools/testing/selftests/mm/uffd-common.h b/tools/testing/selftests/mm/uffd-common.h
index 7c4fa964c3b0..f4d79e169a3d 100644
--- a/tools/testing/selftests/mm/uffd-common.h
+++ b/tools/testing/selftests/mm/uffd-common.h
@@ -111,6 +111,7 @@ void wp_range(int ufd, __u64 start, __u64 len, bool wp);
 void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args);
 int __copy_page(int ufd, unsigned long offset, bool retry, bool wp);
 int copy_page(int ufd, unsigned long offset, bool wp);
+int move_page(int ufd, unsigned long offset);
 void *uffd_poll_thread(void *arg);
 
 int uffd_open_dev(unsigned int flags);
diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
index 2709a34a39c5..f0ded3b34367 100644
--- a/tools/testing/selftests/mm/uffd-unit-tests.c
+++ b/tools/testing/selftests/mm/uffd-unit-tests.c
@@ -824,6 +824,10 @@ static void uffd_events_test_common(bool wp)
 	char c;
 	struct uffd_args args = { 0 };
 
+	/* Prevent source pages from being mapped more than once */
+	if (madvise(area_src, nr_pages * page_size, MADV_DONTFORK))
+		err("madvise(MADV_DONTFORK) failed");
+
 	fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
 	if (uffd_register(uffd, area_dst, nr_pages * page_size,
 			  true, wp, false))
@@ -1062,6 +1066,58 @@ static void uffd_poison_test(uffd_test_args_t *targs)
 	uffd_test_pass();
 }
 
+static void uffd_move_test(uffd_test_args_t *targs)
+{
+	unsigned long nr;
+	pthread_t uffd_mon;
+	char c;
+	unsigned long long count;
+	struct uffd_args args = { 0 };
+
+	if (uffd_register(uffd, area_dst, nr_pages * page_size,
+			  true, false, false))
+		err("register failure");
+
+	if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
+		err("uffd_poll_thread create");
+
+	/*
+	 * Read each of the pages back using the UFFD-registered mapping. We
+	 * expect that the first time we touch a page, it will result in a missing
+	 * fault. uffd_poll_thread will resolve the fault by remapping source
+	 * page to destination.
+	 */
+	for (nr = 0; nr < nr_pages; nr++) {
+		/* Check area_src content */
+		count = *area_count(area_src, nr);
+		if (count != count_verify[nr])
+			err("nr %lu source memory invalid %llu %llu\n",
+			    nr, count, count_verify[nr]);
+
+		/* Faulting into area_dst should remap the page */
+		count = *area_count(area_dst, nr);
+		if (count != count_verify[nr])
+			err("nr %lu memory corruption %llu %llu\n",
+			    nr, count, count_verify[nr]);
+
+		/* Re-check area_src content which should be empty */
+		count = *area_count(area_src, nr);
+		if (count != 0)
+			err("nr %lu move failed %llu %llu\n",
+			    nr, count, count_verify[nr]);
+	}
+
+	if (write(pipefd[1], &c, sizeof(c)) != sizeof(c))
+		err("pipe write");
+	if (pthread_join(uffd_mon, NULL))
+		err("join() failed");
+
+	if (args.missing_faults != nr_pages || args.minor_faults != 0)
+		uffd_test_fail("stats check error");
+	else
+		uffd_test_pass();
+}
+
 /*
  * Test the returned uffdio_register.ioctls with different register modes.
  * Note that _UFFDIO_ZEROPAGE is tested separately in the zeropage test.
@@ -1139,6 +1195,12 @@ uffd_test_case_t uffd_tests[] = {
 		.mem_targets = MEM_ALL,
 		.uffd_feature_required = 0,
 	},
+	{
+		.name = "move",
+		.uffd_fn = uffd_move_test,
+		.mem_targets = MEM_ANON,
+		.uffd_feature_required = UFFD_FEATURE_MOVE,
+	},
 	{
 		.name = "wp-fork",
 		.uffd_fn = uffd_wp_fork_test,