From patchwork Tue May 2 16:34:03 2023
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Subject: [PATCH v7 1/3] mm/mmap: separate writenotify and dirty tracking logic
Date: Tue, 2 May 2023 17:34:03 +0100
Message-Id: <72a90af5a9e4445a33ae44efa710f112c2694cb1.1683044162.git.lstoakes@gmail.com>

vma_wants_writenotify() is specifically intended for setting PTE page table
flags, accounting for existing PTE flag state and whether that might already
be read-only, while also mixing in a check of whether the filesystem performs
dirty tracking.

Separate out the notions of dirty tracking and PTE write notify checking so
that the dirty tracking check can be invoked from elsewhere.

Note that this change introduces a very small duplicated check of the
separated-out vm_ops_needs_writenotify(). This is necessary to avoid making
vma_needs_dirty_tracking() needlessly complicated (e.g. passing a
check_writenotify flag or having it assume this check was already performed).
This is such a small check that it doesn't seem too egregious to do this.

Signed-off-by: Lorenzo Stoakes
Reviewed-by: John Hubbard
Reviewed-by: Mika Penttilä
Reviewed-by: Jan Kara
Reviewed-by: Jason Gunthorpe
---
 include/linux/mm.h |  1 +
 mm/mmap.c          | 36 +++++++++++++++++++++++++++---------
 2 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 27ce77080c79..7b1d4e7393ef 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2422,6 +2422,7 @@ extern unsigned long move_page_tables(struct vm_area_struct *vma,
 #define  MM_CP_UFFD_WP_ALL                 (MM_CP_UFFD_WP | \
 					    MM_CP_UFFD_WP_RESOLVE)
 
+bool vma_needs_dirty_tracking(struct vm_area_struct *vma);
 int vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot);
 static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma)
 {
diff --git a/mm/mmap.c b/mm/mmap.c
index 5522130ae606..295c5f2e9bd9 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1475,6 +1475,31 @@ SYSCALL_DEFINE1(old_mmap, struct mmap_arg_struct __user *, arg)
 }
 #endif /* __ARCH_WANT_SYS_OLD_MMAP */
 
+/* Do VMA operations imply write notify is required? */
+static bool vm_ops_needs_writenotify(const struct vm_operations_struct *vm_ops)
+{
+	return vm_ops && (vm_ops->page_mkwrite || vm_ops->pfn_mkwrite);
+}
+
+/*
+ * Does this VMA require the underlying folios to have their dirty state
+ * tracked?
+ */
+bool vma_needs_dirty_tracking(struct vm_area_struct *vma)
+{
+	/* Does the filesystem need to be notified? */
+	if (vm_ops_needs_writenotify(vma->vm_ops))
+		return true;
+
+	/* Specialty mapping? */
+	if (vma->vm_flags & VM_PFNMAP)
+		return false;
+
+	/* Can the mapping track the dirty pages? */
+	return vma->vm_file && vma->vm_file->f_mapping &&
+		mapping_can_writeback(vma->vm_file->f_mapping);
+}
+
 /*
  * Some shared mappings will want the pages marked read-only
  * to track write events. If so, we'll downgrade vm_page_prot
@@ -1484,14 +1509,13 @@ SYSCALL_DEFINE1(old_mmap, struct mmap_arg_struct __user *, arg)
 int vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot)
 {
 	vm_flags_t vm_flags = vma->vm_flags;
-	const struct vm_operations_struct *vm_ops = vma->vm_ops;
 
 	/* If it was private or non-writable, the write bit is already clear */
 	if ((vm_flags & (VM_WRITE|VM_SHARED)) != ((VM_WRITE|VM_SHARED)))
 		return 0;
 
 	/* The backer wishes to know when pages are first written to? */
-	if (vm_ops && (vm_ops->page_mkwrite || vm_ops->pfn_mkwrite))
+	if (vm_ops_needs_writenotify(vma->vm_ops))
 		return 1;
 
 	/* The open routine did something to the protections that pgprot_modify
@@ -1511,13 +1535,7 @@ int vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot)
 	if (userfaultfd_wp(vma))
 		return 1;
 
-	/* Specialty mapping? */
-	if (vm_flags & VM_PFNMAP)
-		return 0;
-
-	/* Can the mapping track the dirty pages? */
-	return vma->vm_file && vma->vm_file->f_mapping &&
-		mapping_can_writeback(vma->vm_file->f_mapping);
+	return vma_needs_dirty_tracking(vma);
 }
 
 /*
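For illustration only (a minimal sketch, not part of the patch;
example_prepare_vma() is a hypothetical function): a kernel-side caller that
must refuse writable access to dirty-tracked mappings could now use the
exported helper directly, which is precisely how the next patch in the series
uses it from GUP's check_vma_flags().

/* Hypothetical sketch: reject writable access to dirty-tracked VMAs. */
static int example_prepare_vma(struct vm_area_struct *vma, bool write)
{
	/* Read-only access never dirties folios, so it is always safe. */
	if (!write)
		return 0;

	/* A write would bypass writenotify, so refuse dirty-tracked VMAs. */
	if (vma_needs_dirty_tracking(vma))
		return -EFAULT;

	return 0;
}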
From patchwork Tue May 2 16:34:04 2023
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Subject: [PATCH v7 2/3] mm/gup: disallow FOLL_LONGTERM GUP-nonfast writing to file-backed mappings
Date: Tue, 2 May 2023 17:34:04 +0100

Writing to file-backed mappings which require folio dirty tracking using
GUP is a fundamentally broken operation, as kernel write access to GUP
mappings does not adhere to the semantics expected by a file system.

A GUP caller uses the direct mapping to access the folio, which does not
cause write notify to trigger, nor does it enforce that the caller marks
the folio dirty. The problem arises when, after an initial write to the
folio, writeback results in the folio being cleaned and then the caller,
via the GUP interface, writes to the folio again.
As a result of the use of this secondary, direct mapping to the folio, no
write notify will occur, and if the caller does mark the folio dirty, this
will be done unexpectedly.

For example, consider the following scenario:

1. A folio is written to via GUP, which write-faults the memory, notifying
   the file system and dirtying the folio.
2. Later, writeback is triggered, resulting in the folio being cleaned and
   the PTE being marked read-only.
3. The GUP caller writes to the folio, as it is mapped read/write via the
   direct mapping.
4. The GUP caller, now done with the page, unpins it and sets it dirty
   (though it does not have to).

This results in both data being written to a folio without writenotify, and
the folio being dirtied unexpectedly (if the caller decides to do so).

This issue was first reported by Jan Kara [1] in 2018, where the problem
resulted in file system crashes.

This is only relevant when the mappings are file-backed and the underlying
file system requires folio dirty tracking. File systems which do not, such
as shmem or hugetlb, are not at risk and therefore can be written to without
issue.

Unfortunately this limitation of GUP has been present for some time and
requires future rework of the GUP API in order to provide correct write
access to such mappings.

However, for the time being we introduce this check to prevent the most
egregious case of this occurring: use of the FOLL_LONGTERM pin. These
mappings are considerably more likely to be written to after folios are
cleaned and thus simply must not be permitted to do so.

This patch changes only the slow-path GUP functions; a following patch
adapts the GUP-fast path along similar lines.

[1]: https://lore.kernel.org/linux-mm/20180103100430.GE4911@quack2.suse.cz/

Suggested-by: Jason Gunthorpe
Signed-off-by: Lorenzo Stoakes
Reviewed-by: John Hubbard
Reviewed-by: Mika Penttilä
Reviewed-by: Jan Kara
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 43 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 42 insertions(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index ff689c88a357..6e209ca10967 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -959,16 +959,53 @@ static int faultin_page(struct vm_area_struct *vma,
 	return 0;
 }
 
+/*
+ * Writing to file-backed mappings which require folio dirty tracking using GUP
+ * is a fundamentally broken operation, as kernel write access to GUP mappings
+ * does not adhere to the semantics expected by a file system.
+ *
+ * Consider the following scenario:
+ *
+ * 1. A folio is written to via GUP, which write-faults the memory, notifying
+ *    the file system and dirtying the folio.
+ * 2. Later, writeback is triggered, resulting in the folio being cleaned and
+ *    the PTE being marked read-only.
+ * 3. The GUP caller writes to the folio, as it is mapped read/write via the
+ *    direct mapping.
+ * 4. The GUP caller, now done with the page, unpins it and sets it dirty
+ *    (though it does not have to).
+ *
+ * This results in both data being written to a folio without writenotify, and
+ * the folio being dirtied unexpectedly (if the caller decides to do so).
+ */
+static bool writeable_file_mapping_allowed(struct vm_area_struct *vma,
+					   unsigned long gup_flags)
+{
+	/*
+	 * If we aren't pinning then no problematic write can occur. A long
+	 * term pin is the most egregious case so this is the case we disallow.
+	 */
+	if (!(gup_flags & (FOLL_PIN | FOLL_LONGTERM)))
+		return true;
+
+	/*
+	 * If the VMA does not require dirty tracking then no problematic write
+	 * can occur either.
+	 */
+	return !vma_needs_dirty_tracking(vma);
+}
+
 static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 {
 	vm_flags_t vm_flags = vma->vm_flags;
 	int write = (gup_flags & FOLL_WRITE);
 	int foreign = (gup_flags & FOLL_REMOTE);
+	bool vma_anon = vma_is_anonymous(vma);
 
 	if (vm_flags & (VM_IO | VM_PFNMAP))
 		return -EFAULT;
 
-	if (gup_flags & FOLL_ANON && !vma_is_anonymous(vma))
+	if ((gup_flags & FOLL_ANON) && !vma_anon)
 		return -EFAULT;
 
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
@@ -978,6 +1015,10 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 		return -EFAULT;
 
 	if (write) {
+		if (!vma_anon &&
+		    !writeable_file_mapping_allowed(vma, gup_flags))
+			return -EFAULT;
+
 		if (!(vm_flags & VM_WRITE)) {
 			if (!(gup_flags & FOLL_FORCE))
 				return -EFAULT;
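To make the user-visible effect concrete, the following userspace sketch is
hypothetical and illustrative only: it assumes liburing is installed, that
/tmp resides on a dirty-tracked filesystem such as ext4, and that it is built
with gcc -luring. It attempts to register a fixed io_uring buffer backed by a
writable MAP_SHARED file mapping; buffer registration takes a FOLL_PIN |
FOLL_LONGTERM | FOLL_WRITE pin, so with this patch applied we would expect
the registration to fail with -EFAULT.

#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct iovec iov;
	int fd, ret;

	/* A file on a dirty-tracked filesystem (assumed: ext4 /tmp). */
	fd = open("/tmp/gup-test", O_RDWR | O_CREAT, 0600);
	if (fd < 0 || ftruncate(fd, 4096))
		return 1;

	/* Writable, shared, file-backed: requires dirty tracking. */
	iov.iov_base = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
			    fd, 0);
	if (iov.iov_base == MAP_FAILED)
		return 1;
	iov.iov_len = 4096;

	if (io_uring_queue_init(8, &ring, 0))
		return 1;

	/* Long-term write pin of the buffer; expect -EFAULT here. */
	ret = io_uring_register_buffers(&ring, &iov, 1);
	printf("io_uring_register_buffers: %d\n", ret);

	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}

An anonymous buffer, or one backed by shmem or hugetlb, would be expected to
continue to register successfully, as such mappings require no dirty tracking.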
From patchwork Tue May 2 16:34:05 2023
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Subject: [PATCH v7 3/3] mm/gup: disallow FOLL_LONGTERM GUP-fast writing to file-backed mappings
Date: Tue, 2 May 2023 17:34:05 +0100

Writing to file-backed dirty-tracked mappings via GUP is inherently broken,
as we cannot rule out folios being cleaned and then a GUP user writing to
them again and possibly marking them dirty unexpectedly.

This is especially egregious for long-term mappings (as indicated by the
use of the FOLL_LONGTERM flag), so we disallow this case in GUP-fast as we
have already done in the slow path.

We have access to less information in the fast path as we cannot examine
the VMA containing the mapping; however, we can determine whether the folio
is anonymous and then whitelist known-good mappings, specifically hugetlb
and shmem mappings.

While we obtain a stable folio for this check, the mapping might not be, as
a truncate could nullify it at any time. Since truncation requires the
mappings to be zapped first, we can synchronise against a TLB shootdown
operation. For some architectures TLB shootdown is synchronised by IPI,
against which we are protected as the GUP-fast operation is performed with
interrupts disabled. Equally, we are protected on architectures which set
CONFIG_MMU_GATHER_RCU_TABLE_FREE, as the interrupts being disabled imply an
RCU read lock as well.

We whitelist anonymous mappings (and those which otherwise do not have a
valid mapping), as well as shmem and hugetlb mappings, none of which
require dirty tracking and so are safe to long-term pin.

It's important to note that there are no APIs allowing users to specify
FOLL_FAST_ONLY for a PUP-fast, let alone in combination with FOLL_LONGTERM,
so we can always rely on the fact that if we fail to pin on the fast path,
the code will fall back to the slow path which can perform the more
thorough check.
Suggested-by: David Hildenbrand
Suggested-by: Kirill A. Shutemov
Suggested-by: Peter Zijlstra
Signed-off-by: Lorenzo Stoakes
---
 mm/gup.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 60 insertions(+), 2 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 6e209ca10967..93b4aa39e5a5 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -18,6 +18,7 @@
 #include <linux/migrate.h>
 #include <linux/mm_inline.h>
 #include <linux/sched/mm.h>
+#include <linux/shmem_fs.h>
 
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
@@ -95,6 +96,52 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
 	return folio;
 }
 
+/*
+ * Used in the GUP-fast path to determine whether a FOLL_PIN | FOLL_LONGTERM |
+ * FOLL_WRITE pin is permitted for a specific folio.
+ *
+ * This assumes the folio is stable and pinned.
+ *
+ * Writing to pinned file-backed dirty tracked folios is inherently problematic
+ * (see comment describing the writeable_file_mapping_allowed() function). We
+ * therefore try to avoid the most egregious case of a long-term mapping doing
+ * so.
+ *
+ * This function cannot be as thorough as that one as the VMA is not available
+ * in the fast path, so instead we whitelist known good cases.
+ */
+static bool folio_longterm_write_pin_allowed(struct folio *folio)
+{
+	struct address_space *mapping;
+
+	/*
+	 * GUP-fast disables IRQs - this prevents IPIs from causing page tables
+	 * to disappear from under us, as well as preventing RCU grace periods
+	 * from making progress (i.e. implying rcu_read_lock()).
+	 *
+	 * This means we can rely on the folio remaining stable for all
+	 * architectures, both those that set CONFIG_MMU_GATHER_RCU_TABLE_FREE
+	 * and those that do not.
+	 *
+	 * We get the added benefit that given inodes, and thus address_space,
+	 * objects are RCU freed, we can rely on the mapping remaining stable
+	 * here with no risk of a truncation or similar race.
+	 */
+	lockdep_assert_irqs_disabled();
+
+	/*
+	 * If no mapping can be found, this implies an anonymous or otherwise
+	 * non-file backed folio so in this instance we permit the pin.
+	 *
+	 * shmem and hugetlb mappings do not require dirty-tracking so we
+	 * explicitly whitelist these.
+	 *
+	 * Other non dirty-tracked folios will be picked up on the slow path.
+	 */
+	mapping = folio_mapping(folio);
+	return !mapping || shmem_mapping(mapping) || folio_test_hugetlb(folio);
+}
+
 /**
  * try_grab_folio() - Attempt to get or pin a folio.
  * @page:  pointer to page to be grabbed
@@ -123,6 +170,8 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
  */
 struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 {
+	bool is_longterm = flags & FOLL_LONGTERM;
+
 	if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)))
 		return NULL;
 
@@ -136,8 +185,7 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 	 * right zone, so fail and let the caller fall back to the slow
 	 * path.
 	 */
-	if (unlikely((flags & FOLL_LONGTERM) &&
-		     !is_longterm_pinnable_page(page)))
+	if (unlikely(is_longterm && !is_longterm_pinnable_page(page)))
 		return NULL;
 
 	/*
@@ -148,6 +196,16 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 	if (!folio)
 		return NULL;
 
+	/*
+	 * Can this folio be safely pinned? We need to perform this
+	 * check after the folio is stabilised.
+	 */
+	if ((flags & FOLL_WRITE) && is_longterm &&
+	    !folio_longterm_write_pin_allowed(folio)) {
+		folio_put_refs(folio, refs);
+		return NULL;
+	}
+
 	/*
 	 * When pinning a large folio, use an exact count to track it.
 	 *
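Taken together with the previous patch, the end-to-end behaviour for a
long-term pin can be summarised with a short kernel-side sketch (hypothetical
and illustrative only; example_longterm_pin() is not part of the series).
Because no user-reachable API combines FOLL_FAST_ONLY with FOLL_LONGTERM, a
refusal on the fast path simply causes a fall back to the slow path, where
check_vma_flags() applies the full VMA-based check from the previous patch.

/*
 * Hypothetical caller taking a long-term writable pin on a single page.
 * With this series, if addr lies in a writable file-backed mapping that
 * requires dirty tracking, the fast path refuses the pin via
 * folio_longterm_write_pin_allowed() and the slow-path fallback then
 * fails with -EFAULT via writeable_file_mapping_allowed().
 */
static int example_longterm_pin(unsigned long addr, struct page **pages)
{
	return pin_user_pages_fast(addr, 1, FOLL_WRITE | FOLL_LONGTERM,
				   pages);
}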