From patchwork Mon May 1 23:11:49 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13229119
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Jason Gunthorpe, Jens Axboe, Matthew Wilcox, Dennis Dalessandro,
 Leon Romanovsky, Christian Benvenuti, Nelson Escobar, Bernard Metzler,
 Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
 Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter,
 Bjorn Topel, Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Christian Brauner, Richard Cochran, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, linux-fsdevel@vger.kernel.org,
 linux-perf-users@vger.kernel.org, netdev@vger.kernel.org,
 bpf@vger.kernel.org, Oleg Nesterov, John Hubbard, Jan Kara,
 "Kirill A. Shutemov", Pavel Begunkov, Mika Penttila, David Hildenbrand,
 Dave Chinner, Theodore Ts'o, Peter Xu, Lorenzo Stoakes
Subject: [PATCH v6 3/3] mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
 file-backed mappings
Date: Tue, 2 May 2023 00:11:49 +0100
X-Mailer: git-send-email 2.40.1

Writing to file-backed dirty-tracked mappings via GUP is inherently
broken, as we cannot rule out folios being cleaned and then a GUP user
writing to them again, possibly marking them dirty unexpectedly.

This is especially egregious for long-term mappings (as indicated by the
use of the FOLL_LONGTERM flag), so we disallow this case in GUP-fast as
we have already done in the slow path.

We have access to less information in the fast path as we cannot examine
the VMA containing the mapping; however, we can determine whether the
folio is anonymous and then whitelist known-good mappings, specifically
hugetlb and shmem mappings.

While we obtain a stable folio for this check, the mapping might not be
stable, as a truncate could nullify it at any time. Since nullifying the
mapping requires the page table mappings to be zapped, we can synchronise
against a TLB shootdown operation.

For some architectures, TLB shootdown is synchronised by IPI, against
which we are protected as the GUP-fast operation is performed with
interrupts disabled. However, architectures which specify
CONFIG_MMU_GATHER_RCU_TABLE_FREE use an RCU lock for this operation; in
these instances we acquire an RCU lock while performing our checks.

If we cannot obtain a stable mapping, we fall back to the slow path;
otherwise we would have to walk the page tables again, and it is simpler
and more effective to simply fall back.

It is important to note that there are no APIs allowing users to specify
FOLL_FAST_ONLY for a PUP-fast call, let alone in combination with
FOLL_LONGTERM, so we can always rely on the fact that if we fail to pin
on the fast path, the code will fall back to the slow path, which can
perform the more thorough check.

Suggested-by: David Hildenbrand
Suggested-by: Kirill A. Shutemov
Signed-off-by: Lorenzo Stoakes
---
 mm/gup.c | 87 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 85 insertions(+), 2 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 0f09dec0906c..431618048a03 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -18,6 +18,7 @@
 #include <linux/migrate.h>
 #include <linux/mm_inline.h>
 #include <linux/sched/mm.h>
+#include <linux/shmem_fs.h>
 
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
@@ -95,6 +96,77 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
 	return folio;
 }
 
+#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
+static bool stabilise_mapping_rcu(struct folio *folio)
+{
+	struct address_space *mapping = READ_ONCE(folio->mapping);
+
+	rcu_read_lock();
+
+	return mapping == READ_ONCE(folio->mapping);
+}
+
+static void unlock_rcu(void)
+{
+	rcu_read_unlock();
+}
+#else
+static bool stabilise_mapping_rcu(struct folio *)
+{
+	return true;
+}
+
+static void unlock_rcu(void)
+{
+}
+#endif
+
+/*
+ * Used in the GUP-fast path to determine whether a FOLL_PIN | FOLL_LONGTERM |
+ * FOLL_WRITE pin is permitted for a specific folio.
+ *
+ * This assumes the folio is stable and pinned.
+ *
+ * Writing to pinned file-backed dirty tracked folios is inherently problematic
+ * (see comment describing the writeable_file_mapping_allowed() function). We
+ * therefore try to avoid the most egregious case of a long-term mapping doing
+ * so.
+ *
+ * This function cannot be as thorough as that one as the VMA is not available
+ * in the fast path, so instead we whitelist known good cases.
+ *
+ * The folio is stable, but the mapping might not be. When truncating for
+ * instance, a zap is performed which triggers TLB shootdown. IRQs are disabled
+ * so we are safe from an IPI, but some architectures use an RCU lock for this
+ * operation, so we acquire an RCU lock to ensure the mapping is stable.
+ */
+static bool folio_longterm_write_pin_allowed(struct folio *folio)
+{
+	bool ret;
+
+	/* hugetlb mappings do not require dirty tracking. */
+	if (folio_test_hugetlb(folio))
+		return true;
+
+	if (stabilise_mapping_rcu(folio)) {
+		struct address_space *mapping = folio_mapping(folio);
+
+		/*
+		 * Neither anonymous nor shmem-backed folios require
+		 * dirty tracking.
+		 */
+		ret = folio_test_anon(folio) ||
+			(mapping && shmem_mapping(mapping));
+	} else {
+		/* If the mapping is unstable, fallback to the slow path. */
+		ret = false;
+	}
+
+	unlock_rcu();
+
+	return ret;
+}
+
 /**
  * try_grab_folio() - Attempt to get or pin a folio.
  * @page:    pointer to page to be grabbed
@@ -123,6 +195,8 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
  */
 struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 {
+	bool is_longterm = flags & FOLL_LONGTERM;
+
 	if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)))
 		return NULL;
 
@@ -136,8 +210,7 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 	 * right zone, so fail and let the caller fall back to the slow
 	 * path.
 	 */
-	if (unlikely((flags & FOLL_LONGTERM) &&
-		     !is_longterm_pinnable_page(page)))
+	if (unlikely(is_longterm && !is_longterm_pinnable_page(page)))
 		return NULL;
 
 	/*
@@ -148,6 +221,16 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 	if (!folio)
 		return NULL;
 
+	/*
+	 * Can this folio be safely pinned? We need to perform this
+	 * check after the folio is stabilised.
+	 */
+	if ((flags & FOLL_WRITE) && is_longterm &&
+	    !folio_longterm_write_pin_allowed(folio)) {
+		folio_put_refs(folio, refs);
+		return NULL;
+	}
+
 	/*
 	 * When pinning a large folio, use an exact count to track it.
 	 *
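
For illustration, here is a minimal userspace sketch (not part of the
patch) of the kind of caller this series affects. io_uring fixed buffer
registration takes a FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE pin on the
registered memory, so registering a shared, writable, file-backed mapping
is expected to be refused once the series is in place. The file name and
sizes are arbitrary, and the exact errno returned depends on the
slow-path check, so it is left unspecified here:

#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct iovec iov;
	int fd, ret;

	fd = open("testfile", O_RDWR | O_CREAT, 0600);
	if (fd < 0 || ftruncate(fd, 4096) != 0)
		return 1;

	/* A shared, writable, file-backed mapping: dirty-tracked. */
	iov.iov_base = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
	iov.iov_len = 4096;
	if (iov.iov_base == MAP_FAILED)
		return 1;

	if (io_uring_queue_init(8, &ring, 0) != 0)
		return 1;

	/*
	 * Registration long-term pins the buffer for writing; with this
	 * series the pin is refused, so a negative errno is expected here.
	 */
	ret = io_uring_register_buffers(&ring, &iov, 1);
	printf("io_uring_register_buffers: %d\n", ret);

	io_uring_queue_exit(&ring);
	return 0;
}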
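
The same behaviour, seen from an in-kernel caller. This is a sketch only:
demo_longterm_pin() is a hypothetical driver function, not an existing
API. pin_user_pages_fast() tries GUP-fast first; with this patch,
try_grab_folio() refuses a FOLL_WRITE | FOLL_LONGTERM pin on a
file-backed (non-shmem, non-hugetlb) folio, so the call falls back
internally to the slow path, where the VMA-based
writeable_file_mapping_allowed() check from earlier in this series is
applied:

#include <linux/mm.h>

static int demo_longterm_pin(unsigned long uaddr, struct page **pages,
			     int nr_pages)
{
	int pinned;

	/*
	 * GUP-fast is attempted first; a rejected long-term writable pin
	 * of a file-backed folio causes an internal fallback to the slow
	 * path, which performs the thorough VMA-based check.
	 */
	pinned = pin_user_pages_fast(uaddr, nr_pages,
				     FOLL_WRITE | FOLL_LONGTERM, pages);
	if (pinned < 0)
		return pinned;	/* e.g. a rejected file-backed mapping */

	if (pinned != nr_pages) {
		unpin_user_pages(pages, pinned);
		return -EFAULT;
	}

	return 0;
}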