From patchwork Thu May 4 21:27:52 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13231849
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Jason Gunthorpe, Jens Axboe, Matthew Wilcox, Dennis Dalessandro,
    Leon Romanovsky, Christian Benvenuti, Nelson Escobar, Bernard Metzler,
    Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter,
    Bjorn Topel, Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Christian Brauner, Richard Cochran, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, linux-fsdevel@vger.kernel.org,
    linux-perf-users@vger.kernel.org, netdev@vger.kernel.org,
    bpf@vger.kernel.org, Oleg Nesterov, Jason Gunthorpe, John Hubbard,
    Jan Kara, "Kirill A. Shutemov", Pavel Begunkov, Mika Penttila,
    David Hildenbrand, Dave Chinner, Theodore Ts'o, Peter Xu,
    Matthew Rosato, "Paul E. McKenney", Christian Borntraeger,
    Lorenzo Stoakes
Subject: [PATCH v9 2/3] mm/gup: disallow FOLL_LONGTERM GUP-nonfast writing to file-backed mappings
Date: Thu, 4 May 2023 22:27:52 +0100
Message-Id: <7282506742d2390c125949c2f9894722750bb68a.1683235180.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.40.1

Writing to file-backed mappings which require folio dirty tracking using
GUP is a fundamentally broken operation, as kernel write access to GUP
mappings does not adhere to the semantics expected by a file system.

A GUP caller uses the direct mapping to access the folio, which does not
cause write notify to trigger, nor does it enforce that the caller marks
the folio dirty.

The problem arises when, after an initial write to the folio, writeback
results in the folio being cleaned and then the caller, via the GUP
interface, writes to the folio again.

Because this secondary, direct mapping to the folio is used, no write
notify will occur, and if the caller does mark the folio dirty, it will
do so unexpectedly.

For example, consider the following scenario:-

1. A folio is written to via GUP which write-faults the memory, notifying
   the file system and dirtying the folio.
2. Later, writeback is triggered, resulting in the folio being cleaned and
   the PTE being marked read-only.
3. The GUP caller writes to the folio, as it is mapped read/write via the
   direct mapping.
4. The GUP caller, now done with the page, unpins it and sets it dirty
   (though it does not have to).

This results in both data being written to a folio without writenotify,
and the folio being dirtied unexpectedly (if the caller decides to do so).

This issue was first reported by Jan Kara [1] in 2018, where the problem
resulted in file system crashes.

This is only relevant when the mappings are file-backed and the underlying
file system requires folio dirty tracking. File systems which do not, such
as shmem or hugetlb, are not at risk and therefore can be written to
without issue.

Unfortunately this limitation of GUP has been present for some time and
requires future rework of the GUP API in order to provide correct write
access to such mappings.
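To make the above sequence concrete, here is a minimal, hypothetical
driver-style sketch of the kind of caller that runs into this. The function
name and error handling are illustrative only and not taken from any in-tree
user; pin_user_pages_fast() is used purely for brevity, and the same pattern
applies via the slow path that this patch addresses:

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Hypothetical example only; not part of this patch. */
static int demo_longterm_write(unsigned long uaddr)
{
	struct page *page;
	void *kaddr;
	int ret;

	/* Step 1: a writable, long-term pin write-faults the memory. */
	ret = pin_user_pages_fast(uaddr, 1, FOLL_WRITE | FOLL_LONGTERM, &page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	/*
	 * Step 3: possibly long after writeback has cleaned the folio and
	 * write-protected the PTE, write via the kernel mapping. No write
	 * notify reaches the file system.
	 */
	kaddr = kmap_local_page(page);
	memset(kaddr, 0, PAGE_SIZE);
	kunmap_local(kaddr);

	/* Step 4: unpin, optionally dirtying the folio after the fact. */
	unpin_user_pages_dirty_lock(&page, 1, true);
	return 0;
}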
However, for the time being we introduce this check to prevent the most
egregious case of this occurring: use of the FOLL_LONGTERM pin. These
mappings are considerably more likely to be written to after folios are
cleaned and thus simply must not be permitted to do so.

This patch changes only the slow-path GUP functions; a following patch
adapts the GUP-fast path along similar lines.

[1]: https://lore.kernel.org/linux-mm/20180103100430.GE4911@quack2.suse.cz/

Suggested-by: Jason Gunthorpe
Signed-off-by: Lorenzo Stoakes
Reviewed-by: John Hubbard
Reviewed-by: Mika Penttilä
Reviewed-by: Jan Kara
Reviewed-by: Jason Gunthorpe
Acked-by: David Hildenbrand
---
 mm/gup.c | 44 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 43 insertions(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index ff689c88a357..0ea9ebec9547 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -959,16 +959,54 @@ static int faultin_page(struct vm_area_struct *vma,
 	return 0;
 }
 
+/*
+ * Writing to file-backed mappings which require folio dirty tracking using GUP
+ * is a fundamentally broken operation, as kernel write access to GUP mappings
+ * do not adhere to the semantics expected by a file system.
+ *
+ * Consider the following scenario:-
+ *
+ * 1. A folio is written to via GUP which write-faults the memory, notifying
+ *    the file system and dirtying the folio.
+ * 2. Later, writeback is triggered, resulting in the folio being cleaned and
+ *    the PTE being marked read-only.
+ * 3. The GUP caller writes to the folio, as it is mapped read/write via the
+ *    direct mapping.
+ * 4. The GUP caller, now done with the page, unpins it and sets it dirty
+ *    (though it does not have to).
+ *
+ * This results in both data being written to a folio without writenotify, and
+ * the folio being dirtied unexpectedly (if the caller decides to do so).
+ */
+static bool writable_file_mapping_allowed(struct vm_area_struct *vma,
+					  unsigned long gup_flags)
+{
+	/*
+	 * If we aren't pinning then no problematic write can occur. A long term
+	 * pin is the most egregious case so this is the case we disallow.
+	 */
+	if ((gup_flags & (FOLL_PIN | FOLL_LONGTERM)) !=
+	    (FOLL_PIN | FOLL_LONGTERM))
+		return true;
+
+	/*
+	 * If the VMA does not require dirty tracking then no problematic write
+	 * can occur either.
+	 */
+	return !vma_needs_dirty_tracking(vma);
+}
+
 static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 {
 	vm_flags_t vm_flags = vma->vm_flags;
 	int write = (gup_flags & FOLL_WRITE);
 	int foreign = (gup_flags & FOLL_REMOTE);
+	bool vma_anon = vma_is_anonymous(vma);
 
 	if (vm_flags & (VM_IO | VM_PFNMAP))
 		return -EFAULT;
 
-	if (gup_flags & FOLL_ANON && !vma_is_anonymous(vma))
+	if ((gup_flags & FOLL_ANON) && !vma_anon)
 		return -EFAULT;
 
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
@@ -978,6 +1016,10 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 		return -EFAULT;
 
 	if (write) {
+		if (!vma_anon &&
+		    !writable_file_mapping_allowed(vma, gup_flags))
+			return -EFAULT;
+
 		if (!(vm_flags & VM_WRITE)) {
 			if (!(gup_flags & FOLL_FORCE))
 				return -EFAULT;
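As a caller-visible illustration of the effect of this check (together with
its GUP-fast counterpart in the following patch): io_uring fixed-buffer
registration takes a writable FOLL_LONGTERM pin, so the hypothetical
userspace program below, which assumes liburing is available and uses an
illustrative ext4 file path, would be expected to see registration fail with
-EFAULT for a MAP_SHARED file-backed buffer, while an anonymous or
shmem-backed buffer would still register successfully:

#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	long len = sysconf(_SC_PAGESIZE);
	int fd = open("/mnt/ext4/testfile", O_RDWR | O_CREAT, 0600);
	struct io_uring ring;
	struct iovec iov;
	void *buf;
	int ret;

	if (fd < 0 || ftruncate(fd, len) || io_uring_queue_init(8, &ring, 0))
		return 1;

	/* Shared, writable, file-backed mapping: dirty tracking required. */
	buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED)
		return 1;

	iov.iov_base = buf;
	iov.iov_len = len;

	/* Expected to return -EFAULT once both GUP paths carry the check. */
	ret = io_uring_register_buffers(&ring, &iov, 1);
	printf("register file-backed buffer: %d\n", ret);

	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}

Where such long-term pinned buffers genuinely need to be shared via a file,
shmem (memfd/tmpfs) or hugetlb backing remains permitted, since those file
systems do not require folio dirty tracking.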