From patchwork Wed May 27 19:49:53 2020
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11573847
From: John Hubbard
To: Andrew Morton
CC: Souptick Joarder, LKML, John Hubbard, Daniel Vetter, Jérôme Glisse,
    Vlastimil Babka, Jan Kara, Dave Chinner, Jonathan Corbet
Subject: [PATCH] mm/gup: update pin_user_pages.rst for "case 3" (mmu notifiers)
Date: Wed, 27 May 2020 12:49:53 -0700
Message-ID: <20200527194953.11130-1-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.26.2
X-Mailing-List: linux-fsdevel@vger.kernel.org

Update case 3 so that it covers the use of mmu notifiers, for hardware that
does, or does not, have replayable page faults. Also, elaborate on case 4
slightly, as it was quite cryptic.
Cc: Daniel Vetter
Cc: Jérôme Glisse
Cc: Vlastimil Babka
Cc: Jan Kara
Cc: Dave Chinner
Cc: Jonathan Corbet
Cc: linux-doc@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: John Hubbard
---
 Documentation/core-api/pin_user_pages.rst | 33 +++++++++++++++++----------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/Documentation/core-api/pin_user_pages.rst b/Documentation/core-api/pin_user_pages.rst
index 2e939ff10b86..4675b04e8829 100644
--- a/Documentation/core-api/pin_user_pages.rst
+++ b/Documentation/core-api/pin_user_pages.rst
@@ -148,23 +148,28 @@ NOTE: Some pages, such as DAX pages, cannot be pinned with longterm pins. That's
 because DAX pages do not have a separate page cache, and so "pinning" implies
 locking down file system blocks, which is not (yet) supported in that way.
 
-CASE 3: Hardware with page faulting support
---------------------------------------------
-Here, a well-written driver doesn't normally need to pin pages at all. However,
-if the driver does choose to do so, it can register MMU notifiers for the range,
-and will be called back upon invalidation. Either way (avoiding page pinning, or
-using MMU notifiers to unpin upon request), there is proper synchronization with
-both filesystem and mm (page_mkclean(), munmap(), etc).
-
-Therefore, neither flag needs to be set.
-
-In this case, ideally, neither get_user_pages() nor pin_user_pages() should be
-called. Instead, the software should be written so that it does not pin pages.
-This allows mm and filesystems to operate more efficiently and reliably.
+CASE 3: MMU notifier registration, with or without page faulting hardware
+--------------------------------------------------------------------------
+Device drivers can pin pages via get_user_pages*(), and register for mmu
+notifier callbacks for the memory range. Then, upon receiving a notifier
+"invalidate range" callback, stop the device from using the range, and unpin
+the pages. There may be other possible schemes, such as for example explicitly
+synchronizing against pending IO, that accomplish approximately the same thing.
+
+Or, if the hardware supports replayable page faults, then the device driver can
+avoid pinning entirely (this is ideal), as follows: register for mmu notifier
+callbacks as above, but instead of stopping the device and unpinning in the
+callback, simply remove the range from the device's page tables.
+
+Either way, as long as the driver unpins the pages upon mmu notifier callback,
+then there is proper synchronization with both filesystem and mm
+(page_mkclean(), munmap(), etc). Therefore, neither flag needs to be set.
 
 CASE 4: Pinning for struct page manipulation only
 --------------------------------------------------
-Here, normal GUP calls are sufficient, so neither flag needs to be set.
+If only struct page data (as opposed to the actual memory contents that a page
+is tracking) is affected, then normal GUP calls are sufficient, and neither flag
+needs to be set.
 
 page_maybe_dma_pinned(): the whole point of pinning
 ====================================================
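
For readers following along, here is a rough, illustrative sketch (not part of
the patch itself) of what the CASE 3 scheme described above can look like,
using the mmu interval notifier API as it exists around v5.7. The
get_user_pages_fast() and mmu_interval_notifier_*() calls are existing kernel
APIs; everything prefixed with my_ (my_range, my_setup_range(),
my_device_stop_dma()) is a hypothetical driver-side placeholder, and error
handling is abbreviated:

/*
 * Illustrative sketch only: one possible shape of the "CASE 3" scheme,
 * i.e. plain GUP references released from an mmu notifier callback.
 */
#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>
#include <linux/sched.h>
#include <linux/slab.h>

struct my_range {
	struct mmu_interval_notifier notifier;
	struct mutex lock;		/* protects pages[] and nr_pages */
	struct page **pages;
	int nr_pages;
};

/* Hypothetical hook: tell the device to stop using the range (stub here). */
static void my_device_stop_dma(struct my_range *r)
{
	/* e.g. tear down device mappings and quiesce DMA for this range */
}

static bool my_invalidate(struct mmu_interval_notifier *mni,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	struct my_range *r = container_of(mni, struct my_range, notifier);
	int i;

	if (!mmu_notifier_range_blockable(range))
		return false;

	mutex_lock(&r->lock);
	mmu_interval_set_seq(mni, cur_seq);

	/* Stop the device from using the range, then drop the page refs. */
	my_device_stop_dma(r);
	for (i = 0; i < r->nr_pages; i++)
		put_page(r->pages[i]);
	r->nr_pages = 0;
	mutex_unlock(&r->lock);

	return true;
}

static const struct mmu_interval_notifier_ops my_notifier_ops = {
	.invalidate = my_invalidate,
};

static int my_setup_range(struct my_range *r, unsigned long start,
			  int nr_pages)
{
	int got, ret;

	mutex_init(&r->lock);
	r->nr_pages = 0;
	r->pages = kcalloc(nr_pages, sizeof(*r->pages), GFP_KERNEL);
	if (!r->pages)
		return -ENOMEM;

	/* Register for "invalidate range" callbacks on this range. */
	ret = mmu_interval_notifier_insert(&r->notifier, current->mm, start,
					   (unsigned long)nr_pages << PAGE_SHIFT,
					   &my_notifier_ops);
	if (ret)
		goto out_free;

	/*
	 * Neither FOLL_PIN nor FOLL_LONGTERM is needed: a plain
	 * get_user_pages*() reference is enough, because the notifier
	 * callback above releases it upon invalidation.  A real driver
	 * would also use mmu_interval_read_begin()/mmu_interval_read_retry()
	 * around programming the device, to catch a racing invalidation.
	 */
	got = get_user_pages_fast(start, nr_pages, FOLL_WRITE, r->pages);
	if (got < 0) {
		ret = got;
		goto out_remove;
	}

	mutex_lock(&r->lock);
	r->nr_pages = got;
	mutex_unlock(&r->lock);
	return 0;

out_remove:
	mmu_interval_notifier_remove(&r->notifier);
out_free:
	kfree(r->pages);
	return ret;
}

At teardown, such a driver would call mmu_interval_notifier_remove() and
release any pages still held; since neither FOLL_PIN nor FOLL_LONGTERM is
used, plain put_page() balances the get_user_pages_fast() references.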