From patchwork Mon Jul 2 00:56:50 2018
Subject: [PATCH v2 2/6] mm: introduce page->dma_pinned_flags, _count
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe,
    Dan Williams, Jan Kara
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org,
    John Hubbard
Date: Sun, 1 Jul 2018 17:56:50 -0700
Message-Id: <20180702005654.20369-3-jhubbard@nvidia.com>
In-Reply-To: <20180702005654.20369-1-jhubbard@nvidia.com>
References: <20180702005654.20369-1-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.18.0
From: John Hubbard <jhubbard@nvidia.com>

Add two struct page fields that, combined, are unioned with
struct page->lru. There is no change in the size of struct page.
These new fields are for type safety and clarity.

Also add page flag accessors to test, set and clear the new
page->dma_pinned_flags field.

The page->dma_pinned_count field will be used in upcoming patches.

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/mm_types.h   | 22 ++++++++++++-----
 include/linux/page-flags.h | 50 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 66 insertions(+), 6 deletions(-)
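Notes (illustration only, not part of the commit): the new fields rely on
page->dma_pinned_flags occupying the same word as page->lru.next, and on
list pointers being at least 4-byte aligned, so the low bits of that word
are free to use as flags. The following minimal, standalone userspace
sketch models that overlay; struct fake_page is a simplified stand-in for
struct page, invented here purely for the demonstration.

/* Standalone userspace model -- NOT kernel code. fake_page is a
 * hypothetical, simplified stand-in for struct page.
 * Build with: cc -std=c11 overlay_demo.c
 */
#include <assert.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define PAGE_DMA_PINNED		0x2UL
#define PAGE_DMA_PINNED_FLAGS	(PAGE_DMA_PINNED)

struct fake_page {
	union {
		struct list_head lru;
		struct {
			unsigned long dma_pinned_flags;
			int dma_pinned_count;	/* atomic_t in the kernel */
		};
	};
};

int main(void)
{
	struct fake_page page;

	/* Setting the flag writes the word that aliases lru.next. */
	page.dma_pinned_flags = PAGE_DMA_PINNED;
	assert((unsigned long)page.lru.next == PAGE_DMA_PINNED);

	/* Re-initializing the list head stores &page.lru (at least
	 * 4-byte aligned, so bits 0 and 1 are clear) into lru.next,
	 * which clears the pinned flag again -- the same effect that
	 * ClearPageDmaPinned() below gets from INIT_LIST_HEAD().
	 */
	page.lru.next = &page.lru;
	page.lru.prev = &page.lru;
	assert((page.dma_pinned_flags & PAGE_DMA_PINNED_FLAGS) == 0);

	printf("dma_pinned_flags overlays lru.next as expected\n");
	return 0;
}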
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 99ce070e7dcb..0ecd29dcd642 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -78,12 +78,22 @@ struct page {
 	 */
 	union {
 		struct {	/* Page cache and anonymous pages */
-			/**
-			 * @lru: Pageout list, eg. active_list protected by
-			 * zone_lru_lock. Sometimes used as a generic list
-			 * by the page owner.
-			 */
-			struct list_head lru;
+			union {
+				/**
+				 * @lru: Pageout list, eg. active_list protected
+				 * by zone_lru_lock. Sometimes used as a
+				 * generic list by the page owner.
+				 */
+				struct list_head lru;
+				/* Used by get_user_pages*(). Pages may not be
+				 * on an LRU while these dma_pinned_* fields
+				 * are in use.
+				 */
+				struct {
+					unsigned long dma_pinned_flags;
+					atomic_t dma_pinned_count;
+				};
+			};
 			/* See page-flags.h for PAGE_MAPPING_FLAGS */
 			struct address_space *mapping;
 			pgoff_t index;		/* Our offset within mapping. */
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 901943e4754b..b694a1a3bdf3 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -420,6 +420,56 @@ static __always_inline int __PageMovable(struct page *page)
 				PAGE_MAPPING_MOVABLE;
 }
 
+/*
+ * page->dma_pinned_flags is protected by the zone_gup_lock, plus the
+ * page->dma_pinned_count field as well.
+ *
+ * Because page->dma_pinned_flags is unioned with page->lru, any page that
+ * uses these flags must NOT be on an LRU. That's partly enforced by
+ * ClearPageDmaPinned, which gives the page back to LRU.
+ *
+ * Because PageDmaPinned also corresponds to PageTail (the lowest bit in
+ * the first union of struct page), and this flag is checked without knowing
+ * whether it is a tail page or a PageDmaPinned page, start the flags at
+ * bit 1 (0x2), rather than bit 0.
+ */
+#define PAGE_DMA_PINNED		0x2
+#define PAGE_DMA_PINNED_FLAGS	(PAGE_DMA_PINNED)
+
+/*
+ * Because these flags are read outside of a lock, ensure visibility between
+ * different threads, by using READ|WRITE_ONCE.
+ */
+static __always_inline int PageDmaPinnedFlags(struct page *page)
+{
+	VM_BUG_ON(page != compound_head(page));
+	return (READ_ONCE(page->dma_pinned_flags) & PAGE_DMA_PINNED_FLAGS) != 0;
+}
+
+static __always_inline int PageDmaPinned(struct page *page)
+{
+	VM_BUG_ON(page != compound_head(page));
+	return (READ_ONCE(page->dma_pinned_flags) & PAGE_DMA_PINNED) != 0;
+}
+
+static __always_inline void SetPageDmaPinned(struct page *page)
+{
+	VM_BUG_ON(page != compound_head(page));
+	WRITE_ONCE(page->dma_pinned_flags, PAGE_DMA_PINNED);
+}
+
+static __always_inline void ClearPageDmaPinned(struct page *page)
+{
+	VM_BUG_ON(page != compound_head(page));
+	VM_BUG_ON_PAGE(!PageDmaPinnedFlags(page), page);
+
+	/* This does a WRITE_ONCE to the lru.next, which is also the
+	 * page->dma_pinned_flags field. So in addition to restoring page->lru,
+	 * this provides visibility to other threads.
+	 */
+	INIT_LIST_HEAD(&page->lru);
+}
+
 #ifdef CONFIG_KSM
 /*
  * A KSM page is one of those write-protected "shared pages" or "merged pages"
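
A second standalone sketch (again illustration only, nothing below is
kernel code) of the bit-numbering comment above: tail pages of a compound
page store the head page's address with bit 0 set, in the same first word
of struct page that dma_pinned_flags occupies, and a PageTail()-style
check tests only bit 0. Starting the DMA-pinned flags at bit 1 therefore
keeps a pinned page from being misread as a tail page.

/* Standalone userspace model -- NOT kernel code. Shows why
 * PAGE_DMA_PINNED is bit 1, not bit 0. Build with: cc -std=c11 bit0_demo.c
 */
#include <assert.h>
#include <stdbool.h>

#define PAGE_DMA_PINNED	0x2UL	/* bit 1, as in the patch */

/* Model of the first word of struct page; a PageTail()-style check
 * tests only bit 0, which tail pages set alongside the head pointer.
 */
static bool page_tail(unsigned long first_word)
{
	return (first_word & 1UL) != 0;
}

int main(void)
{
	/* Hypothetical, suitably aligned head-page address. */
	unsigned long head = 0x12345000UL;

	assert(page_tail(head | 1UL));		/* a real tail page */
	assert(!page_tail(PAGE_DMA_PINNED));	/* pinned page: bit 1 only */
	assert(page_tail(0x1UL));		/* a flag at bit 0 would collide */

	return 0;
}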