From patchwork Thu Mar 5 17:17:59 2015
X-Patchwork-Submitter: Andrea Arcangeli
X-Patchwork-Id: 5948361
From: Andrea Arcangeli
To: qemu-devel@nongnu.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-api@vger.kernel.org, Android Kernel Team
Cc: "Kirill A. Shutemov", Pavel Emelyanov, Sanidhya Kashyap,
    zhang.zhanghailiang@huawei.com, Linus Torvalds, Andres Lagar-Cavilla,
    Dave Hansen, Paolo Bonzini, Rik van Riel, Mel Gorman, Andy Lutomirski,
    Andrew Morton, Sasha Levin, Hugh Dickins, Peter Feiner,
    "Dr. David Alan Gilbert", Christopher Covington, Johannes Weiner,
    Robert Love, Dmitry Adamushko, Neil Brown, Mike Hommey, Taras Glek,
    Jan Kara, KOSAKI Motohiro, Michel Lespinasse, Minchan Kim,
    Keith Packard, "Huangpeng (Peter)", Anthony Liguori, Stefan Hajnoczi,
    Wenchao Xia, Andrew Jones, Juan Quintela
Subject: [PATCH 16/21] userfaultfd: remap_pages: rmap preparation
Date: Thu, 5 Mar 2015 18:17:59 +0100
Message-Id: <1425575884-2574-17-git-send-email-aarcange@redhat.com>
In-Reply-To: <1425575884-2574-1-git-send-email-aarcange@redhat.com>
References: <1425575884-2574-1-git-send-email-aarcange@redhat.com>
List-ID: kvm@vger.kernel.org

As far as the rmap code is concerned, remap_pages only alters page->mapping
and page->index, and it does so while holding the page lock. However, there
are a few places that are allowed to do rmap walks on anon pages without
the page lock (split_huge_page and page_referenced_anon). Because those
places do rmap walks without taking the page lock first, they must be
updated to re-check that page->mapping didn't change after they obtained
the anon_vma lock.
remap_pages takes the anon_vma lock for writing before altering
page->mapping, so if page->mapping is still the same after obtaining the
anon_vma lock (without the page lock), the rmap walk can go ahead safely
(and remap_pages will wait for it to complete before proceeding).
remap_pages serializes against itself with the page lock.

All other places that take the anon_vma lock while holding the mmap_sem
for writing don't need to check whether page->mapping changed after taking
the anon_vma lock, regardless of the page lock, because remap_pages holds
the mmap_sem for reading.

There's one constraint enforced to allow this simplification: the source
pages passed to remap_pages must be mapped in only one vma, but this is
not a limitation when remap_pages is used to handle userland page faults.
The source addresses passed to remap_pages should be set as VM_DONTCOPY
with MADV_DONTFORK, to avoid any risk of the mapcount of the pages
increasing if fork runs in parallel in another thread, before or while
remap_pages runs.

Signed-off-by: Andrea Arcangeli
---
 mm/huge_memory.c | 23 +++++++++++++++++++----
 mm/rmap.c        |  9 +++++++++
 2 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8f1b6a5..1e25cb3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1902,6 +1902,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct anon_vma *anon_vma;
 	int ret = 1;
+	struct address_space *mapping;
 
 	BUG_ON(is_huge_zero_page(page));
 	BUG_ON(!PageAnon(page));
@@ -1913,10 +1914,24 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	 * page_lock_anon_vma_read except the write lock is taken to serialise
 	 * against parallel split or collapse operations.
 	 */
-	anon_vma = page_get_anon_vma(page);
-	if (!anon_vma)
-		goto out;
-	anon_vma_lock_write(anon_vma);
+	for (;;) {
+		mapping = ACCESS_ONCE(page->mapping);
+		anon_vma = page_get_anon_vma(page);
+		if (!anon_vma)
+			goto out;
+		anon_vma_lock_write(anon_vma);
+		/*
+		 * We don't hold the page lock here so
+		 * remap_pages_huge_pmd can change the anon_vma from
+		 * under us until we obtain the anon_vma lock. Verify
+		 * that we obtained the anon_vma lock before
+		 * remap_pages did.
+		 */
+		if (likely(mapping == ACCESS_ONCE(page->mapping)))
+			break;
+		anon_vma_unlock_write(anon_vma);
+		put_anon_vma(anon_vma);
+	}
 	ret = 0;
 
 	if (!PageCompound(page))
diff --git a/mm/rmap.c b/mm/rmap.c
index 5e3e090..5ab2df1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -492,6 +492,7 @@ struct anon_vma *page_lock_anon_vma_read(struct page *page)
 	struct anon_vma *root_anon_vma;
 	unsigned long anon_mapping;
 
+repeat:
 	rcu_read_lock();
 	anon_mapping = (unsigned long) ACCESS_ONCE(page->mapping);
 	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
@@ -530,6 +531,14 @@ struct anon_vma *page_lock_anon_vma_read(struct page *page)
 	rcu_read_unlock();
 	anon_vma_lock_read(anon_vma);
 
+	/* check if remap_anon_pages changed the anon_vma */
+	if (unlikely((unsigned long) ACCESS_ONCE(page->mapping) != anon_mapping)) {
+		anon_vma_unlock_read(anon_vma);
+		put_anon_vma(anon_vma);
+		anon_vma = NULL;
+		goto repeat;
+	}
+
 	if (atomic_dec_and_test(&anon_vma->refcount)) {
 		/*
 		 * Oops, we held the last refcount, release the lock