From patchwork Tue Jun 15 11:03:53 2021
From: Jann Horn <jannh@google.com>
Date: Tue, 15 Jun 2021 13:03:53 +0200
Subject: page refcount race between prep_compound_gigantic_page() and __page_cache_add_speculative()?
To: Linux-MM
Cc: kernel list, Youquan Song, Andrea Arcangeli, Jan Kara, Mike Kravetz, John Hubbard, "Kirill A. Shutemov"

== short summary ==
sysfs/sysctl writes can invoke prep_compound_gigantic_page(), which
forcibly zeroes the refcount of a page whose refcount is >= 1. In the
extremely rare case where the refcount is > 1 because of a temporary
reference from a concurrent __page_cache_add_speculative(), stuff
will probably blow up.

Because of John Hubbard's question on the "[PATCH v2] mm/gup: fix
try_grab_compound_head() race with split_huge_page()" thread
(https://lore.kernel.org/linux-mm/50d828d1-2ce6-21b4-0e27-fb15daa77561@nvidia.com/),
I was looking around in related code and stumbled over this old
commit, whose changes are still present in the current kernel and
which looks wrong to me.
I'm not currently planning to try to fix this (because I'm not
familiar with the compaction code and its interaction with the page
allocator); so if someone who is more familiar with this stuff wants
to pick it up, feel free to do so.

commit 58a84aa92723d1ac3e1cc4e3b0ff49291663f7e1
Author: Youquan Song
Date:   Thu Dec 8 14:34:18 2011 -0800

    thp: set compound tail page _count to zero

    Commit 70b50f94f1644 ("mm: thp: tail page refcounting fix") keeps all
    page_tail->_count zero at all times.  But the current kernel does not
    set page_tail->_count to zero if a 1GB page is utilized.  So when an
    IOMMU 1GB page is used by KVM, it will result in a kernel oops because
    a tail page's _count does not equal zero.

      kernel BUG at include/linux/mm.h:386!
      invalid opcode: 0000 [#1] SMP
      Call Trace:
       gup_pud_range+0xb8/0x19d
       get_user_pages_fast+0xcb/0x192
       ? trace_hardirqs_off+0xd/0xf
       hva_to_pfn+0x119/0x2f2
       gfn_to_pfn_memslot+0x2c/0x2e
       kvm_iommu_map_pages+0xfd/0x1c1
       kvm_iommu_map_memslots+0x7c/0xbd
       kvm_iommu_map_guest+0xaa/0xbf
       kvm_vm_ioctl_assigned_device+0x2ef/0xa47
       kvm_vm_ioctl+0x36c/0x3a2
       do_vfs_ioctl+0x49e/0x4e4
       sys_ioctl+0x5a/0x7c
       system_call_fastpath+0x16/0x1b
      RIP  gup_huge_pud+0xf2/0x159

    Signed-off-by: Youquan Song
    Reviewed-by: Andrea Arcangeli
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bb28a5f9db8d..73f17c0293c0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -576,6 +576,7 @@ static void prep_compound_gigantic_page(struct page *page, unsigned long order)
 	__SetPageHead(page);
 	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
 		__SetPageTail(p);
+		set_page_count(p, 0);
 		p->first_page = page;
 	}
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9dd443d89d8b..850009a7101e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -356,8 +356,8 @@ void prep_compound_page(struct page *page, unsigned long order)
 	__SetPageHead(page);
 	for (i = 1; i < nr_pages; i++) {
 		struct page *p = page + i;
-
 		__SetPageTail(p);
+		set_page_count(p, 0);
 		p->first_page = page;
 	}

__page_cache_add_speculative() can run on pages that have already
been allocated, and the only thing that can stop it from temporarily
lifting the page refcount is that refcount being zero.
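To make that window concrete: in current kernels,
__page_cache_add_speculative() boils down to
page_ref_add_unless(page, count, 0), i.e. an increment that only
fails if the refcount is already zero, while
prep_compound_gigantic_page() does a blind store of zero. The
following is a minimal userspace model of the bad interleaving (my
own sketch, not kernel code: refcount stands in for
page->_refcount, and speculative_get()/forcibly_zero() are made-up
names for the two sides):

/* cc -std=c11 race-model.c */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int refcount = 1;	/* tail page as left by post_alloc_hook() */

/* Model of page_ref_add_unless(page, 1, 0): increment unless zero. */
static bool speculative_get(void)
{
	int old = atomic_load(&refcount);

	do {
		if (old == 0)
			return false;	/* page looks free, lookup backs off */
	} while (!atomic_compare_exchange_weak(&refcount, &old, old + 1));
	return true;
}

/* Model of prep_compound_gigantic_page()'s set_page_count(p, 0). */
static void forcibly_zero(void)
{
	atomic_store(&refcount, 0);	/* blind store, ignores concurrent refs */
}

int main(void)
{
	/* The bad interleaving: the speculative lookup wins the race. */
	bool got_ref = speculative_get();	/* refcount: 1 -> 2 */

	forcibly_zero();			/* refcount: 2 -> 0 (!) */
	/*
	 * The speculative holder now owns a reference that the refcount
	 * no longer records; its eventual put_page() would drop the
	 * count below zero and/or free a page someone else still uses.
	 */
	printf("got_ref=%d, refcount=%d\n", got_ref, atomic_load(&refcount));
	return 0;
}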
So if these set_page_count() calls have any effect (outside __init
code), and the refcount is not zero when they occur, then we can have
a race where a refcount is forcibly zeroed while
__page_cache_add_speculative() is holding temporary references; and
then we can end up with a use-after-free of struct page.

As far as I can tell, on the normal compound page allocation path
(prep_compound_page()), the whole compound page comes fresh off the
allocator freelist (except for some __init logic), and only the
refcount of the head page has been initialized in post_alloc_hook();
so all of its tail pages are guaranteed to have a zero refcount. On
that path, the proper fix is probably to just replace the
set_page_count() call with a VM_BUG_ON_PAGE() (see the first sketch
at the end of this mail).

The messier path, as the original commit describes, is "gigantic"
page allocation. In that case, we go through the following path (if
we ignore CMA):

alloc_fresh_huge_page():
  alloc_gigantic_page()
    alloc_contig_pages()
      __alloc_contig_pages()
        alloc_contig_range()
          isolate_freepages_range()
            split_map_pages()
              post_alloc_hook() [FOR EVERY PAGE]
                set_page_refcounted()
                  set_page_count(page, 1)
  prep_compound_gigantic_page()
    set_page_count(p, 0) [FOR EVERY TAIL PAGE]

So all the tail pages are initially allocated with refcount 1 by the
page allocator, and then we overwrite those refcounts with zeroes
(see the second sketch at the end of this mail for what a non-racy
version of that might look like).

Luckily, the only non-__init codepath that can get here is
__nr_hugepages_store_common(), which is only invoked from privileged
writes to sysfs/sysctls.
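For the prep_compound_page() case, the VM_BUG_ON_PAGE() idea would
amount to roughly this (an untested sketch written against the old
code quoted above, just to illustrate the point):

	__SetPageHead(page);
	for (i = 1; i < nr_pages; i++) {
		struct page *p = page + i;

		/*
		 * The compound page came fresh off the freelist and only
		 * the head page's refcount was initialized in
		 * post_alloc_hook(), so the tail refcount must already be
		 * zero here; assert that instead of blindly overwriting it.
		 */
		VM_BUG_ON_PAGE(page_count(p) != 0, p);
		__SetPageTail(p);
		p->first_page = page;
	}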
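For the gigantic path, where the tail pages really do arrive with
refcount 1, an assertion obviously can't work; the zeroing would have
to be done in a way that notices a concurrent speculative reference.
One conceivable direction (purely my sketch, untested; the function
would also have to grow a return value so callers can back off and
retry) would be page_ref_freeze(), which is an atomic cmpxchg from
the expected count to zero:

static bool prep_compound_gigantic_page(struct page *page, unsigned long order)
{
	int i;
	int nr_pages = 1 << order;
	struct page *p = page + 1;

	__SetPageHead(page);
	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
		__SetPageTail(p);
		/*
		 * The tail page left the allocator with refcount 1.
		 * page_ref_freeze() atomically replaces that 1 with 0 and
		 * fails if a concurrent __page_cache_add_speculative()
		 * has temporarily raised the count, instead of silently
		 * wiping out that reference like set_page_count(p, 0).
		 */
		if (!page_ref_freeze(p, 1))
			return false;	/* caller must back off and retry */
		p->first_page = page;
	}
	return true;
}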