From patchwork Fri Mar 2 21:26:37 2018
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10255705
Date: Fri, 2 Mar 2018 13:26:37 -0800
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: kernel-hardening@lists.openwall.com, linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov"
Subject: [RFC] Handle mapcount overflows
Message-ID: <20180302212637.GB671@bombadil.infradead.org>
References: <20180208021112.GB14918@bombadil.infradead.org>
In-Reply-To: <20180208021112.GB14918@bombadil.infradead.org>

Here's my third effort to handle page->_mapcount overflows.  The idea is
to minimise overhead, so we keep a list of users with more than 5000
mappings.  To overflow _mapcount you need more than 2 billion mappings of
the same page, so you'd need 400,000 tasks, each staying below that
threshold, to evade the tracking, and your sysadmin has probably accused
you of forkbombing the system long before then.  Not to mention the 6GB
of RAM you'd consume just in stacks and the 24GB of RAM you'd consume in
page tables ... but I digress.

Let's assume the sysadmin has raised the maximum number of processes to
100,000.  You'd then need to create more than 20,000 mappings per process
to overflow _mapcount, and all of those processes would end up on the
'heavy_users' list.

Not everybody on the heavy_users list is going to be guilty, but if we do
hit an overflow, we look at everybody on the heavy_users list, and anyone
who has the page mapped more than 1000 times gets a SIGSEGV.

I'm not entirely sure how to forcibly tear down a task's mappings, so
I've just left a comment in there to do that.  Looking for feedback on
this approach.
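For illustration only (this is not part of the patch): a minimal userspace
sketch of the kind of workload being defended against.  Every mapping of
the same file page that gets faulted in goes through page_add_file_rmap()
and bumps that page's _mapcount, so a single process can drive the count
up as fast as it can call mmap().  The file path and the 20,000 iteration
count below are arbitrary placeholder values, nothing more.

/*
 * Sketch: create many mappings of the same file page in one process.
 * Each faulted-in mapping increments the page's _mapcount by one.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/tmp/mapcount-demo", O_RDWR | O_CREAT, 0600);
	long page = sysconf(_SC_PAGESIZE);
	volatile char sum = 0;

	if (fd < 0 || ftruncate(fd, page) < 0) {
		perror("setup");
		return EXIT_FAILURE;
	}

	/* 20,000 mappings of page 0 of the file, all in this one mm. */
	for (int i = 0; i < 20000; i++) {
		char *p = mmap(NULL, page, PROT_READ, MAP_SHARED, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			break;
		}
		sum += p[0];	/* fault it in so the rmap entry is created */
	}

	pause();	/* keep the mappings alive for inspection */
	return sum;
}

Run enough copies of something like that and you're in overflow territory,
which is the case the patch below tries to handle.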
diff --git a/mm/internal.h b/mm/internal.h
index 7059a8389194..977852b8329e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -97,6 +97,11 @@ extern void putback_lru_page(struct page *page);
  */
 extern pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
 
+#ifdef CONFIG_64BIT
+extern void mm_mapcount_overflow(struct page *page);
+#else
+static inline void mm_mapcount_overflow(struct page *page) { }
+#endif
 /*
  * in mm/page_alloc.c
  */
diff --git a/mm/mmap.c b/mm/mmap.c
index 9efdc021ad22..575766ec02f8 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1315,6 +1315,115 @@ static inline int mlock_future_check(struct mm_struct *mm,
 	return 0;
 }
 
+#ifdef CONFIG_64BIT
+/*
+ * Machines with more than 2TB of memory can create enough VMAs to overflow
+ * page->_mapcount if they all point to the same page.  32-bit machines do
+ * not need to be concerned.
+ */
+/*
+ * Experimentally determined.  gnome-shell currently uses fewer than
+ * 3000 mappings, so should have zero effect on desktop users.
+ */
+#define mm_track_threshold	5000
+static DEFINE_SPINLOCK(heavy_users_lock);
+static DEFINE_IDR(heavy_users);
+
+static void mmap_track_user(struct mm_struct *mm, int max)
+{
+	struct mm_struct *entry;
+	unsigned int id;
+
+	idr_preload(GFP_KERNEL);
+	spin_lock(&heavy_users_lock);
+	idr_for_each_entry(&heavy_users, entry, id) {
+		if (entry == mm)
+			break;
+		if (entry->map_count < mm_track_threshold)
+			idr_remove(&heavy_users, id);
+	}
+	if (!entry)
+		idr_alloc(&heavy_users, mm, 0, 0, GFP_ATOMIC);
+	spin_unlock(&heavy_users_lock);
+}
+
+static void mmap_untrack_user(struct mm_struct *mm)
+{
+	struct mm_struct *entry;
+	unsigned int id;
+
+	spin_lock(&heavy_users_lock);
+	idr_for_each_entry(&heavy_users, entry, id) {
+		if (entry == mm) {
+			idr_remove(&heavy_users, id);
+			break;
+		}
+	}
+	spin_unlock(&heavy_users_lock);
+}
+
+static void kill_mm(struct task_struct *tsk)
+{
+	/* Tear down the mappings first */
+	do_send_sig_info(SIGKILL, SEND_SIG_FORCED, tsk, true);
+}
+
+static void kill_abuser(struct mm_struct *mm)
+{
+	struct task_struct *tsk;
+
+	for_each_process(tsk)
+		if (tsk->mm == mm)
+			break;
+
+	if (down_write_trylock(&mm->mmap_sem)) {
+		kill_mm(tsk);
+		up_write(&mm->mmap_sem);
+	} else {
+		do_send_sig_info(SIGKILL, SEND_SIG_FORCED, tsk, true);
+	}
+}
+
+void mm_mapcount_overflow(struct page *page)
+{
+	struct mm_struct *entry = current->mm;
+	unsigned int id;
+	struct vm_area_struct *vma;
+	struct address_space *mapping = page_mapping(page);
+	unsigned long pgoff = page_to_pgoff(page);
+	unsigned int count = 0;
+
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff + 1) {
+		if (vma->vm_mm == entry)
+			count++;
+		if (count > 1000)
+			kill_mm(current);
+	}
+
+	rcu_read_lock();
+	idr_for_each_entry(&heavy_users, entry, id) {
+		count = 0;
+
+		vma_interval_tree_foreach(vma, &mapping->i_mmap,
+				pgoff, pgoff + 1) {
+			if (vma->vm_mm == entry)
+				count++;
+			if (count > 1000) {
+				kill_abuser(entry);
+				goto out;
+			}
+		}
+	}
+	if (!entry)
+		panic("No abusers found but mapcount exceeded\n");
+out:
+	rcu_read_unlock();
+}
+#else
+static void mmap_track_user(struct mm_struct *mm, int max) { }
+static void mmap_untrack_user(struct mm_struct *mm) { }
+#endif
+
 /*
  * The caller must hold down_write(&current->mm->mmap_sem).
  */
@@ -1357,6 +1466,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 	/* Too many mappings? */
 	if (mm->map_count > sysctl_max_map_count)
 		return -ENOMEM;
+	if (mm->map_count > mm_track_threshold)
+		mmap_track_user(mm, mm_track_threshold);
 
 	/* Obtain the address to map to. we verify (or select) it and ensure
 	 * that it represents a valid section of the address space.
@@ -2997,6 +3108,8 @@ void exit_mmap(struct mm_struct *mm)
 	/* mm's last user has gone, and its about to be pulled down */
 	mmu_notifier_release(mm);
 
+	mmap_untrack_user(mm);
+
 	if (mm->locked_vm) {
 		vma = mm->mmap;
 		while (vma) {
diff --git a/mm/rmap.c b/mm/rmap.c
index 47db27f8049e..d88acf5c98e9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1190,6 +1190,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 		VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
 		__inc_node_page_state(page, NR_SHMEM_PMDMAPPED);
 	} else {
+		int v;
 		if (PageTransCompound(page) && page_mapping(page)) {
 			VM_WARN_ON_ONCE(!PageLocked(page));
 
@@ -1197,8 +1198,13 @@ void page_add_file_rmap(struct page *page, bool compound)
 			if (PageMlocked(page))
 				clear_page_mlock(compound_head(page));
 		}
-		if (!atomic_inc_and_test(&page->_mapcount))
+		v = atomic_inc_return(&page->_mapcount);
+		if (likely(v > 0))
 			goto out;
+		if (unlikely(v < 0)) {
+			mm_mapcount_overflow(page);
+			goto out;
+		}
 	}
 	__mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
 out:
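For anyone who doesn't have the _mapcount convention loaded: the counter
is biased, with -1 meaning "no mappings", which is why the old code could
use atomic_inc_and_test() to detect the first mapping.  The new code looks
at the post-increment value once: zero means first mapping (fall through
to the NR_FILE_MAPPED accounting), positive means already mapped (fast
path), and negative means the counter has wrapped past INT_MAX, i.e. an
overflow.  Below is a standalone userspace sketch of just that arithmetic;
it is not kernel code, and classify() is a made-up helper name.  In the
kernel the increment and read are a single atomic_inc_return(); the demo
splits them into an add plus a load for simplicity.

/*
 * Standalone sketch (not kernel code) of the sign convention used in the
 * rmap.c hunk above.  A page with no mappings has _mapcount == -1.
 */
#include <limits.h>
#include <stdatomic.h>
#include <stdio.h>

static const char *classify(atomic_int *mapcount)
{
	/* The kernel does this as one atomic_inc_return(); split for clarity. */
	atomic_fetch_add(mapcount, 1);
	int v = atomic_load(mapcount);

	if (v == 0)
		return "first mapping: account NR_FILE_MAPPED";
	if (v > 0)
		return "already mapped: fast path, skip accounting";
	return "wrapped negative: call mm_mapcount_overflow()";
}

int main(void)
{
	atomic_int mapcount = -1;		/* no mappings yet */

	puts(classify(&mapcount));		/* first mapping */
	puts(classify(&mapcount));		/* second mapping */

	atomic_store(&mapcount, INT_MAX);	/* pretend ~2 billion mappings exist */
	puts(classify(&mapcount));		/* increment wraps negative: overflow */
	return 0;
}

The bias is what keeps the common case cheap: mapping an already-mapped
page stays a single increment plus a sign test, so this patch adds no
extra atomics on that path.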