From patchwork Thu Feb 8 21:37:43 2018
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10208061
Date: Thu, 8 Feb 2018 13:37:43 -0800
From: Matthew Wilcox
To: Daniel Micay
Cc: Jann Horn, linux-mm@kvack.org, Kernel Hardening, kernel list,
 "Kirill A. Shutemov"
Subject: [RFC] Limit mappings to ten per page per process
Message-ID: <20180208213743.GC3424@bombadil.infradead.org>
References: <20180208021112.GB14918@bombadil.infradead.org>
 <20180208185648.GB9524@bombadil.infradead.org>
 <20180208194235.GA3424@bombadil.infradead.org>
 <20180208202100.GB3424@bombadil.infradead.org>
In-Reply-To: <20180208202100.GB3424@bombadil.infradead.org>
User-Agent: Mutt/1.9.1 (2017-09-22)

On Thu, Feb 08, 2018 at 12:21:00PM -0800, Matthew Wilcox wrote:
> Now that I think about it, though, perhaps the simplest solution is not
> to worry about checking whether _mapcount has saturated, and instead when
> adding a new mmap, check whether this task already has it mapped 10 times.
> If so, refuse the mapping.

That turns out to be quite easy.  Comments on this approach?

diff --git a/mm/mmap.c b/mm/mmap.c
index 9efdc021ad22..fd64ff662117 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1615,6 +1615,34 @@ static inline int accountable_mapping(struct file *file, vm_flags_t vm_flags)
 	return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE;
 }
 
+/**
+ * mmap_max_overlaps - Check whether the process has exceeded its quota of
+ * mappings overlapping this region of the file.
+ * @mm: The memory map for the process creating the mapping.
+ * @file: The file the mapping is coming from.
+ * @pgoff: The start of the mapping in the file.
+ * @count: The number of pages to map.
+ *
+ * Return: %true if this region of the file has too many overlapping mappings
+ * by this process.
+ */
+bool mmap_max_overlaps(struct mm_struct *mm, struct file *file,
+			pgoff_t pgoff, pgoff_t count)
+{
+	unsigned int overlaps = 0;
+	struct vm_area_struct *vma;
+
+	if (!file)
+		return false;
+
+	vma_interval_tree_foreach(vma, &file->f_mapping->i_mmap,
+					pgoff, pgoff + count) {
+		if (vma->vm_mm == mm)
+			overlaps++;
+	}
+
+	return overlaps > 9;
+}
+
 unsigned long mmap_region(struct file *file, unsigned long addr,
 		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
 		struct list_head *uf)
@@ -1640,6 +1668,9 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 			return -ENOMEM;
 	}
 
+	if (mmap_max_overlaps(mm, file, pgoff, len >> PAGE_SHIFT))
+		return -ENOMEM;
+
 	/* Clear old maps */
 	while (find_vma_links(mm, addr, addr + len, &prev, &rb_link,
 			      &rb_parent)) {
diff --git a/mm/mremap.c b/mm/mremap.c
index 049470aa1e3e..27cf5cf9fc0f 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -430,6 +430,10 @@ static struct vm_area_struct *vma_to_resize(unsigned long addr,
 				(new_len - old_len) >> PAGE_SHIFT))
 		return ERR_PTR(-ENOMEM);
 
+	if (mmap_max_overlaps(mm, vma->vm_file, pgoff,
+				(new_len - old_len) >> PAGE_SHIFT))
+		return ERR_PTR(-ENOMEM);
+
 	if (vma->vm_flags & VM_ACCOUNT) {
 		unsigned long charged = (new_len - old_len) >> PAGE_SHIFT;
 		if (security_vm_enough_memory_mm(mm, charged))