[RFC] Limit mappings to ten per page per process

Message ID 20180208213743.GC3424@bombadil.infradead.org (mailing list archive)
State New, archived

Commit Message

Matthew Wilcox Feb. 8, 2018, 9:37 p.m. UTC
On Thu, Feb 08, 2018 at 12:21:00PM -0800, Matthew Wilcox wrote:
> Now that I think about it, though, perhaps the simplest solution is not
> to worry about checking whether _mapcount has saturated, and instead when
> adding a new mmap, check whether this task already has it mapped 10 times.
> If so, refuse the mapping.

That turns out to be quite easy.  Comments on this approach?
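
For concreteness, a minimal userspace sketch of the behaviour being proposed
(the file path is a placeholder, and the failure point assumes the limit of
ten used in the patch below): the first ten mappings of the same file page
succeed, and the eleventh is refused with ENOMEM.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long pg = sysconf(_SC_PAGESIZE);
	int fd = open("/bin/sh", O_RDONLY);
	int i;

	if (fd < 0)
		return 1;
	for (i = 0; i < 11; i++) {
		/* Each iteration maps page 0 of the file again. */
		void *p = mmap(NULL, pg, PROT_READ, MAP_SHARED, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");	/* expected once ten overlaps exist */
			break;
		}
	}
	close(fd);
	return 0;
}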

Comments

Kirill A. Shutemov Feb. 9, 2018, 4:26 a.m. UTC | #1
On Thu, Feb 08, 2018 at 01:37:43PM -0800, Matthew Wilcox wrote:
> On Thu, Feb 08, 2018 at 12:21:00PM -0800, Matthew Wilcox wrote:
> > Now that I think about it, though, perhaps the simplest solution is not
> > to worry about checking whether _mapcount has saturated, and instead when
> > adding a new mmap, check whether this task already has it mapped 10 times.
> > If so, refuse the mapping.
> 
> That turns out to be quite easy.  Comments on this approach?

This *may* break some remap_file_pages() users.

And it may be rather costly for popular binaries. Consider libc.so.
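
For illustration, a hypothetical remap_file_pages() user of the sort this
could break (the file name is made up). Since the nonlinear-mapping removal
in 4.0, each call is emulated by splitting off a separate VMA, so every slot
below aimed at file page 0 becomes another mapping overlapping that page,
and the proposed check would refuse the eleventh.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long pg = sysconf(_SC_PAGESIZE);
	int i, fd = open("data.bin", O_RDWR);	/* placeholder file */
	char *base;

	if (fd < 0)
		return 1;
	base = mmap(NULL, 12 * pg, PROT_READ, MAP_SHARED, fd, 0);
	if (base == MAP_FAILED)
		return 1;
	/* Point every page-sized slot at file page 0. */
	for (i = 0; i < 12; i++)
		if (remap_file_pages(base + i * pg, pg, 0, 0, 0))
			perror("remap_file_pages");
	return 0;
}
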
Matthew Wilcox Feb. 14, 2018, 1:51 p.m. UTC | #2
On Fri, Feb 09, 2018 at 07:26:09AM +0300, Kirill A. Shutemov wrote:
> On Thu, Feb 08, 2018 at 01:37:43PM -0800, Matthew Wilcox wrote:
> > On Thu, Feb 08, 2018 at 12:21:00PM -0800, Matthew Wilcox wrote:
> > > Now that I think about it, though, perhaps the simplest solution is not
> > > to worry about checking whether _mapcount has saturated, and instead when
> > > adding a new mmap, check whether this task already has it mapped 10 times.
> > > If so, refuse the mapping.
> > 
> > That turns out to be quite easy.  Comments on this approach?
> 
> This *may* break some remap_file_pages() users.

We have some?!  ;-)  I don't understand the use case where they want to
map the same page of a file multiple times into the same process.  I mean,
yes, of course, they might ask for it, but I don't understand why they would.
Do you have any insight here?

> And it may be rather costly for popular binaries. Consider libc.so.

We already walk this tree to insert the mapping; this just adds a second
walk of the tree to check which overlapping mappings exist.  I would
expect it to just make the tree cache-hot.

Kirill A. Shutemov Feb. 14, 2018, 2:05 p.m. UTC | #3
On Wed, Feb 14, 2018 at 05:51:41AM -0800, Matthew Wilcox wrote:
> On Fri, Feb 09, 2018 at 07:26:09AM +0300, Kirill A. Shutemov wrote:
> > On Thu, Feb 08, 2018 at 01:37:43PM -0800, Matthew Wilcox wrote:
> > > On Thu, Feb 08, 2018 at 12:21:00PM -0800, Matthew Wilcox wrote:
> > > > Now that I think about it, though, perhaps the simplest solution is not
> > > > to worry about checking whether _mapcount has saturated, and instead when
> > > > adding a new mmap, check whether this task already has it mapped 10 times.
> > > > If so, refuse the mapping.
> > > 
> > > That turns out to be quite easy.  Comments on this approach?
> > 
> > This *may* break some remap_file_pages() users.
> 
> We have some?!  ;-)

I can't prove otherwise :)

> I don't understand the use case where they want to map the same page of
> a file multiple times into the same process.  I mean, yes, of course,
> they might ask for it, but I don't understand why they would.  Do you
> have any insight here?

Some form of data deduplication? Like having repeating chunks stored once
on persistent storage and in the page cache, but put into memory in
"uncompressed" form.

It's not limited to remap_file_pages(). Plain mmap() can be used for this
too.
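
A sketch of the pattern described above, using plain mmap(); chunkstore.bin
and chunk_offset() are hypothetical. Duplicate chunks exist once on disk
and in the page cache but are overlaid many times in the address space, so
under the proposed limit the eleventh overlap of any single file page would
fail with ENOMEM.

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical dedup table: logical chunk i lives at this file offset.
 * Returning 0 for every chunk is the worst case for the proposed limit. */
static off_t chunk_offset(int i)
{
	return 0;
}

int main(void)
{
	long pg = sysconf(_SC_PAGESIZE);
	int i, nr = 64, fd = open("chunkstore.bin", O_RDONLY);
	char *buf;

	if (fd < 0)
		return 1;
	/* Reserve a contiguous window, then overlay one chunk per slot. */
	buf = mmap(NULL, nr * pg, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS,
		   -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	for (i = 0; i < nr; i++)
		mmap(buf + i * pg, pg, PROT_READ, MAP_SHARED | MAP_FIXED,
		     fd, chunk_offset(i));
	return 0;
}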

Patch

diff --git a/mm/mmap.c b/mm/mmap.c
index 9efdc021ad22..fd64ff662117 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1615,6 +1615,34 @@ static inline int accountable_mapping(struct file *file, vm_flags_t vm_flags)
 	return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE;
 }
 
+/**
+ * mmap_max_overlaps - Check the process has not exceeded its quota of mappings.
+ * @mm: The memory map for the process creating the mapping.
+ * @file: The file the mapping is coming from.
+ * @pgoff: The start of the mapping in the file.
+ * @count: The number of pages to map.
+ *
+ * Return: %true if this region of the file has too many overlapping mappings
+ *         by this process.
+ */
+bool mmap_max_overlaps(struct mm_struct *mm, struct file *file,
+			pgoff_t pgoff, pgoff_t count)
+{
+	unsigned int overlaps = 0;
+	struct vm_area_struct *vma;
+
+	if (!file)
+		return false;
+
+	vma_interval_tree_foreach(vma, &file->f_mapping->i_mmap,
+				  pgoff, pgoff + count) {
+		if (vma->vm_mm == mm)
+			overlaps++;
+	}
+
+	return overlaps > 9;
+}
+
 unsigned long mmap_region(struct file *file, unsigned long addr,
 		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
 		struct list_head *uf)
@@ -1640,6 +1668,9 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 			return -ENOMEM;
 	}
 
+	if (mmap_max_overlaps(mm, file, pgoff, len >> PAGE_SHIFT))
+		return -ENOMEM;
+
 	/* Clear old maps */
 	while (find_vma_links(mm, addr, addr + len, &prev, &rb_link,
 			      &rb_parent)) {
diff --git a/mm/mremap.c b/mm/mremap.c
index 049470aa1e3e..27cf5cf9fc0f 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -430,6 +430,10 @@ static struct vm_area_struct *vma_to_resize(unsigned long addr,
 				(new_len - old_len) >> PAGE_SHIFT))
 		return ERR_PTR(-ENOMEM);
 
+	if (mmap_max_overlaps(mm, vma->vm_file, pgoff,
+				(new_len - old_len) >> PAGE_SHIFT))
+		return ERR_PTR(-ENOMEM);
+
 	if (vma->vm_flags & VM_ACCOUNT) {
 		unsigned long charged = (new_len - old_len) >> PAGE_SHIFT;
 		if (security_vm_enough_memory_mm(mm, charged))