[1/2] shmem: export shmem_unlock_mapping

Message ID 20181016174300.197906-2-vovoy@chromium.org (mailing list archive)
State New, archived
Series shmem, drm/i915: Mark pinned shmemfs pages as unevictable

Commit Message

Kuo-Hsin Yang Oct. 16, 2018, 5:42 p.m. UTC
By exporting this function, drivers can mark and unmark a shmemfs address
space as unevictable as follows:

1. Mark an address space as unevictable with mapping_set_unevictable();
   vmscan will then move its pages to the unevictable list.
2. Mark the address space as evictable again with
   mapping_clear_unevictable(), and move its pages back to the evictable
   list with shmem_unlock_mapping().
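
For illustration, a minimal sketch of the intended calling pattern (the
driver_pin_pages()/driver_unpin_pages() wrappers are hypothetical names,
not part of this series):

#include <linux/pagemap.h>
#include <linux/shmem_fs.h>

/* Pin: mark the mapping unevictable; vmscan moves its pages to the
 * unevictable list as it encounters them. */
static void driver_pin_pages(struct address_space *mapping)
{
	mapping_set_unevictable(mapping);
}

/* Unpin: clear the flag first, then rescue pages already stranded on
 * the unevictable list back onto the evictable lists. */
static void driver_unpin_pages(struct address_space *mapping)
{
	mapping_clear_unevictable(mapping);
	shmem_unlock_mapping(mapping);
}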

Signed-off-by: Kuo-Hsin Yang <vovoy@chromium.org>
---
 Documentation/vm/unevictable-lru.rst | 4 +++-
 mm/shmem.c                           | 2 ++
 2 files changed, 5 insertions(+), 1 deletion(-)

Patch

diff --git a/Documentation/vm/unevictable-lru.rst b/Documentation/vm/unevictable-lru.rst
index fdd84cb8d511..a812fb55136d 100644
--- a/Documentation/vm/unevictable-lru.rst
+++ b/Documentation/vm/unevictable-lru.rst
@@ -143,7 +143,7 @@ using a number of wrapper functions:
 	Query the address space, and return true if it is completely
 	unevictable.
 
-These are currently used in two places in the kernel:
+These are currently used in three places in the kernel:
 
  (1) By ramfs to mark the address spaces of its inodes when they are created,
      and this mark remains for the life of the inode.
@@ -154,6 +154,8 @@ These are currently used in two places in the kernel:
      swapped out; the application must touch the pages manually if it wants to
      ensure they're in memory.
 
+ (3) By the i915 driver to mark a pinned address space until it is unpinned.
+
 
 Detecting Unevictable Pages
 ---------------------------
diff --git a/mm/shmem.c b/mm/shmem.c
index 446942677cd4..d1ce34c09df6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -786,6 +786,7 @@ void shmem_unlock_mapping(struct address_space *mapping)
 		cond_resched();
 	}
 }
+EXPORT_SYMBOL_GPL(shmem_unlock_mapping);
 
 /*
  * Remove range of pages and swap entries from radix tree, and free them.
@@ -3874,6 +3875,7 @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
 void shmem_unlock_mapping(struct address_space *mapping)
 {
 }
+EXPORT_SYMBOL_GPL(shmem_unlock_mapping);
 
 #ifdef CONFIG_MMU
 unsigned long shmem_get_unmapped_area(struct file *file,