From patchwork Fri Nov 4 12:48:25 2016
From: akash.goel@intel.com
To: intel-gfx@lists.freedesktop.org
Date: Fri, 4 Nov 2016 18:18:25 +0530
Message-Id: <1478263706-24783-1-git-send-email-akash.goel@intel.com>
In-Reply-To: <1476976532.3002.6.camel@linux.intel.com>
References: <1476976532.3002.6.camel@linux.intel.com>
Cc: Hugh Dickins, Akash Goel, linux-mm@kvack.org, Sourab Gupta,
 linux-kernel@vger.kernel.org
Subject: [Intel-gfx] [PATCH 1/2] shmem: Support for registration of
 driver/file owner specific ops

From: Chris Wilson

This provides support for drivers or shmem file owners to register a set of
callbacks, which can be invoked from the address_space operations implemented
by shmem. This allows the file owners to hook into the shmem address_space
operations and perform extra/custom work in addition to the default handling.
The private_data field of the address_space struct is used to store the
pointer to the driver-specific ops. Currently only one op, migratepage, is
defined, but the structure can be extended on an as-needed basis.
The need for driver-specific operations arises because some operations (like
migratepage) cannot be handled effectively within shmem alone and also need
driver-specific handling. Specifically, i915.ko would like to participate in
migratepage().

i915.ko uses shmemfs to provide swappable backing storage for its user
objects, but when those objects are in use by the GPU it must pin the entire
object until the GPU is idle. As a result, large chunks of memory can be
arbitrarily withdrawn from page migration, resulting in premature
out-of-memory due to fragmentation. However, if i915.ko can receive the
migratepage() request, it can then flush the object from the GPU, remove its
pin and thus enable the migration.

Since gfx allocations are one of the major consumers of system memory, it is
imperative to have such a mechanism to deal effectively with fragmentation,
hence the provision for initiating driver-specific actions during
address_space operations.

v2:
- Drop dev_ prefix from the members of shmem_dev_info structure. (Joonas)
- Change the return type of shmem_set_device_op() to void and remove the
  check for pre-existing data. (Joonas)
- Rename shmem_set_device_op() to shmem_set_dev_info() to be consistent
  with shmem_dev_info structure. (Joonas)

Cc: Hugh Dickins
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Sourab Gupta
Signed-off-by: Akash Goel
Reviewed-by: Chris Wilson
---
 include/linux/shmem_fs.h | 13 +++++++++++++
 mm/shmem.c               | 17 ++++++++++++++++-
 2 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index ff078e7..22796a0 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -39,11 +39,24 @@ struct shmem_sb_info {
 	unsigned long shrinklist_len;	/* Length of shrinklist */
 };
 
+struct shmem_dev_info {
+	void *private_data;
+	int (*migratepage)(struct address_space *mapping,
+			   struct page *newpage, struct page *page,
+			   enum migrate_mode mode, void *dev_priv_data);
+};
+
 static inline struct shmem_inode_info *SHMEM_I(struct inode *inode)
 {
 	return container_of(inode, struct shmem_inode_info, vfs_inode);
 }
 
+static inline void shmem_set_dev_info(struct address_space *mapping,
+				      struct shmem_dev_info *info)
+{
+	mapping->private_data = info;
+}
+
 /*
  * Functions in mm/shmem.c called directly from elsewhere:
  */
diff --git a/mm/shmem.c b/mm/shmem.c
index ad7813d..bf71ddd 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1290,6 +1290,21 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	return 0;
 }
 
+#ifdef CONFIG_MIGRATION
+static int shmem_migratepage(struct address_space *mapping,
+			     struct page *newpage, struct page *page,
+			     enum migrate_mode mode)
+{
+	struct shmem_dev_info *dev_info = mapping->private_data;
+
+	if (dev_info && dev_info->migratepage)
+		return dev_info->migratepage(mapping, newpage, page,
+					     mode, dev_info->private_data);
+
+	return migrate_page(mapping, newpage, page, mode);
+}
+#endif
+
 #if defined(CONFIG_NUMA) && defined(CONFIG_TMPFS)
 static void shmem_show_mpol(struct seq_file *seq, struct mempolicy *mpol)
 {
@@ -3654,7 +3669,7 @@ static void shmem_destroy_inodecache(void)
 	.write_end = shmem_write_end,
 #endif
 #ifdef CONFIG_MIGRATION
-	.migratepage = migrate_page,
+	.migratepage = shmem_migratepage,
 #endif
 	.error_remove_page = generic_error_remove_page,
 };
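
For reviewers, a minimal driver-side sketch of how a file owner might consume
this hook. It is not part of the patch: the my_drv_* names and the
my_drv_object struct are made up for illustration (i915's real hook-up is in
patch 2/2), and the callback here simply falls back to migrate_page(), the
same default used by shmem_migratepage() above.

```c
/* Hypothetical usage sketch only; my_drv_* names are illustrative. */
#include <linux/fs.h>
#include <linux/migrate.h>
#include <linux/shmem_fs.h>

struct my_drv_object {
	struct file *filp;		/* shmem-backed file */
	bool pinned;			/* pages pinned for device use */
	struct shmem_dev_info dev_info;	/* must outlive the mapping's use */
};

/* Invoked from shmem_migratepage() when the core asks to move a page. */
static int my_drv_migratepage(struct address_space *mapping,
			      struct page *newpage, struct page *page,
			      enum migrate_mode mode, void *dev_priv_data)
{
	struct my_drv_object *obj = dev_priv_data;

	/*
	 * Driver-specific policy goes here: e.g. refuse the migration while
	 * the device still holds a pin on the object's pages.
	 */
	if (obj->pinned)
		return -EBUSY;

	/* Otherwise defer to the default shmem behaviour. */
	return migrate_page(mapping, newpage, page, mode);
}

static void my_drv_register_shmem_ops(struct my_drv_object *obj)
{
	obj->dev_info.private_data = obj;
	obj->dev_info.migratepage = my_drv_migratepage;
	shmem_set_dev_info(obj->filp->f_mapping, &obj->dev_info);
}
```

Since mapping->private_data only stores a pointer, the shmem_dev_info must
stay valid for as long as the mapping may call back into it; embedding it in
the driver object, as above, is one way to guarantee that.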