From patchwork Fri Nov 8 00:08:07 2019
X-Patchwork-Submitter: Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com>
X-Patchwork-Id: 11233919
From: Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com>
To: linux-mm@kvack.org, dan.j.williams@intel.com
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
    mhocko@kernel.org, adobriyan@gmail.com, hch@lst.de,
    longman@redhat.com, sfr@canb.auug.org.au, mst@redhat.com,
    cai@lca.pw, Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
    Junichi Nomura <j-nomura@ce.jp.nec.com>
Subject: [PATCH 1/3] procfs: refactor kpage_*_read() in fs/proc/page.c
Date: Fri, 8 Nov 2019 00:08:07 +0000
Message-ID: <20191108000855.25209-2-t-fukasawa@vx.jp.nec.com>
In-Reply-To: <20191108000855.25209-1-t-fukasawa@vx.jp.nec.com>

kpagecount_read(), kpageflags_read(), and kpagecgroup_read() duplicate
the same read loop. Move the loop into a common function and pass the
per-file page handling in as a callback.

Signed-off-by: Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com>
---
 fs/proc/page.c | 133 +++++++++++++++++++++------------------------------------
 1 file changed, 48 insertions(+), 85 deletions(-)

diff --git a/fs/proc/page.c b/fs/proc/page.c
index 7c952ee..a49b638 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -21,20 +21,19 @@
 #define KPMMASK (KPMSIZE - 1)
 #define KPMBITS (KPMSIZE * BITS_PER_BYTE)
 
-/* /proc/kpagecount - an array exposing page counts
- *
- * Each entry is a u64 representing the corresponding
- * physical page count.
+typedef u64 (*read_page_data_fn_t)(struct page *page);
+
+/*
+ * General function to read various data on pages.
  */
-static ssize_t kpagecount_read(struct file *file, char __user *buf,
-			size_t count, loff_t *ppos)
+static ssize_t kpage_common_read(struct file *file, char __user *buf,
+			size_t count, loff_t *ppos, read_page_data_fn_t read_fn)
 {
 	u64 __user *out = (u64 __user *)buf;
 	struct page *ppage;
 	unsigned long src = *ppos;
 	unsigned long pfn;
 	ssize_t ret = 0;
-	u64 pcount;
 
 	pfn = src / KPMSIZE;
 	count = min_t(size_t, count, (max_pfn * KPMSIZE) - src);
@@ -48,12 +47,7 @@ static ssize_t kpagecount_read(struct file *file, char __user *buf,
 		 */
 		ppage = pfn_to_online_page(pfn);
 
-		if (!ppage || PageSlab(ppage) || page_has_type(ppage))
-			pcount = 0;
-		else
-			pcount = page_mapcount(ppage);
-
-		if (put_user(pcount, out)) {
+		if (put_user(read_fn(ppage), out)) {
 			ret = -EFAULT;
 			break;
 		}
@@ -71,6 +65,30 @@ static ssize_t kpagecount_read(struct file *file, char __user *buf,
 	return ret;
 }
 
+/* /proc/kpagecount - an array exposing page counts
+ *
+ * Each entry is a u64 representing the corresponding
+ * physical page count.
+ */
+
+static u64 page_count_data(struct page *page)
+{
+	u64 pcount;
+
+	if (!page || PageSlab(page) || page_has_type(page))
+		pcount = 0;
+	else
+		pcount = page_mapcount(page);
+
+	return pcount;
+}
+
+static ssize_t kpagecount_read(struct file *file, char __user *buf,
+			size_t count, loff_t *ppos)
+{
+	return kpage_common_read(file, buf, count, ppos, page_count_data);
+}
+
 static const struct file_operations proc_kpagecount_operations = {
 	.llseek = mem_lseek,
 	.read = kpagecount_read,
@@ -203,43 +221,15 @@ u64 stable_page_flags(struct page *page)
 	return u;
 };
 
+static u64 page_flags_data(struct page *page)
+{
+	return stable_page_flags(page);
+}
+
 static ssize_t kpageflags_read(struct file *file, char __user *buf,
 			size_t count, loff_t *ppos)
 {
-	u64 __user *out = (u64 __user *)buf;
-	struct page *ppage;
-	unsigned long src = *ppos;
-	unsigned long pfn;
-	ssize_t ret = 0;
-
-	pfn = src / KPMSIZE;
-	count = min_t(unsigned long, count, (max_pfn * KPMSIZE) - src);
-	if (src & KPMMASK || count & KPMMASK)
-		return -EINVAL;
-
-	while (count > 0) {
-		/*
-		 * TODO: ZONE_DEVICE support requires to identify
-		 * memmaps that were actually initialized.
-		 */
-		ppage = pfn_to_online_page(pfn);
-
-		if (put_user(stable_page_flags(ppage), out)) {
-			ret = -EFAULT;
-			break;
-		}
-
-		pfn++;
-		out++;
-		count -= KPMSIZE;
-
-		cond_resched();
-	}
-
-	*ppos += (char __user *)out - buf;
-	if (!ret)
-		ret = (char __user *)out - buf;
-	return ret;
+	return kpage_common_read(file, buf, count, ppos, page_flags_data);
 }
 
 static const struct file_operations proc_kpageflags_operations = {
@@ -248,49 +238,22 @@ static ssize_t kpageflags_read(struct file *file, char __user *buf,
 };
 
 #ifdef CONFIG_MEMCG
-static ssize_t kpagecgroup_read(struct file *file, char __user *buf,
-			size_t count, loff_t *ppos)
+static u64 page_cgroup_data(struct page *page)
 {
-	u64 __user *out = (u64 __user *)buf;
-	struct page *ppage;
-	unsigned long src = *ppos;
-	unsigned long pfn;
-	ssize_t ret = 0;
 	u64 ino;
 
-	pfn = src / KPMSIZE;
-	count = min_t(unsigned long, count, (max_pfn * KPMSIZE) - src);
-	if (src & KPMMASK || count & KPMMASK)
-		return -EINVAL;
-
-	while (count > 0) {
-		/*
-		 * TODO: ZONE_DEVICE support requires to identify
-		 * memmaps that were actually initialized.
-		 */
-		ppage = pfn_to_online_page(pfn);
-
-		if (ppage)
-			ino = page_cgroup_ino(ppage);
-		else
-			ino = 0;
-
-		if (put_user(ino, out)) {
-			ret = -EFAULT;
-			break;
-		}
-
-		pfn++;
-		out++;
-		count -= KPMSIZE;
+	if (page)
+		ino = page_cgroup_ino(page);
+	else
+		ino = 0;
 
-		cond_resched();
-	}
+	return ino;
+}
 
-	*ppos += (char __user *)out - buf;
-	if (!ret)
-		ret = (char __user *)out - buf;
-	return ret;
+static ssize_t kpagecgroup_read(struct file *file, char __user *buf,
+			size_t count, loff_t *ppos)
+{
+	return kpage_common_read(file, buf, count, ppos, page_cgroup_data);
 }
 
 static const struct file_operations proc_kpagecgroup_operations = {

From patchwork Fri Nov 8 00:08:10 2019
X-Patchwork-Submitter: Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com>
X-Patchwork-Id: 11233921
From: Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com>
To: linux-mm@kvack.org, dan.j.williams@intel.com
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
    mhocko@kernel.org, adobriyan@gmail.com, hch@lst.de,
    longman@redhat.com, sfr@canb.auug.org.au, mst@redhat.com,
    cai@lca.pw, Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
    Junichi Nomura <j-nomura@ce.jp.nec.com>
Subject: [PATCH 2/3] mm: Introduce subsection_dev_map
Date: Fri, 8 Nov 2019 00:08:10 +0000
Message-ID: <20191108000855.25209-3-t-fukasawa@vx.jp.nec.com>
In-Reply-To: <20191108000855.25209-1-t-fukasawa@vx.jp.nec.com>

Currently there is no way to identify a pfn that belongs to ZONE_DEVICE.
A pfn on system memory can be identified with a section-level flag, but
ZONE_DEVICE can be created in units of subsections, so identifying its
pfns requires a subsection-level flag. This patch introduces a new
bitmap, subsection_dev_map, so that we can identify pfns on ZONE_DEVICE.
Also, subsection_dev_map is used to prove that the struct pages in a
subsection have been initialized, since it is set after
memmap_init_zone_device(). By checking subsection_dev_map we can avoid
accessing pages that are still being initialized.

Signed-off-by: Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com>
---
 include/linux/mmzone.h | 19 +++++++++++++++++++
 mm/memremap.c          |  2 ++
 mm/sparse.c            | 32 ++++++++++++++++++++++++++++++++
 3 files changed, 53 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index bda2028..11376c4 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1174,11 +1174,17 @@ static inline unsigned long section_nr_to_pfn(unsigned long sec)
 
 struct mem_section_usage {
 	DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
+#ifdef CONFIG_ZONE_DEVICE
+	DECLARE_BITMAP(subsection_dev_map, SUBSECTIONS_PER_SECTION);
+#endif
 	/* See declaration of similar field in struct zone */
 	unsigned long pageblock_flags[0];
 };
 
 void subsection_map_init(unsigned long pfn, unsigned long nr_pages);
+#ifdef CONFIG_ZONE_DEVICE
+void subsections_mark_device(unsigned long start_pfn, unsigned long size);
+#endif
 
 struct page;
 struct page_ext;
@@ -1367,6 +1373,19 @@ static inline int pfn_present(unsigned long pfn)
 	return present_section(__nr_to_section(pfn_to_section_nr(pfn)));
 }
 
+static inline int pfn_zone_device(unsigned long pfn)
+{
+#ifdef CONFIG_ZONE_DEVICE
+	if (pfn_valid(pfn)) {
+		struct mem_section *ms = __pfn_to_section(pfn);
+		int idx = subsection_map_index(pfn);
+
+		return test_bit(idx, ms->usage->subsection_dev_map);
+	}
+#endif
+	return 0;
+}
+
 /*
  * These are _only_ used during initialisation, therefore they
  * can use __initdata ...  They could have names to indicate
diff --git a/mm/memremap.c b/mm/memremap.c
index 03ccbdf..8a97fd4 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -303,6 +303,8 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
 	memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
 				PHYS_PFN(res->start),
 				PHYS_PFN(resource_size(res)), pgmap);
+	subsections_mark_device(PHYS_PFN(res->start),
+				PHYS_PFN(resource_size(res)));
 	percpu_ref_get_many(pgmap->ref, pfn_end(pgmap) - pfn_first(pgmap));
 	return __va(res->start);
diff --git a/mm/sparse.c b/mm/sparse.c
index f6891c1..a3fc9e0a 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -603,6 +603,31 @@ void __init sparse_init(void)
 	vmemmap_populate_print_last();
 }
 
+#ifdef CONFIG_ZONE_DEVICE
+void subsections_mark_device(unsigned long start_pfn, unsigned long size)
+{
+	struct mem_section *ms;
+	unsigned long *dev_map;
+	unsigned long sec, start_sec, end_sec, pfns;
+
+	start_sec = pfn_to_section_nr(start_pfn);
+	end_sec = pfn_to_section_nr(start_pfn + size - 1);
+	for (sec = start_sec; sec <= end_sec;
+	     sec++, start_pfn += pfns, size -= pfns) {
+		pfns = min(size, PAGES_PER_SECTION
+				- (start_pfn & ~PAGE_SECTION_MASK));
+		if (WARN_ON(!valid_section_nr(sec)))
+			continue;
+		ms = __pfn_to_section(start_pfn);
+		if (!ms->usage)
+			continue;
+
+		dev_map = &ms->usage->subsection_dev_map[0];
+		subsection_mask_set(dev_map, start_pfn, pfns);
+	}
+}
+#endif
+
 #ifdef CONFIG_MEMORY_HOTPLUG
 
 /* Mark all memory sections within the pfn range as online */
@@ -782,7 +807,14 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
 		memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
 		ms->section_mem_map = sparse_encode_mem_map(NULL, section_nr);
 	}
 
+#ifdef CONFIG_ZONE_DEVICE
+	/* deactivation of a partial section on ZONE_DEVICE */
+	if (ms->usage) {
+		unsigned long *dev_map = &ms->usage->subsection_dev_map[0];
+		bitmap_andnot(dev_map, dev_map, map, SUBSECTIONS_PER_SECTION);
+	}
+#endif
 	if (section_is_early && memmap)
 		free_map_bootmem(memmap);
 	else

From patchwork Fri Nov 8 00:08:13 2019
X-Patchwork-Submitter: Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com>
X-Patchwork-Id: 11233923
From: Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com>
To: linux-mm@kvack.org, dan.j.williams@intel.com
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
    mhocko@kernel.org, adobriyan@gmail.com, hch@lst.de,
    longman@redhat.com, sfr@canb.auug.org.au, mst@redhat.com,
    cai@lca.pw, Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
    Junichi Nomura <j-nomura@ce.jp.nec.com>
Subject: [PATCH 3/3] mm: make pfn walker support ZONE_DEVICE
Date: Fri, 8 Nov 2019 00:08:13 +0000
Message-ID: <20191108000855.25209-4-t-fukasawa@vx.jp.nec.com>
In-Reply-To: <20191108000855.25209-1-t-fukasawa@vx.jp.nec.com>

This patch allows the pfn walker to read pages on ZONE_DEVICE. Two
points need care:

a) The reserved pages indicated by vmem_altmap->reserve are
   uninitialized, so they must be skipped when reading.
b) Getting the vmem_altmap requires get_dev_pagemap(), but calling it
   for every pfn is too slow.

This patch solves both. Since a vmem_altmap can reserve only the first
few pages of a mapping, we can reduce the number of checks by counting
the sequential valid pages that follow.
Signed-off-by: Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com>
---
 fs/proc/page.c           | 22 ++++++++++++++++++----
 include/linux/memremap.h |  6 ++++++
 mm/memremap.c            | 29 +++++++++++++++++++++++++++++
 3 files changed, 53 insertions(+), 4 deletions(-)

diff --git a/fs/proc/page.c b/fs/proc/page.c
index a49b638..b6241ea 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -33,6 +33,7 @@ static ssize_t kpage_common_read(struct file *file, char __user *buf,
 	struct page *ppage;
 	unsigned long src = *ppos;
 	unsigned long pfn;
+	unsigned long valid_pages = 0;
 	ssize_t ret = 0;
 
 	pfn = src / KPMSIZE;
@@ -41,11 +42,24 @@ static ssize_t kpage_common_read(struct file *file, char __user *buf,
 		return -EINVAL;
 
 	while (count > 0) {
-		/*
-		 * TODO: ZONE_DEVICE support requires to identify
-		 * memmaps that were actually initialized.
-		 */
 		ppage = pfn_to_online_page(pfn);
+		if (!ppage && pfn_zone_device(pfn)) {
+			/*
+			 * Skip the first few uninitialized pages on
+			 * ZONE_DEVICE, and count the valid pages starting
+			 * with this pfn to minimize the number of
+			 * calls to nr_valid_pages_zone_device().
+			 */
+			if (!valid_pages)
+				valid_pages = nr_valid_pages_zone_device(pfn);
+			if (valid_pages) {
+				ppage = pfn_to_page(pfn);
+				valid_pages--;
+			}
+		} else if (valid_pages) {
+			/* ZONE_DEVICE has been hot removed */
+			valid_pages = 0;
+		}
 
 		if (put_user(read_fn(ppage), out)) {
 			ret = -EFAULT;
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 6fefb09..d111ae3 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -123,6 +123,7 @@ static inline struct vmem_altmap *pgmap_altmap(struct dev_pagemap *pgmap)
 }
 
 #ifdef CONFIG_ZONE_DEVICE
+unsigned long nr_valid_pages_zone_device(unsigned long pfn);
 void *memremap_pages(struct dev_pagemap *pgmap, int nid);
 void memunmap_pages(struct dev_pagemap *pgmap);
 void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
@@ -133,6 +134,11 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 unsigned long vmem_altmap_offset(struct vmem_altmap *altmap);
 void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns);
 #else
+static inline unsigned long nr_valid_pages_zone_device(unsigned long pfn)
+{
+	return 0;
+}
+
 static inline void *devm_memremap_pages(struct device *dev,
 		struct dev_pagemap *pgmap)
 {
diff --git a/mm/memremap.c b/mm/memremap.c
index 8a97fd4..307c73e 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -73,6 +73,35 @@ static unsigned long pfn_next(unsigned long pfn)
 	return pfn + 1;
 }
 
+/*
+ * This returns the number of sequential valid pages starting from @pfn
+ * on ZONE_DEVICE. The invalid pages reserved by the driver are the
+ * first few pages on ZONE_DEVICE.
+ */
+unsigned long nr_valid_pages_zone_device(unsigned long pfn)
+{
+	struct dev_pagemap *pgmap;
+	struct vmem_altmap *altmap;
+	unsigned long pages;
+
+	pgmap = get_dev_pagemap(pfn, NULL);
+	if (!pgmap)
+		return 0;
+	altmap = pgmap_altmap(pgmap);
+	if (altmap && pfn < (altmap->base_pfn + altmap->reserve))
+		pages = 0;
+	else
+		/*
+		 * PHYS_PFN(pgmap->res.end) is the end pfn of pgmap
+		 * (not the start pfn of the next mapping).
+		 */
+		pages = PHYS_PFN(pgmap->res.end) - pfn + 1;
+
+	put_dev_pagemap(pgmap);
+
+	return pages;
+}
+
 #define for_each_device_pfn(pfn, map) \
 	for (pfn = pfn_first(map); pfn < pfn_end(map); pfn = pfn_next(pfn))