From patchwork Thu Jan 23 13:29:39 2020
X-Patchwork-Submitter: Li Xinhai
X-Patchwork-Id: 11347831
From: Li Xinhai
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Michal Hocko, Mike Kravetz, Anshuman Khandual, Naoya Horiguchi
Subject: [PATCH v5] mm/mempolicy: Checking hugepage migration is supported by arch in vma_migratable
Date: Thu, 23 Jan 2020 13:29:39 +0000
Message-Id: <1579786179-30633-1-git-send-email-lixinhai.lxh@gmail.com>

vma_migratable() is called to check whether the pages in a vma can be
migrated before proceeding with further actions. It is currently used in
the following code paths:
- task_numa_work
- mbind
- move_pages

For a hugetlb mapping, whether the vma is migratable is determined by two
factors:
- CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
- arch_hugetlb_migration_supported

Issue: the current code checks only CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION,
a config symbol that no code should use directly. (Note that the current
code in vma_migratable() does not cause a failure or bug, because
unmap_and_move_huge_page() will catch an unsupported hugepage and handle it
properly.)

This patch checks both factors through hugepage_migration_supported(),
improving the code's logic and robustness. It enables an early bail-out
from the hugepage migration procedure, but because all architectures that
currently support hugepage migration support all hugepage sizes, no
performance gain is expected from this patch.

vma_migratable() is moved to mm/mempolicy.c, because the circular reference
between mempolicy.h and hugetlb.h makes defining it as an inline function
infeasible.

Signed-off-by: Li Xinhai
Cc: Michal Hocko
Cc: Mike Kravetz
Cc: Anshuman Khandual
Cc: Naoya Horiguchi
Acked-by: Michal Hocko
Reviewed-by: Mike Kravetz
Reviewed-by: Anshuman Khandual
Reviewed-by: Naoya Horiguchi
---
V2, V3 and V4: all took different approaches to fixing the circular
reference between hugetlb.h and mempolicy.h. The existing relationship
between these two files allows inline functions in hugetlb.h to refer to
symbols defined in mempolicy.h, but there is no feasible way for inline
functions in mempolicy.h to use functions from hugetlb.h.
After evaluating different fixes for this situation, the current patch looks
best: it no longer defines vma_migratable() as inline.

v4->v5: the new wrapper vm_hugepage_migration_supported() is not necessary;
remove it and use hugepage_migration_supported() directly.

 include/linux/mempolicy.h | 29 +----------------------------
 mm/mempolicy.c            | 28 ++++++++++++++++++++++++++++
 2 files changed, 29 insertions(+), 28 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 5228c62..8165278 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -173,34 +173,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
 extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
 
 /* Check if a vma is migratable */
-static inline bool vma_migratable(struct vm_area_struct *vma)
-{
-	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
-		return false;
-
-	/*
-	 * DAX device mappings require predictable access latency, so avoid
-	 * incurring periodic faults.
-	 */
-	if (vma_is_dax(vma))
-		return false;
-
-#ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
-	if (vma->vm_flags & VM_HUGETLB)
-		return false;
-#endif
-
-	/*
-	 * Migration allocates pages in the highest zone. If we cannot
-	 * do so then migration (at least from node to node) is not
-	 * possible.
-	 */
-	if (vma->vm_file &&
-		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
-			< policy_zone)
-		return false;
-	return true;
-}
+extern bool vma_migratable(struct vm_area_struct *vma);
 
 extern int mpol_misplaced(struct page *, struct vm_area_struct *,
 			   unsigned long);
 extern void mpol_put_task_policy(struct task_struct *);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 067cf7d..9319dcb 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1714,6 +1714,34 @@ static int kernel_get_mempolicy(int __user *policy,
 
 #endif /* CONFIG_COMPAT */
 
+bool vma_migratable(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+		return false;
+
+	/*
+	 * DAX device mappings require predictable access latency, so avoid
+	 * incurring periodic faults.
+	 */
+	if (vma_is_dax(vma))
+		return false;
+
+	if (is_vm_hugetlb_page(vma) &&
+	    !hugepage_migration_supported(hstate_vma(vma)))
+		return false;
+
+	/*
+	 * Migration allocates pages in the highest zone. If we cannot
+	 * do so then migration (at least from node to node) is not
+	 * possible.
+	 */
+	if (vma->vm_file &&
+	    gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
+			< policy_zone)
+		return false;
+	return true;
+}
+
 struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
 						unsigned long addr)
 {
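
For reference (not part of this patch): hugepage_migration_supported() folds
both factors, the CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION option and the
per-architecture size check, into a single predicate. A simplified,
paraphrased sketch of the generic fallback helpers in include/linux/hugetlb.h
around this kernel version, assuming the architecture does not override
arch_hugetlb_migration_supported(), looks roughly like this:

/*
 * Simplified sketch of the generic helpers (paraphrased, for illustration
 * only); architectures may provide their own
 * arch_hugetlb_migration_supported() implementation.
 */
static inline bool arch_hugetlb_migration_supported(struct hstate *h)
{
#ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
	/* By default only hugepages at these page-table levels can migrate. */
	return huge_page_shift(h) == PMD_SHIFT ||
	       huge_page_shift(h) == PUD_SHIFT ||
	       huge_page_shift(h) == PGDIR_SHIFT;
#else
	/* Without the config option, no hugepage size is migratable. */
	return false;
#endif
}

static inline bool hugepage_migration_supported(struct hstate *h)
{
	return arch_hugetlb_migration_supported(h);
}

With this check in vma_migratable(), callers bail out early for hugetlb vmas
whose page size the architecture cannot migrate, instead of relying on
unmap_and_move_huge_page() to reject them later.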