From patchwork Tue Jul 17 11:20:15 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com"
X-Patchwork-Id: 10529049
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Ingo Molnar, x86@kernel.org, Thomas Gleixner, "H. Peter Anvin", Tom Lendacky
Cc: Dave Hansen, Kai Huang, Jacob Pan, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, "Kirill A. Shutemov"
Subject: [PATCHv5 05/19] mm/page_alloc: Handle allocation for encrypted memory
Date: Tue, 17 Jul 2018 14:20:15 +0300
Message-Id: <20180717112029.42378-6-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180717112029.42378-1-kirill.shutemov@linux.intel.com>
References: <20180717112029.42378-1-kirill.shutemov@linux.intel.com>

For encrypted memory, we need to allocate pages for a specific
encryption KeyID.

There are two cases when we need to allocate a page for encryption:

 - Allocation for an encrypted VMA;

 - Allocation for migration of an encrypted page.

The first case can be covered within alloc_page_vma(): we know the
KeyID from the VMA.
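(Illustrative sketch, not part of the patch: once the hunks below are
applied, every allocation site for encrypted memory follows the same
pattern. vma_keyid() comes from earlier patches in this series, and
the function name below is made up purely for illustration.)

static struct page *example_alloc_encrypted(struct vm_area_struct *vma,
		gfp_t gfp_mask)
{
	int keyid = vma_keyid(vma);
	struct page *page;
	bool need_zero;

	/* Zeroing is deferred: the page may only be cleared once its KeyID is set. */
	need_zero = encrypted_page_needs_zero(keyid, &gfp_mask);
	page = alloc_pages(gfp_mask, 0);
	/* Arch hook: record the KeyID and, if requested, zero the page. */
	prep_encrypted_page(page, 0, keyid, need_zero);

	return page;
}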
The second case requires a few new page allocation routines that
allocate the page for a specific KeyID.

An encrypted page has to be cleared after the KeyID is set. This is
handled in prep_encrypted_page(), which will be provided by
arch-specific code.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 include/linux/gfp.h     | 48 ++++++++++++++++++++++++++++++++++++-----
 include/linux/migrate.h | 12 ++++++++---
 mm/compaction.c         |  1 +
 mm/mempolicy.c          | 28 ++++++++++++++++++------
 mm/migrate.c            |  4 ++--
 mm/page_alloc.c         | 47 ++++++++++++++++++++++++++++++++++++++++
 6 files changed, 123 insertions(+), 17 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 66f395737990..347a40558cfc 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -446,16 +446,46 @@ static inline void arch_free_page(struct page *page, int order) { }
 static inline void arch_alloc_page(struct page *page, int order) { }
 #endif
 
+#ifndef prep_encrypted_page
+static inline void prep_encrypted_page(struct page *page, int order,
+		int keyid, bool zero)
+{
+}
+#endif
+
+/*
+ * Encrypted page has to be cleared once keyid is set, not on allocation.
+ */
+static inline bool encrypted_page_needs_zero(int keyid, gfp_t *gfp_mask)
+{
+	if (!keyid)
+		return false;
+
+	if (*gfp_mask & __GFP_ZERO) {
+		*gfp_mask &= ~__GFP_ZERO;
+		return true;
+	}
+
+	return false;
+}
+
 struct page *
 __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 							nodemask_t *nodemask);
 
+struct page *
+__alloc_pages_nodemask_keyid(gfp_t gfp_mask, unsigned int order,
+		int preferred_nid, nodemask_t *nodemask, int keyid);
+
 static inline struct page *
 __alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
 {
 	return __alloc_pages_nodemask(gfp_mask, order, preferred_nid, NULL);
 }
 
+struct page *__alloc_pages_node_keyid(int nid, int keyid,
+		gfp_t gfp_mask, unsigned int order);
+
 /*
  * Allocate pages, preferring the node given as nid. The node must be valid and
  * online. For more general interface, see alloc_pages_node().
@@ -483,6 +513,19 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
 	return __alloc_pages_node(nid, gfp_mask, order);
 }
 
+static inline struct page *alloc_pages_node_keyid(int nid, int keyid,
+		gfp_t gfp_mask, unsigned int order)
+{
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
+
+	return __alloc_pages_node_keyid(nid, keyid, gfp_mask, order);
+}
+
+extern struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
+			struct vm_area_struct *vma, unsigned long addr,
+			int node, bool hugepage);
+
 #ifdef CONFIG_NUMA
 extern struct page *alloc_pages_current(gfp_t gfp_mask, unsigned order);
 
@@ -491,14 +534,9 @@ alloc_pages(gfp_t gfp_mask, unsigned int order)
 {
 	return alloc_pages_current(gfp_mask, order);
 }
-extern struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
-			struct vm_area_struct *vma, unsigned long addr,
-			int node, bool hugepage);
 #else
 #define alloc_pages(gfp_mask, order) \
 		alloc_pages_node(numa_node_id(), gfp_mask, order)
-#define alloc_pages_vma(gfp_mask, order, vma, addr, node, false)\
-	alloc_pages(gfp_mask, order)
 #endif
 #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
 #define alloc_page_vma(gfp_mask, vma, addr)			\
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f2b4abbca55e..fede9bfa89d9 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -38,9 +38,15 @@ static inline struct page *new_page_nodemask(struct page *page,
 	unsigned int order = 0;
 	struct page *new_page = NULL;
 
-	if (PageHuge(page))
+	if (PageHuge(page)) {
+		/*
+		 * HugeTLB doesn't support encryption. We shouldn't see
+		 * such pages.
+		 */
+		WARN_ON(page_keyid(page));
 		return alloc_huge_page_nodemask(page_hstate(compound_head(page)),
 				preferred_nid, nodemask);
+	}
 
 	if (PageTransHuge(page)) {
 		gfp_mask |= GFP_TRANSHUGE;
@@ -50,8 +56,8 @@ static inline struct page *new_page_nodemask(struct page *page,
 	if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
 		gfp_mask |= __GFP_HIGHMEM;
 
-	new_page = __alloc_pages_nodemask(gfp_mask, order,
-				preferred_nid, nodemask);
+	new_page = __alloc_pages_nodemask_keyid(gfp_mask, order,
+				preferred_nid, nodemask, page_keyid(page));
 
 	if (new_page && PageTransHuge(new_page))
 		prep_transhuge_page(new_page);
diff --git a/mm/compaction.c b/mm/compaction.c
index faca45ebe62d..fd51aa32ad96 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1187,6 +1187,7 @@ static struct page *compaction_alloc(struct page *migratepage,
 	list_del(&freepage->lru);
 	cc->nr_freepages--;
 
+	prep_encrypted_page(freepage, 0, page_keyid(migratepage), false);
 	return freepage;
 }
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 581b729e05a0..ce7b436444b5 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -921,22 +921,28 @@ static void migrate_page_add(struct page *page, struct list_head *pagelist,
 /* page allocation callback for NUMA node migration */
 struct page *alloc_new_node_page(struct page *page, unsigned long node)
 {
-	if (PageHuge(page))
+	if (PageHuge(page)) {
+		/*
+		 * HugeTLB doesn't support encryption. We shouldn't see
+		 * such pages.
+		 */
+		WARN_ON(page_keyid(page));
 		return alloc_huge_page_node(page_hstate(compound_head(page)),
 					node);
-	else if (PageTransHuge(page)) {
+	} else if (PageTransHuge(page)) {
 		struct page *thp;
 
-		thp = alloc_pages_node(node,
+		thp = alloc_pages_node_keyid(node, page_keyid(page),
 			(GFP_TRANSHUGE | __GFP_THISNODE),
 			HPAGE_PMD_ORDER);
 		if (!thp)
 			return NULL;
 		prep_transhuge_page(thp);
 		return thp;
-	} else
-		return __alloc_pages_node(node, GFP_HIGHUSER_MOVABLE |
-						    __GFP_THISNODE, 0);
+	} else {
+		return __alloc_pages_node_keyid(node, page_keyid(page),
+				GFP_HIGHUSER_MOVABLE | __GFP_THISNODE, 0);
+	}
 }
 
 /*
@@ -2013,9 +2019,16 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 {
 	struct mempolicy *pol;
 	struct page *page;
-	int preferred_nid;
+	bool zero = false;
+	int keyid, preferred_nid;
 	nodemask_t *nmask;
 
+	keyid = vma_keyid(vma);
+	if (keyid && (gfp & __GFP_ZERO)) {
+		zero = true;
+		gfp &= ~__GFP_ZERO;
+	}
+
 	pol = get_vma_policy(vma, addr);
 
 	if (pol->mode == MPOL_INTERLEAVE) {
@@ -2058,6 +2071,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 	page = __alloc_pages_nodemask(gfp, order, preferred_nid, nmask);
 	mpol_cond_put(pol);
 out:
+	prep_encrypted_page(page, order, keyid, zero);
 	return page;
 }
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 8c0af0f7cab1..eb8dea219dcb 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1847,7 +1847,7 @@ static struct page *alloc_misplaced_dst_page(struct page *page,
 	int nid = (int) data;
 	struct page *newpage;
 
-	newpage = __alloc_pages_node(nid,
+	newpage = __alloc_pages_node_keyid(nid, page_keyid(page),
 					 (GFP_HIGHUSER_MOVABLE |
 					  __GFP_THISNODE | __GFP_NOMEMALLOC |
 					  __GFP_NORETRY | __GFP_NOWARN) &
@@ -2030,7 +2030,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	if (numamigrate_update_ratelimit(pgdat, HPAGE_PMD_NR))
 		goto out_dropref;
 
-	new_page = alloc_pages_node(node,
+	new_page = alloc_pages_node_keyid(node, page_keyid(page),
 		(GFP_TRANSHUGE_LIGHT | __GFP_THISNODE),
 		HPAGE_PMD_ORDER);
 	if (!new_page)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5d800d61ddb7..d7dc54b75f5d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3697,6 +3697,39 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_fla
 }
 #endif /* CONFIG_COMPACTION */
 
+#ifndef CONFIG_NUMA
+struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
+		struct vm_area_struct *vma, unsigned long addr,
+		int node, bool hugepage)
+{
+	struct page *page;
+	bool need_zero;
+	int keyid = vma_keyid(vma);
+
+	need_zero = encrypted_page_needs_zero(keyid, &gfp_mask);
+	page = alloc_pages(gfp_mask, order);
+	prep_encrypted_page(page, order, keyid, need_zero);
+
+	return page;
+}
+#endif
+
+struct page * __alloc_pages_node_keyid(int nid, int keyid,
+		gfp_t gfp_mask, unsigned int order)
+{
+	struct page *page;
+	bool need_zero;
+
+	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
+	VM_WARN_ON(!node_online(nid));
+
+	need_zero = encrypted_page_needs_zero(keyid, &gfp_mask);
+	page = __alloc_pages(gfp_mask, order, nid);
+	prep_encrypted_page(page, order, keyid, need_zero);
+
+	return page;
+}
+
 #ifdef CONFIG_LOCKDEP
 static struct lockdep_map __fs_reclaim_map =
 	STATIC_LOCKDEP_MAP_INIT("fs_reclaim", &__fs_reclaim_map);
@@ -4401,6 +4434,20 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 }
 EXPORT_SYMBOL(__alloc_pages_nodemask);
 
+struct page *
+__alloc_pages_nodemask_keyid(gfp_t gfp_mask, unsigned int order,
+		int preferred_nid, nodemask_t *nodemask, int keyid)
+{
+	struct page *page;
+	bool need_zero;
+
+	need_zero = encrypted_page_needs_zero(keyid, &gfp_mask);
+	page = __alloc_pages_nodemask(gfp_mask, order, preferred_nid, nodemask);
+	prep_encrypted_page(page, order, keyid, need_zero);
+	return page;
+}
+EXPORT_SYMBOL(__alloc_pages_nodemask_keyid);
+
 /*
  * Common helper functions.
  */
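For reference, a rough sketch of what the arch-provided
prep_encrypted_page() hook is expected to do. This is not part of the
patch: the real hook arrives with the arch-specific patches later in
this series, and arch_set_page_keyid() below is a made-up placeholder
rather than an existing kernel function.

#define prep_encrypted_page prep_encrypted_page
static inline void prep_encrypted_page(struct page *page, int order,
		int keyid, bool zero)
{
	int i;

	for (i = 0; i < (1 << order); i++) {
		/* Record the KeyID for each page; placeholder name, arch-specific in reality. */
		arch_set_page_keyid(page + i, keyid);

		/*
		 * The caller dropped __GFP_ZERO via encrypted_page_needs_zero(),
		 * so clear the page here, after the KeyID is in effect.
		 */
		if (zero)
			clear_highpage(page + i);
	}
}

A caller that needs an encrypted page on a specific node then uses the
new helpers directly, e.g. __alloc_pages_node_keyid(nid, keyid,
GFP_HIGHUSER_MOVABLE, 0), as the migration paths above do.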