From patchwork Tue Jul 17 11:20:11 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com"
X-Patchwork-Id: 10529003
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Ingo Molnar, x86@kernel.org, Thomas Gleixner, "H. Peter Anvin",
 Tom Lendacky
Cc: Dave Hansen, Kai Huang, Jacob Pan, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, "Kirill A. Shutemov"
Subject: [PATCHv5 01/19] mm: Do not merge VMAs with different encryption
 KeyIDs
Date: Tue, 17 Jul 2018 14:20:11 +0300
Message-Id: <20180717112029.42378-2-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180717112029.42378-1-kirill.shutemov@linux.intel.com>
References: <20180717112029.42378-1-kirill.shutemov@linux.intel.com>

VMAs with different KeyIDs do not mix together: only VMAs with the same
KeyID are compatible for merging. A merged VMA would otherwise span
ranges whose pages are encrypted with different keys.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
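A note on vma_keyid() (illustration, not part of the patch): the generic
stub added below in include/linux/mm.h always returns 0, so this patch
does not change merge behaviour on its own; an architecture with memory
encryption is expected to supply its own definition. A minimal sketch of
what such an override might look like -- the MKTME_KEYID_* names, the bit
layout, and storing the KeyID in vm_page_prot are assumptions for the
sketch, not taken from this series:

  /* Illustrative only: a possible arch-side vma_keyid(), placed in an
   * arch header pulled in by <linux/mm.h>, so the #ifndef guard below
   * picks it up instead of the generic stub. */
  #define MKTME_KEYID_SHIFT	46	/* assumed bit position */
  #define MKTME_KEYID_MASK	(0x3fULL << MKTME_KEYID_SHIFT)

  #define vma_keyid vma_keyid
  static inline int vma_keyid(struct vm_area_struct *vma)
  {
  	/* The KeyID travels with the VMA in its page protection bits,
  	 * so every PTE created for the VMA inherits it. */
  	return (pgprot_val(vma->vm_page_prot) & MKTME_KEYID_MASK) >>
  		MKTME_KEYID_SHIFT;
  }

With a definition along these lines, two otherwise-identical VMAs whose
vm_page_prot carry different KeyID bits fail the new check in
is_mergeable_vma() and stay separate.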
 fs/userfaultfd.c   |  7 ++++---
 include/linux/mm.h |  9 ++++++++-
 mm/madvise.c       |  2 +-
 mm/mempolicy.c     |  3 ++-
 mm/mlock.c         |  2 +-
 mm/mmap.c          | 31 +++++++++++++++++++------------
 mm/mprotect.c      |  2 +-
 7 files changed, 36 insertions(+), 20 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 594d192b2331..bb0db9f9d958 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -890,7 +890,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
 				 new_flags, vma->anon_vma,
 				 vma->vm_file, vma->vm_pgoff,
 				 vma_policy(vma),
-				 NULL_VM_UFFD_CTX);
+				 NULL_VM_UFFD_CTX, vma_keyid(vma));
 		if (prev)
 			vma = prev;
 		else
@@ -1423,7 +1423,8 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 		prev = vma_merge(mm, prev, start, vma_end, new_flags,
 				 vma->anon_vma, vma->vm_file, vma->vm_pgoff,
 				 vma_policy(vma),
-				 ((struct vm_userfaultfd_ctx){ ctx }));
+				 ((struct vm_userfaultfd_ctx){ ctx }),
+				 vma_keyid(vma));
 		if (prev) {
 			vma = prev;
 			goto next;
@@ -1581,7 +1582,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 		prev = vma_merge(mm, prev, start, vma_end, new_flags,
 				 vma->anon_vma, vma->vm_file, vma->vm_pgoff,
 				 vma_policy(vma),
-				 NULL_VM_UFFD_CTX);
+				 NULL_VM_UFFD_CTX, vma_keyid(vma));
 		if (prev) {
 			vma = prev;
 			goto next;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9a35362bbc92..c8780c5835ad 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1544,6 +1544,13 @@ static inline bool vma_is_anonymous(struct vm_area_struct *vma)
 	return vma->vm_ops == &anon_vm_ops;
 }
 
+#ifndef vma_keyid
+static inline int vma_keyid(struct vm_area_struct *vma)
+{
+	return 0;
+}
+#endif
+
 #ifdef CONFIG_SHMEM
 /*
  * The vma_is_shmem is not inline because it is used only by slow
@@ -2219,7 +2226,7 @@ static inline int vma_adjust(struct vm_area_struct *vma, unsigned long start,
 extern struct vm_area_struct *vma_merge(struct mm_struct *,
 	struct vm_area_struct *prev, unsigned long addr, unsigned long end,
 	unsigned long vm_flags, struct anon_vma *, struct file *, pgoff_t,
-	struct mempolicy *, struct vm_userfaultfd_ctx);
+	struct mempolicy *, struct vm_userfaultfd_ctx, int keyid);
 extern struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *);
 extern int __split_vma(struct mm_struct *, struct vm_area_struct *,
 	unsigned long addr, int new_below);
diff --git a/mm/madvise.c b/mm/madvise.c
index 4d3c922ea1a1..c88fb12be6e5 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -138,7 +138,7 @@ static long madvise_behavior(struct vm_area_struct *vma,
 	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
 	*prev = vma_merge(mm, *prev, start, end, new_flags, vma->anon_vma,
 			  vma->vm_file, pgoff, vma_policy(vma),
-			  vma->vm_userfaultfd_ctx);
+			  vma->vm_userfaultfd_ctx, vma_keyid(vma));
 	if (*prev) {
 		vma = *prev;
 		goto success;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index f0fcf70bcec7..581b729e05a0 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -705,7 +705,8 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
 			((vmstart - vma->vm_start) >> PAGE_SHIFT);
 		prev = vma_merge(mm, prev, vmstart, vmend, vma->vm_flags,
 				 vma->anon_vma, vma->vm_file, pgoff,
-				 new_pol, vma->vm_userfaultfd_ctx);
+				 new_pol, vma->vm_userfaultfd_ctx,
+				 vma_keyid(vma));
 		if (prev) {
 			vma = prev;
 			next = vma->vm_next;
diff --git a/mm/mlock.c b/mm/mlock.c
index 74e5a6547c3d..3c96321b66bb 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -534,7 +534,7 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
 	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
 	*prev = vma_merge(mm, *prev, start, end, newflags, vma->anon_vma,
 			  vma->vm_file, pgoff, vma_policy(vma),
-			  vma->vm_userfaultfd_ctx);
+			  vma->vm_userfaultfd_ctx, vma_keyid(vma));
 	if (*prev) {
 		vma = *prev;
 		goto success;
diff --git a/mm/mmap.c b/mm/mmap.c
index 180f19dfb83f..4c604eb644b4 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -984,7 +984,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
  */
 static inline int is_mergeable_vma(struct vm_area_struct *vma,
 				struct file *file, unsigned long vm_flags,
-				struct vm_userfaultfd_ctx vm_userfaultfd_ctx)
+				struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
+				int keyid)
 {
 	/*
 	 * VM_SOFTDIRTY should not prevent from VMA merging, if we
@@ -998,6 +999,8 @@ static inline int is_mergeable_vma(struct vm_area_struct *vma,
 		return 0;
 	if (vma->vm_file != file)
 		return 0;
+	if (vma_keyid(vma) != keyid)
+		return 0;
 	if (vma->vm_ops->close)
 		return 0;
 	if (!is_mergeable_vm_userfaultfd_ctx(vma, vm_userfaultfd_ctx))
@@ -1034,9 +1037,10 @@ static int
 can_vma_merge_before(struct vm_area_struct *vma, unsigned long vm_flags,
 		     struct anon_vma *anon_vma, struct file *file,
 		     pgoff_t vm_pgoff,
-		     struct vm_userfaultfd_ctx vm_userfaultfd_ctx)
+		     struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
+		     int keyid)
 {
-	if (is_mergeable_vma(vma, file, vm_flags, vm_userfaultfd_ctx) &&
+	if (is_mergeable_vma(vma, file, vm_flags, vm_userfaultfd_ctx, keyid) &&
 	    is_mergeable_anon_vma(anon_vma, vma->anon_vma, vma)) {
 		if (vma->vm_pgoff == vm_pgoff)
 			return 1;
@@ -1055,9 +1059,10 @@ static int
 can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
 		    struct anon_vma *anon_vma, struct file *file,
 		    pgoff_t vm_pgoff,
-		    struct vm_userfaultfd_ctx vm_userfaultfd_ctx)
+		    struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
+		    int keyid)
 {
-	if (is_mergeable_vma(vma, file, vm_flags, vm_userfaultfd_ctx) &&
+	if (is_mergeable_vma(vma, file, vm_flags, vm_userfaultfd_ctx, keyid) &&
 	    is_mergeable_anon_vma(anon_vma, vma->anon_vma, vma)) {
 		pgoff_t vm_pglen;
 		vm_pglen = vma_pages(vma);
@@ -1112,7 +1117,8 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
 			unsigned long end, unsigned long vm_flags,
 			struct anon_vma *anon_vma, struct file *file,
 			pgoff_t pgoff, struct mempolicy *policy,
-			struct vm_userfaultfd_ctx vm_userfaultfd_ctx)
+			struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
+			int keyid)
 {
 	pgoff_t pglen = (end - addr) >> PAGE_SHIFT;
 	struct vm_area_struct *area, *next;
@@ -1145,7 +1151,7 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
 			mpol_equal(vma_policy(prev), policy) &&
 			can_vma_merge_after(prev, vm_flags,
 					    anon_vma, file, pgoff,
-					    vm_userfaultfd_ctx)) {
+					    vm_userfaultfd_ctx, keyid)) {
 		/*
 		 * OK, it can.  Can we now merge in the successor as well?
 		 */
@@ -1154,7 +1160,8 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
 				can_vma_merge_before(next, vm_flags,
 						     anon_vma, file,
 						     pgoff+pglen,
-						     vm_userfaultfd_ctx) &&
+						     vm_userfaultfd_ctx,
+						     keyid) &&
 				is_mergeable_anon_vma(prev->anon_vma,
 						      next->anon_vma, NULL)) {
 							/* cases 1, 6 */
@@ -1177,7 +1184,7 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
 			mpol_equal(policy, vma_policy(next)) &&
 			can_vma_merge_before(next, vm_flags,
 					     anon_vma, file, pgoff+pglen,
-					     vm_userfaultfd_ctx)) {
+					     vm_userfaultfd_ctx, keyid)) {
 		if (prev && addr < prev->vm_end)	/* case 4 */
 			err = __vma_adjust(prev, prev->vm_start, addr,
 					   prev->vm_pgoff, NULL, next);
@@ -1722,7 +1729,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	 * Can we just expand an old mapping?
 	 */
 	vma = vma_merge(mm, prev, addr, addr + len, vm_flags,
-			NULL, file, pgoff, NULL, NULL_VM_UFFD_CTX);
+			NULL, file, pgoff, NULL, NULL_VM_UFFD_CTX, 0);
 	if (vma)
 		goto out;
 
@@ -2987,7 +2994,7 @@ static int do_brk_flags(unsigned long addr, unsigned long len, unsigned long fla
 
 	/* Can we just expand an old private anonymous mapping? */
 	vma = vma_merge(mm, prev, addr, addr + len, flags,
-			NULL, NULL, pgoff, NULL, NULL_VM_UFFD_CTX);
+			NULL, NULL, pgoff, NULL, NULL_VM_UFFD_CTX, 0);
 	if (vma)
 		goto out;
 
@@ -3189,7 +3196,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 		return NULL;	/* should never get here */
 	new_vma = vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
 			    vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
-			    vma->vm_userfaultfd_ctx);
+			    vma->vm_userfaultfd_ctx, vma_keyid(vma));
 	if (new_vma) {
 		/*
 		 * Source vma may have been merged into new_vma
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 625608bc8962..68dc476310c0 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -349,7 +349,7 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
 	*pprev = vma_merge(mm, *pprev, start, end, newflags,
 			   vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
-			   vma->vm_userfaultfd_ctx);
+			   vma->vm_userfaultfd_ctx, vma_keyid(vma));
 	if (*pprev) {
 		vma = *pprev;
 		VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY);
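
For readers who want to see the mprotect_fixup()/vma_merge() path touched
above at work, here is a small userspace demo (not part of the patch):
changing the protection of an anonymous mapping's first page splits the
VMA, and restoring it lets vma_merge() fold the halves back together.
After this patch, that re-merge additionally requires both halves to
carry the same KeyID. Counting lines of /proc/self/maps is a crude but
sufficient proxy for the number of VMAs:

  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* Count this process's VMAs by counting lines in /proc/self/maps. */
  static int count_vmas(void)
  {
  	FILE *f = fopen("/proc/self/maps", "r");
  	char line[512];
  	int n = 0;

  	while (f && fgets(line, sizeof(line), f))
  		n++;
  	if (f)
  		fclose(f);
  	return n;
  }

  int main(void)
  {
  	long page = sysconf(_SC_PAGESIZE);
  	char *p = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
  		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

  	if (p == MAP_FAILED)
  		return 1;
  	printf("before split: %d VMAs\n", count_vmas());

  	mprotect(p, page, PROT_READ);		/* splits the VMA */
  	printf("after split:  %d VMAs\n", count_vmas());

  	mprotect(p, page, PROT_READ | PROT_WRITE); /* re-merges it */
  	printf("after merge:  %d VMAs\n", count_vmas());
  	return 0;
  }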