From patchwork Sat Oct 20 22:21:23 2018
X-Patchwork-Submitter: Ahmed Soliman
X-Patchwork-Id: 10650653
From: Ahmed Abd El Mawgood
To: Paolo Bonzini, rkrcmar@redhat.com, Jonathan Corbet, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, hpa@zytor.com, x86@kernel.org,
    kvm@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, ahmedsoliman0x666@gmail.com,
    Ovich00@gmail.com, kernel-hardening@lists.openwall.com,
    nigel.edwards@hpe.com, Boris Lukashev,
    Hossam Hassan <7ossam9063@gmail.com>, Ahmed Lotfy
Subject: [PATCH V4 1/5] KVM: X86: Memory ROE documentation
Date: Sun, 21 Oct 2018 00:21:23 +0200
Message-Id: <20181020222127.6368-2-ahmedsoliman0x666@gmail.com>
In-Reply-To: <20181020222127.6368-1-ahmedsoliman0x666@gmail.com>
References: <20181020222127.6368-1-ahmedsoliman0x666@gmail.com>

Following up on my previous threads about KVM-assisted anti-rootkit
protections: the current version does not yet address attacks that involve
remapping pages. That part of the design is still in progress; it will be
covered in later patch sets.

Signed-off-by: Ahmed Abd El Mawgood
---
 Documentation/virtual/kvm/hypercalls.txt | 31 ++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/Documentation/virtual/kvm/hypercalls.txt b/Documentation/virtual/kvm/hypercalls.txt
index da24c138c8d1..8af64d826f03 100644
--- a/Documentation/virtual/kvm/hypercalls.txt
+++ b/Documentation/virtual/kvm/hypercalls.txt
@@ -141,3 +141,34 @@ a0 corresponds to the APIC ID in the third argument (a2), bit 1
 corresponds to the APIC ID a2+1, and so on.
 
 Returns the number of CPUs to which the IPIs were delivered successfully.
+
+7. KVM_HC_ROE
+----------------
+Architecture: x86
+Status: active
+Purpose: Hypercall used to apply Read-Only Enforcement to guest memory and
+registers
+Usage 1:
+	a0: ROE_VERSION
+
+Returns an unsigned number that represents the current version of the ROE
+implementation.
+
+Usage 2:
+
+	a0: ROE_MPROTECT	(requires version >= 1)
+	a1: Start address aligned to page boundary.
+	a2: Number of pages to be protected.
+
+This configuration lets a guest kernel have part of its read/write memory
+converted into read-only. This action is irreversible.
+Upon successful run, the number of pages protected is returned.
+
+Error codes:
+	-KVM_ENOSYS: system call being triggered from ring 3 or it is not
+	implemented.
+	-EINVAL: error based on given parameters.
+
+Notes: KVM_HC_ROE cannot be triggered from guest ring 3 (user mode). The
+reason is that user-mode malicious software could use it to enforce read-only
+protection on an arbitrary memory page, thus crashing the kernel.
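To make the documented calling convention concrete, a guest kernel could
invoke KVM_HC_ROE through the existing kvm_hypercall*() helpers from
<linux/kvm_para.h>. The sketch below is illustrative only and is not part of
the series; roe_protect_pages() is a made-up wrapper, while KVM_HC_ROE,
ROE_VERSION and ROE_MPROTECT are the constants this patch set introduces:

	/*
	 * Illustrative guest-side sketch, assuming the guest's
	 * <uapi/linux/kvm_para.h> carries the definitions from this series.
	 * roe_protect_pages() is a hypothetical helper, not kernel API.
	 */
	#include <linux/errno.h>
	#include <linux/mm.h>
	#include <linux/kvm_para.h>

	static long roe_protect_pages(unsigned long start, unsigned long npages)
	{
		/* a0 = ROE_VERSION: ask which ROE revision the host implements */
		long ver = kvm_hypercall1(KVM_HC_ROE, ROE_VERSION);

		if (ver < 1)
			return -ENOSYS;	/* host built without CONFIG_KVM_ROE */

		if (start & ~PAGE_MASK)
			return -EINVAL;	/* a1 must be page aligned */

		/* a0 = ROE_MPROTECT, a1 = start GVA, a2 = page count */
		return kvm_hypercall3(KVM_HC_ROE, ROE_MPROTECT, start, npages);
	}

Note that the hypercall must run in guest ring 0; issued from user mode it
returns -KVM_ENOSYS, per the Notes section above.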
From patchwork Sat Oct 20 22:21:24 2018
X-Patchwork-Submitter: Ahmed Soliman
X-Patchwork-Id: 10650659
From: Ahmed Abd El Mawgood
Subject: [PATCH V4 2/5] KVM: X86: Adding arbitrary data pointer in kvm memslot iterator functions
Date: Sun, 21 Oct 2018 00:21:24 +0200
Message-Id: <20181020222127.6368-3-ahmedsoliman0x666@gmail.com>
In-Reply-To: <20181020222127.6368-1-ahmedsoliman0x666@gmail.com>
References: <20181020222127.6368-1-ahmedsoliman0x666@gmail.com>

This helps share data with the slot_level_handler callbacks. In my case I
need to share a counter for the pages traversed, to use it as an index into a
bitmap. Being able to pass an arbitrary pointer into the slot_level_handler
callback makes this straightforward.

Signed-off-by: Ahmed Abd El Mawgood
---
 arch/x86/kvm/mmu.c | 65 ++++++++++++++++++++++++++--------------------
 1 file changed, 37 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 51b953ad9d4e..cc36abe1ee44 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1486,7 +1486,7 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect)
 
 static bool __rmap_write_protect(struct kvm *kvm,
 				 struct kvm_rmap_head *rmap_head,
-				 bool pt_protect)
+				 bool pt_protect, void *data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1525,7 +1525,8 @@ static bool wrprot_ad_disabled_spte(u64 *sptep)
  *	- W bit on ad-disabled SPTEs.
  * Returns true iff any D or W bits were cleared.
  */
-static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+			       void *data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1551,7 +1552,8 @@ static bool spte_set_dirty(u64 *sptep)
 	return mmu_spte_update(sptep, spte);
 }
 
-static bool __rmap_set_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+static bool __rmap_set_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+			     void *data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1583,7 +1585,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PT_PAGE_TABLE_LEVEL, slot);
-		__rmap_write_protect(kvm, rmap_head, false);
+		__rmap_write_protect(kvm, rmap_head, false, NULL);
 
 		/* clear the first set bit */
 		mask &= mask - 1;
@@ -1609,7 +1611,7 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PT_PAGE_TABLE_LEVEL, slot);
-		__rmap_clear_dirty(kvm, rmap_head);
+		__rmap_clear_dirty(kvm, rmap_head, NULL);
 
 		/* clear the first set bit */
 		mask &= mask - 1;
@@ -1662,7 +1664,8 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 
 	for (i = PT_PAGE_TABLE_LEVEL; i <= PT_MAX_HUGEPAGE_LEVEL; ++i) {
 		rmap_head = __gfn_to_rmap(gfn, i, slot);
-		write_protected |= __rmap_write_protect(kvm, rmap_head, true);
+		write_protected |= __rmap_write_protect(kvm, rmap_head, true,
+							NULL);
 	}
 
 	return write_protected;
@@ -1676,7 +1679,8 @@ static bool rmap_write_protect(struct kvm_vcpu *vcpu, u64 gfn)
 	return kvm_mmu_slot_gfn_write_protect(vcpu->kvm, slot, gfn);
 }
 
-static bool kvm_zap_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+static bool kvm_zap_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+			  void *data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1696,7 +1700,7 @@ static int kvm_unmap_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			   struct kvm_memory_slot *slot, gfn_t gfn, int level,
 			   unsigned long data)
 {
-	return kvm_zap_rmapp(kvm, rmap_head);
+	return kvm_zap_rmapp(kvm, rmap_head, NULL);
 }
 
 static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
@@ -5465,13 +5469,15 @@ void kvm_mmu_uninit_vm(struct kvm *kvm)
 }
 
 /* The return value indicates if tlb flush on all vcpus is needed. */
-typedef bool (*slot_level_handler) (struct kvm *kvm, struct kvm_rmap_head *rmap_head);
+typedef bool (*slot_level_handler) (struct kvm *kvm,
+				    struct kvm_rmap_head *rmap_head, void *data);
 
 /* The caller should hold mmu-lock before calling this function. */
 static __always_inline bool
 slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
 			slot_level_handler fn, int start_level, int end_level,
-			gfn_t start_gfn, gfn_t end_gfn, bool lock_flush_tlb)
+			gfn_t start_gfn, gfn_t end_gfn, bool lock_flush_tlb,
+			void *data)
 {
 	struct slot_rmap_walk_iterator iterator;
 	bool flush = false;
@@ -5479,7 +5485,7 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	for_each_slot_rmap_range(memslot, start_level, end_level, start_gfn,
 			end_gfn, &iterator) {
 		if (iterator.rmap)
-			flush |= fn(kvm, iterator.rmap);
+			flush |= fn(kvm, iterator.rmap, data);
 
 		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
 			if (flush && lock_flush_tlb) {
@@ -5501,36 +5507,36 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
 static __always_inline bool
 slot_handle_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
 		  slot_level_handler fn, int start_level, int end_level,
-		  bool lock_flush_tlb)
+		  bool lock_flush_tlb, void *data)
 {
 	return slot_handle_level_range(kvm, memslot, fn, start_level,
 			end_level, memslot->base_gfn,
 			memslot->base_gfn + memslot->npages - 1,
-			lock_flush_tlb);
+			lock_flush_tlb, data);
 }
 
 static __always_inline bool
 slot_handle_all_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
-		      slot_level_handler fn, bool lock_flush_tlb)
+		      slot_level_handler fn, bool lock_flush_tlb, void *data)
 {
 	return slot_handle_level(kvm, memslot, fn, PT_PAGE_TABLE_LEVEL,
-				 PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb);
+				 PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb, data);
 }
 
 static __always_inline bool
 slot_handle_large_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
-			slot_level_handler fn, bool lock_flush_tlb)
+			slot_level_handler fn, bool lock_flush_tlb, void *data)
 {
 	return slot_handle_level(kvm, memslot, fn, PT_PAGE_TABLE_LEVEL + 1,
-				 PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb);
+				 PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb, data);
 }
 
 static __always_inline bool
 slot_handle_leaf(struct kvm *kvm, struct kvm_memory_slot *memslot,
-		 slot_level_handler fn, bool lock_flush_tlb)
+		 slot_level_handler fn, bool lock_flush_tlb, void *data)
 {
 	return slot_handle_level(kvm, memslot, fn, PT_PAGE_TABLE_LEVEL,
-				 PT_PAGE_TABLE_LEVEL, lock_flush_tlb);
+				 PT_PAGE_TABLE_LEVEL, lock_flush_tlb, data);
 }
 
 void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
@@ -5552,7 +5558,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 			slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
 						PT_PAGE_TABLE_LEVEL,
 						PT_MAX_HUGEPAGE_LEVEL,
-						start, end - 1, true);
+						start, end - 1, true, NULL);
 		}
 	}
 
@@ -5560,9 +5566,10 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 }
 
 static bool slot_rmap_write_protect(struct kvm *kvm,
-				    struct kvm_rmap_head *rmap_head)
+				    struct kvm_rmap_head *rmap_head,
+				    void *data)
 {
-	return __rmap_write_protect(kvm, rmap_head, false);
+	return __rmap_write_protect(kvm, rmap_head, false, data);
 }
 
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
@@ -5572,7 +5579,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 
 	spin_lock(&kvm->mmu_lock);
 	flush = slot_handle_all_level(kvm, memslot, slot_rmap_write_protect,
-				      false);
+				      false, NULL);
 	spin_unlock(&kvm->mmu_lock);
 
 	/*
@@ -5598,7 +5605,8 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 }
 
 static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
-					 struct kvm_rmap_head *rmap_head)
+					 struct kvm_rmap_head *rmap_head,
+					 void *data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -5636,7 +5644,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	/* FIXME: const-ify all uses of struct kvm_memory_slot. */
 	spin_lock(&kvm->mmu_lock);
 	slot_handle_leaf(kvm, (struct kvm_memory_slot *)memslot,
-			 kvm_mmu_zap_collapsible_spte, true);
+			 kvm_mmu_zap_collapsible_spte, true, NULL);
 	spin_unlock(&kvm->mmu_lock);
 }
 
@@ -5646,7 +5654,7 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 	bool flush;
 
 	spin_lock(&kvm->mmu_lock);
-	flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false);
+	flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false, NULL);
 	spin_unlock(&kvm->mmu_lock);
 
 	lockdep_assert_held(&kvm->slots_lock);
@@ -5669,7 +5677,7 @@ void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
 
 	spin_lock(&kvm->mmu_lock);
 	flush = slot_handle_large_level(kvm, memslot, slot_rmap_write_protect,
-					false);
+					false, NULL);
 	spin_unlock(&kvm->mmu_lock);
 
 	/* see kvm_mmu_slot_remove_write_access */
@@ -5686,7 +5694,8 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm,
 	bool flush;
 
 	spin_lock(&kvm->mmu_lock);
-	flush = slot_handle_all_level(kvm, memslot, __rmap_set_dirty, false);
+	flush = slot_handle_all_level(kvm, memslot, __rmap_set_dirty, false,
+				      NULL);
 	spin_unlock(&kvm->mmu_lock);
 
 	lockdep_assert_held(&kvm->slots_lock);
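To make the new callback contract concrete, here is a minimal sketch (not
from the patch) of a slot_level_handler that threads caller state through the
new void *data argument; count_rmaps() and count_slot_rmaps() are
hypothetical names, but the signature matches the typedef this patch
introduces:

	/* Handler side: data points at caller-owned state. */
	static bool count_rmaps(struct kvm *kvm,
				struct kvm_rmap_head *rmap_head, void *data)
	{
		unsigned long *visited = data;

		(*visited)++;
		return false;	/* no TLB flush required for counting */
	}

	/* Caller side, mirroring existing slot_handle_all_level() users: */
	static unsigned long count_slot_rmaps(struct kvm *kvm,
					      struct kvm_memory_slot *memslot)
	{
		unsigned long visited = 0;

		spin_lock(&kvm->mmu_lock);
		slot_handle_all_level(kvm, memslot, count_rmaps, false,
				      &visited);
		spin_unlock(&kvm->mmu_lock);
		return visited;
	}

Patch 3 uses exactly this pattern to carry a struct kvm_write_access_data
(a PTE-walk counter plus memslot pointer) into __rmap_write_protect().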
From patchwork Sat Oct 20 22:21:25 2018
X-Patchwork-Submitter: Ahmed Soliman
X-Patchwork-Id: 10650663
From: Ahmed Abd El Mawgood
Subject: [PATCH V4 3/5] KVM: X86: Adding skeleton for Memory ROE
Date: Sun, 21 Oct 2018 00:21:25 +0200
Message-Id: <20181020222127.6368-4-ahmedsoliman0x666@gmail.com>
In-Reply-To: <20181020222127.6368-1-ahmedsoliman0x666@gmail.com>
References: <20181020222127.6368-1-ahmedsoliman0x666@gmail.com>

This patch introduces a hypercall, implemented for x86, that can assist
against a subset of kernel rootkits. It works by placing read-only protection
in the shadow PTEs. The resulting protection is also kept in a bitmap for
each kvm_memory_slot and is used as a reference when updating SPTEs. The goal
is to protect the guest kernel's static data from modification even when the
attacker is running in guest ring 0; for this reason there is no hypercall to
revert the effect of the Memory ROE hypercall.

This patch doesn't implement integrity checks on the guest TLB, so an obvious
attack on the current implementation involves guest virtual address -> guest
physical address remapping; there are plans to fix that.
Signed-off-by: Ahmed Abd El Mawgood
---
 arch/x86/include/asm/kvm_host.h |  11 ++-
 arch/x86/kvm/Kconfig            |   7 ++
 arch/x86/kvm/mmu.c              |  72 +++++++++++++---
 arch/x86/kvm/x86.c              | 143 +++++++++++++++++++++++++++++++-
 include/linux/kvm_host.h        |   3 +
 include/uapi/linux/kvm_para.h   |   4 +
 virt/kvm/kvm_main.c             |  34 +++++++-
 7 files changed, 255 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 09b2e3e2cf1b..aa080c3e302e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -238,6 +238,15 @@ struct kvm_mmu_memory_cache {
 	void *objects[KVM_NR_MEM_OBJS];
 };
 
+/*
+ * This is an internal structure used to access the kvm memory slot and to
+ * keep track of the current PTE index while doing a shadow PTE walk.
+ */
+struct kvm_write_access_data {
+	int i;
+	struct kvm_memory_slot *memslot;
+};
+
 /*
  * the pages used as guest page table on soft mmu are tracked by
  * kvm_memory_slot.arch.gfn_track which is 16 bits, so the role bits used
@@ -1178,7 +1187,7 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 		u64 acc_track_mask, u64 me_mask);
 
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
-void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
+void kvm_mmu_slot_apply_write_access(struct kvm *kvm,
 				      struct kvm_memory_slot *memslot);
 void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 				   const struct kvm_memory_slot *memslot);
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 1bbec387d289..2fcbb1788a24 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -96,6 +96,13 @@ config KVM_MMU_AUDIT
 	 This option adds a R/W kVM module parameter 'mmu_audit', which allows
 	 auditing of KVM MMU events at runtime.
 
+config KVM_ROE
+	bool "Hypercall Memory Read-Only Enforcement"
+	depends on KVM && X86
+	help
+	 This option adds the KVM_HC_ROE hypercall to kvm as a hardening
+	 mechanism to protect memory pages from being edited.
+
 # OK, it's a little counter-intuitive to do this, but it puts it neatly under
 # the virtualization menu.
 source drivers/vhost/Kconfig
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index cc36abe1ee44..c54aa5287e14 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1484,9 +1484,8 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect)
 	return mmu_spte_update(sptep, spte);
 }
 
-static bool __rmap_write_protect(struct kvm *kvm,
-				 struct kvm_rmap_head *rmap_head,
-				 bool pt_protect, void *data)
+static bool __rmap_write_protection(struct kvm *kvm,
+		struct kvm_rmap_head *rmap_head, bool pt_protect)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1498,6 +1497,38 @@ static bool __rmap_write_protect(struct kvm *kvm,
 	return flush;
 }
 
+#ifdef CONFIG_KVM_ROE
+static bool __rmap_write_protect_roe(struct kvm *kvm,
+				     struct kvm_rmap_head *rmap_head,
+				     bool pt_protect,
+				     struct kvm_write_access_data *d)
+{
+	u64 *sptep;
+	struct rmap_iterator iter;
+	bool prot;
+	bool flush = false;
+
+	for_each_rmap_spte(rmap_head, &iter, sptep) {
+		prot = !test_bit(d->i, d->memslot->roe_bitmap) && pt_protect;
+		flush |= spte_write_protect(sptep, prot);
+		d->i++;
+	}
+	return flush;
+}
+#endif
+
+static bool __rmap_write_protect(struct kvm *kvm,
+				 struct kvm_rmap_head *rmap_head,
+				 bool pt_protect,
+				 struct kvm_write_access_data *d)
+{
+#ifdef CONFIG_KVM_ROE
+	if (d != NULL)
+		return __rmap_write_protect_roe(kvm, rmap_head, pt_protect, d);
+#endif
+	return __rmap_write_protection(kvm, rmap_head, pt_protect);
+}
+
 static bool spte_clear_dirty(u64 *sptep)
 {
 	u64 spte = *sptep;
@@ -1585,7 +1616,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PT_PAGE_TABLE_LEVEL, slot);
-		__rmap_write_protect(kvm, rmap_head, false, NULL);
+		__rmap_write_protection(kvm, rmap_head, false);
 
 		/* clear the first set bit */
 		mask &= mask - 1;
@@ -1661,11 +1692,15 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 	struct kvm_rmap_head *rmap_head;
 	int i;
 	bool write_protected = false;
+	struct kvm_write_access_data data = {
+		.i = 0,
+		.memslot = slot,
+	};
 
 	for (i = PT_PAGE_TABLE_LEVEL; i <= PT_MAX_HUGEPAGE_LEVEL; ++i) {
 		rmap_head = __gfn_to_rmap(gfn, i, slot);
 		write_protected |= __rmap_write_protect(kvm, rmap_head, true,
-							NULL);
+							&data);
 	}
 
 	return write_protected;
@@ -5569,21 +5604,36 @@ static bool slot_rmap_write_protect(struct kvm *kvm,
 				    struct kvm_rmap_head *rmap_head,
 				    void *data)
 {
-	return __rmap_write_protect(kvm, rmap_head, false, data);
+	return __rmap_write_protect(kvm, rmap_head, false,
+				    (struct kvm_write_access_data *)data);
 }
 
-void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
+static bool slot_rmap_apply_protection(struct kvm *kvm,
+				       struct kvm_rmap_head *rmap_head,
+				       void *data)
+{
+	struct kvm_write_access_data *d = (struct kvm_write_access_data *) data;
+	bool prot_mask = !(d->memslot->flags & KVM_MEM_READONLY);
+
+	return __rmap_write_protect(kvm, rmap_head, prot_mask, d);
+}
+
+void kvm_mmu_slot_apply_write_access(struct kvm *kvm,
 				     struct kvm_memory_slot *memslot)
 {
 	bool flush;
+	struct kvm_write_access_data data = {
+		.i = 0,
+		.memslot = memslot,
+	};
 
 	spin_lock(&kvm->mmu_lock);
-	flush = slot_handle_all_level(kvm, memslot, slot_rmap_write_protect,
-				      false, NULL);
+	flush = slot_handle_all_level(kvm, memslot, slot_rmap_apply_protection,
+				      false, &data);
 	spin_unlock(&kvm->mmu_lock);
 
 	/*
-	 * kvm_mmu_slot_remove_write_access() and kvm_vm_ioctl_get_dirty_log()
+	 * kvm_mmu_slot_apply_write_access() and kvm_vm_ioctl_get_dirty_log()
 	 * which do tlb flush out of mmu-lock should be serialized by
 	 * kvm->slots_lock otherwise tlb flush would be missed.
 	 */
@@ -5680,7 +5730,7 @@ void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
 					false, NULL);
 	spin_unlock(&kvm->mmu_lock);
 
-	/* see kvm_mmu_slot_remove_write_access */
+	/* see kvm_mmu_slot_apply_write_access */
 	lockdep_assert_held(&kvm->slots_lock);
 
 	if (flush)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ca717737347e..70f2b42a2f91 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4276,7 +4276,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 
 	/*
 	 * All the TLBs can be flushed out of mmu lock, see the comments in
-	 * kvm_mmu_slot_remove_write_access().
+	 * kvm_mmu_slot_apply_write_access().
 	 */
 	lockdep_assert_held(&kvm->slots_lock);
 	if (is_dirty)
@@ -6798,7 +6798,137 @@ static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
 }
 #endif
 
-/*
+#ifdef CONFIG_KVM_ROE
+static void kvm_roe_protect_slot(struct kvm *kvm, struct kvm_memory_slot *slot,
+				 gfn_t gfn, u64 npages)
+{
+	int i;
+
+	for (i = gfn - slot->base_gfn; i < gfn + npages - slot->base_gfn; i++)
+		set_bit(i, slot->roe_bitmap);
+	kvm_mmu_slot_apply_write_access(kvm, slot);
+	kvm_arch_flush_shadow_memslot(kvm, slot);
+}
+
+static int __kvm_roe_protect_range(struct kvm *kvm, gpa_t gpa, u64 npages)
+{
+	struct kvm_memory_slot *slot;
+	gfn_t gfn = gpa >> PAGE_SHIFT;
+	int count = 0;
+
+	while (npages != 0) {
+		slot = gfn_to_memslot(kvm, gfn);
+		if (!slot) {
+			gfn += 1;
+			npages -= 1;
+			continue;
+		}
+		if (gfn + npages > slot->base_gfn + slot->npages) {
+			u64 _npages = slot->base_gfn + slot->npages - gfn;
+
+			kvm_roe_protect_slot(kvm, slot, gfn, _npages);
+			gfn += _npages;
+			count += _npages;
+			npages -= _npages;
+		} else {
+			kvm_roe_protect_slot(kvm, slot, gfn, npages);
+			count += npages;
+			npages = 0;
+		}
+	}
+	if (count == 0)
+		return -EINVAL;
+	return count;
+}
+
+static int kvm_roe_protect_range(struct kvm *kvm, gpa_t gpa, u64 npages)
+{
+	int r;
+
+	mutex_lock(&kvm->slots_lock);
+	r = __kvm_roe_protect_range(kvm, gpa, npages);
+	mutex_unlock(&kvm->slots_lock);
+	return r;
+}
+
+static bool kvm_roe_userspace(struct kvm_vcpu *vcpu)
+{
+	u64 rflags;
+	u64 cr0 = kvm_read_cr0(vcpu);
+	u64 iopl;
+
+	// first checking we are not in protected mode
+	if ((cr0 & 1) == 0)
+		return false;
+	/*
+	 * we don't need to worry about comments in __get_regs
+	 * because we are sure that this function will only be
+	 * triggered at the end of a hypercall
+	 */
+	rflags = kvm_get_rflags(vcpu);
+	iopl = (rflags >> 12) & 3;
+	if (iopl != 3)
+		return false;
+	return true;
+}
+
+static int kvm_roe_full_protect_range(struct kvm_vcpu *vcpu, u64 gva,
+				      u64 npages)
+{
+	struct kvm *kvm = vcpu->kvm;
+	gpa_t gpa;
+	u64 hva;
+	u64 count = 0;
+	int i;
+	int status;
+
+	if (gva & ~PAGE_MASK)
+		return -EINVAL;
+	// We need to make sure that there will be no overflow
+	if ((npages << PAGE_SHIFT) >> PAGE_SHIFT != npages || npages == 0)
+		return -EINVAL;
+	for (i = 0; i < npages; i++) {
+		gpa = kvm_mmu_gva_to_gpa_system(vcpu, gva + (i << PAGE_SHIFT),
+						NULL);
+		hva = gfn_to_hva(kvm, gpa >> PAGE_SHIFT);
+		if (kvm_is_error_hva(hva))
+			continue;
+		if (!access_ok(VERIFY_WRITE, hva, 1 << PAGE_SHIFT))
+			continue;
+		status = kvm_roe_protect_range(vcpu->kvm, gpa, 1);
+		if (status > 0)
+			count += status;
+	}
+	if (count == 0)
+		return -EINVAL;
+	return count;
+}
+
+static int kvm_roe(struct kvm_vcpu *vcpu, u64 a0, u64 a1, u64 a2, u64 a3)
+{
+	int ret;
+	/*
+	 * First we need to make sure that we are running from something that
+	 * isn't usermode
+	 */
+	if (kvm_roe_userspace(vcpu))
+		return -KVM_ENOSYS;
+	switch (a0) {
+	case ROE_VERSION:
+		ret = 1; //current version
+		break;
+	case ROE_MPROTECT:
+		ret = kvm_roe_full_protect_range(vcpu, a1, a2);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+	return ret;
+}
+
+#endif
+
+/*
  * kvm_pv_kick_cpu_op:  Kick a vcpu.
  *
  * @apicid - apicid of vcpu to be kicked.
@@ -6868,6 +6998,11 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 	case KVM_HC_SEND_IPI:
 		ret = kvm_pv_send_ipi(vcpu->kvm, a0, a1, a2, a3, op_64_bit);
 		break;
+#endif
+#ifdef CONFIG_KVM_ROE
+	case KVM_HC_ROE:
+		ret = kvm_roe(vcpu, a0, a1, a2, a3);
+		break;
 #endif
 	default:
 		ret = -KVM_ENOSYS;
@@ -9119,8 +9254,8 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 				     struct kvm_memory_slot *new)
 {
 	/* Still write protect RO slot */
+	kvm_mmu_slot_apply_write_access(kvm, new);
 	if (new->flags & KVM_MEM_READONLY) {
-		kvm_mmu_slot_remove_write_access(kvm, new);
 		return;
 	}
 
@@ -9158,7 +9293,7 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 		if (kvm_x86_ops->slot_enable_log_dirty)
 			kvm_x86_ops->slot_enable_log_dirty(kvm, new);
 		else
-			kvm_mmu_slot_remove_write_access(kvm, new);
+			kvm_mmu_slot_apply_write_access(kvm, new);
 	} else {
 		if (kvm_x86_ops->slot_disable_log_dirty)
 			kvm_x86_ops->slot_disable_log_dirty(kvm, new);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c926698040e0..be6885bc28bc 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -297,6 +297,9 @@ static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
 struct kvm_memory_slot {
 	gfn_t base_gfn;
 	unsigned long npages;
+#ifdef CONFIG_KVM_ROE
+	unsigned long *roe_bitmap;
+#endif
 	unsigned long *dirty_bitmap;
 	struct kvm_arch_memory_slot arch;
 	unsigned long userspace_addr;
diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index 6c0ce49931e5..e6004e0750fd 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -28,7 +28,11 @@
 #define KVM_HC_MIPS_CONSOLE_OUTPUT	8
 #define KVM_HC_CLOCK_PAIRING		9
 #define KVM_HC_SEND_IPI		10
+#define KVM_HC_ROE		11
 
+/* ROE Functionality parameters */
+#define ROE_VERSION		0
+#define ROE_MPROTECT		1
 /*
  * hypercalls use architecture specific
  */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f986e31fa68c..423a9c014120 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -554,6 +554,11 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
 static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 			      struct kvm_memory_slot *dont)
 {
+#ifdef CONFIG_KVM_ROE
+	if (!dont)
+		kvfree(free->roe_bitmap);
+#endif
+
 	if (!dont || free->dirty_bitmap != dont->dirty_bitmap)
 		kvm_destroy_dirty_bitmap(free);
 
@@ -800,6 +805,17 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
 	return 0;
 }
 
+static int kvm_init_roe_bitmap(struct kvm_memory_slot *slot)
+{
+#ifdef CONFIG_KVM_ROE
+	slot->roe_bitmap = kvzalloc(BITS_TO_LONGS(slot->npages) *
+				    sizeof(unsigned long), GFP_KERNEL);
+	if (!slot->roe_bitmap)
+		return -ENOMEM;
+#endif
+	return 0;
+}
+
 /*
  * Insert memslot and re-sort memslots based on their GFN,
  * so binary search could be used to lookup GFN.
@@ -1017,6 +1033,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		if (kvm_create_dirty_bitmap(&new) < 0)
 			goto out_free;
 	}
+	if (kvm_init_roe_bitmap(&new) < 0)
+		goto out_free;
 
 	slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL);
 	if (!slots)
@@ -1270,13 +1288,23 @@ static bool memslot_is_readonly(struct kvm_memory_slot *slot)
 	return slot->flags & KVM_MEM_READONLY;
 }
 
+static bool gfn_is_readonly(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+#ifdef CONFIG_KVM_ROE
+	return test_bit(gfn - slot->base_gfn, slot->roe_bitmap) ||
+	       memslot_is_readonly(slot);
+#else
+	return memslot_is_readonly(slot);
+#endif
+}
+
 static unsigned long __gfn_to_hva_many(struct kvm_memory_slot *slot, gfn_t gfn,
 				       gfn_t *nr_pages, bool write)
 {
 	if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
 		return KVM_HVA_ERR_BAD;
 
-	if (memslot_is_readonly(slot) && write)
+	if (gfn_is_readonly(slot, gfn) && write)
 		return KVM_HVA_ERR_RO_BAD;
 
 	if (nr_pages)
@@ -1320,7 +1348,7 @@ unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot,
 	unsigned long hva = __gfn_to_hva_many(slot, gfn, NULL, false);
 
 	if (!kvm_is_error_hva(hva) && writable)
-		*writable = !memslot_is_readonly(slot);
+		*writable = !gfn_is_readonly(slot, gfn);
 
 	return hva;
 }
@@ -1558,7 +1586,7 @@ kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
 	}
 
 	/* Do not map writable pfn in the readonly memslot. */
-	if (writable && memslot_is_readonly(slot)) {
+	if (writable && gfn_is_readonly(slot, gfn)) {
 		*writable = false;
 		writable = NULL;
 	}
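The trickiest arithmetic in this patch is __kvm_roe_protect_range() splitting
a gfn range across memslots while skipping unmapped holes. Below is a small
user-space model of that chunking loop (illustrative only; struct slot,
find_slot() and protect_range() are made-up stand-ins for the real kvm
structures), which can be compiled and tested on its own:

	#include <stdint.h>

	struct slot { uint64_t base_gfn, npages; };

	/* toy lookup standing in for gfn_to_memslot(); NULL means a hole */
	static struct slot *find_slot(struct slot *slots, int n, uint64_t gfn)
	{
		for (int i = 0; i < n; i++)
			if (gfn >= slots[i].base_gfn &&
			    gfn < slots[i].base_gfn + slots[i].npages)
				return &slots[i];
		return 0;
	}

	/* mirrors the kernel loop: clamp to slot end, advance, skip holes */
	static long protect_range(struct slot *slots, int n, uint64_t gfn,
				  uint64_t npages)
	{
		long count = 0;

		while (npages != 0) {
			struct slot *s = find_slot(slots, n, gfn);
			uint64_t take;

			if (!s) {		/* unmapped gfn: skip one page */
				gfn++;
				npages--;
				continue;
			}
			take = s->base_gfn + s->npages - gfn;
			if (take > npages)
				take = npages;
			/* the real code sets bits in s->roe_bitmap here */
			count += take;
			gfn += take;
			npages -= take;
		}
		return count ? count : -1;	/* -EINVAL in the real code */
	}

As in the kernel version, a range that covers only holes yields an error, and
a range straddling two memslots is protected in two slot-sized pieces.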
From patchwork Sat Oct 20 22:21:26 2018
X-Patchwork-Submitter: Ahmed Soliman
X-Patchwork-Id: 10650669
From: Ahmed Abd El Mawgood
Subject: [PATCH V4 4/5] KVM: X86: Adding support for byte granular memory ROE
Date: Sun, 21 Oct 2018 00:21:26 +0200
Message-Id: <20181020222127.6368-5-ahmedsoliman0x666@gmail.com>
In-Reply-To: <20181020222127.6368-1-ahmedsoliman0x666@gmail.com>
References: <20181020222127.6368-1-ahmedsoliman0x666@gmail.com>

This patch documents and implements ROE_MPROTECT_CHUNK, a part of the ROE
hypercall designed to protect regions of a memory page with byte granularity.
This feature provides a key primitive for protecting against attacks that
involve page remapping; the remapping attack itself, however, will be
addressed in future patches.

Signed-off-by: Ahmed Abd El Mawgood
---
 Documentation/virtual/kvm/hypercalls.txt |   9 ++
 arch/x86/kvm/mmu.c                       |   6 +-
 arch/x86/kvm/x86.c                       | 156 +++++++++++++++++++++--
 include/linux/kvm_host.h                 |  26 ++++
 include/uapi/linux/kvm_para.h            |   1 +
 virt/kvm/kvm_main.c                      |  88 +++++++++++--
 6 files changed, 266 insertions(+), 20 deletions(-)

diff --git a/Documentation/virtual/kvm/hypercalls.txt b/Documentation/virtual/kvm/hypercalls.txt
index 8af64d826f03..8708d69a7725 100644
--- a/Documentation/virtual/kvm/hypercalls.txt
+++ b/Documentation/virtual/kvm/hypercalls.txt
@@ -164,6 +164,15 @@ This configuration lets a guest kernel have part of its read/write memory
 converted into read-only. This action is irreversible.
 Upon successful run, the number of pages protected is returned.
 
+Usage 3:
+	a0: ROE_MPROTECT_CHUNK	(requires version >= 2)
+	a1: Start address aligned to page boundary.
+	a2: Number of bytes to be protected.
+This configuration lets a guest kernel have part of its read/write memory
+converted into read-only with byte granularity. ROE_MPROTECT_CHUNK is
+relatively slow compared to ROE_MPROTECT. This action is irreversible.
+Upon successful run, the number of bytes protected is returned.
+
 Error codes:
 	-KVM_ENOSYS: system call being triggered from ring 3 or it is not
 	implemented.
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c54aa5287e14..c3d681bfa105 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1507,9 +1507,11 @@ static bool __rmap_write_protect_roe(struct kvm *kvm,
 	struct rmap_iterator iter;
 	bool prot;
 	bool flush = false;
-
+	void *full_bmp = d->memslot->roe_bitmap;
+	void *part_bmp = d->memslot->partial_roe_bitmap;
+
 	for_each_rmap_spte(rmap_head, &iter, sptep) {
-		prot = !test_bit(d->i, d->memslot->roe_bitmap) && pt_protect;
+		prot = !(test_bit(d->i, full_bmp) || test_bit(d->i, part_bmp));
+		prot = prot && pt_protect;
 		flush |= spte_write_protect(sptep, prot);
 		d->i++;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 70f2b42a2f91..0c767ddd26a2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6800,17 +6800,23 @@ static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
 
 #ifdef CONFIG_KVM_ROE
 static void kvm_roe_protect_slot(struct kvm *kvm, struct kvm_memory_slot *slot,
-				 gfn_t gfn, u64 npages)
+				 gfn_t gfn, u64 npages, bool partial)
 {
 	int i;
+	void *bitmap;
 
+	if (partial)
+		bitmap = slot->partial_roe_bitmap;
+	else
+		bitmap = slot->roe_bitmap;
 	for (i = gfn - slot->base_gfn; i < gfn + npages - slot->base_gfn; i++)
-		set_bit(i, slot->roe_bitmap);
+		set_bit(i, bitmap);
 	kvm_mmu_slot_apply_write_access(kvm, slot);
 	kvm_arch_flush_shadow_memslot(kvm, slot);
 }
 
-static int __kvm_roe_protect_range(struct kvm *kvm, gpa_t gpa, u64 npages)
+static int __kvm_roe_protect_range(struct kvm *kvm, gpa_t gpa, u64 npages,
+				   bool partial)
 {
 	struct kvm_memory_slot *slot;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
@@ -6826,12 +6832,12 @@ static int __kvm_roe_protect_range(struct kvm *kvm, gpa_t gpa, u64 npages)
 		if (gfn + npages > slot->base_gfn + slot->npages) {
 			u64 _npages = slot->base_gfn + slot->npages - gfn;
 
-			kvm_roe_protect_slot(kvm, slot, gfn, _npages);
+			kvm_roe_protect_slot(kvm, slot, gfn, _npages, partial);
 			gfn += _npages;
 			count += _npages;
 			npages -= _npages;
 		} else {
-			kvm_roe_protect_slot(kvm, slot, gfn, npages);
+			kvm_roe_protect_slot(kvm, slot, gfn, npages, partial);
 			count += npages;
 			npages = 0;
 		}
@@ -6841,12 +6847,13 @@ static int __kvm_roe_protect_range(struct kvm *kvm, gpa_t gpa, u64 npages)
 	return count;
 }
 
-static int kvm_roe_protect_range(struct kvm *kvm, gpa_t gpa, u64 npages)
+static int kvm_roe_protect_range(struct kvm *kvm, gpa_t gpa, u64 npages,
+				 bool partial)
 {
 	int r;
 
 	mutex_lock(&kvm->slots_lock);
-	r = __kvm_roe_protect_range(kvm, gpa, npages);
+	r = __kvm_roe_protect_range(kvm, gpa, npages, partial);
 	mutex_unlock(&kvm->slots_lock);
 	return r;
 }
@@ -6895,7 +6902,7 @@ static int kvm_roe_full_protect_range(struct kvm_vcpu *vcpu, u64 gva,
 			continue;
 		if (!access_ok(VERIFY_WRITE, hva, 1 << PAGE_SHIFT))
 			continue;
-		status = kvm_roe_protect_range(vcpu->kvm, gpa, 1);
+		status = kvm_roe_protect_range(vcpu->kvm, gpa, 1, false);
 		if (status > 0)
 			count += status;
 	}
@@ -6903,7 +6910,135 @@ static int kvm_roe_full_protect_range(struct kvm_vcpu *vcpu, u64 gva,
 		return -EINVAL;
 	return count;
 }
+
+static int kvm_roe_insert_chunk_next(struct list_head *pos, u64 gpa, u64 size)
+{
+	struct protected_chunk *chunk;
+
+	chunk = kvzalloc(sizeof(struct protected_chunk), GFP_KERNEL);
+	chunk->gpa = gpa;
+	chunk->size = size;
+	INIT_LIST_HEAD(&chunk->list);
+	list_add(&chunk->list, pos);
+	return size;
+}
+
+static int kvm_roe_expand_chunk(struct protected_chunk *pos, u64 gpa, u64 size)
+{
+	u64 old_ptr = pos->gpa;
+	u64 old_size = pos->size;
+
+	if (gpa < old_ptr)
+		pos->gpa = gpa;
+	if (gpa + size > old_ptr + old_size)
+		pos->size = gpa + size - pos->gpa;
+	return size;
+}
+
+static bool kvm_roe_merge_chunks(struct protected_chunk *chunk)
+{
+	/* attempt merging 2 consecutive chunks, given the first one */
+	struct protected_chunk *next = list_next_entry(chunk, list);
+
+	if (!kvm_roe_range_overlap(chunk, next->gpa, next->size))
+		return false;
+	kvm_roe_expand_chunk(chunk, next->gpa, next->size);
+	list_del(&next->list);
+	kvfree(next);
+	return true;
+}
+
+static int __kvm_roe_insert_chunk(struct kvm_memory_slot *slot, u64 gpa,
+				  u64 size)
+{
+	/* kvm->slots_lock must be acquired */
+	struct protected_chunk *pos;
+	struct list_head *head = slot->prot_list;
+
+	if (list_empty(head))
+		return kvm_roe_insert_chunk_next(head, gpa, size);
+	/*
+	 * pos here will never get deleted, but maybe the next one will;
+	 * that is why list_for_each_entry_safe is completely unsafe here
+	 */
+	list_for_each_entry(pos, head, list) {
+		if (kvm_roe_range_overlap(pos, gpa, size)) {
+			int ret = kvm_roe_expand_chunk(pos, gpa, size);
+
+			while (head != pos->list.next)
+				if (!kvm_roe_merge_chunks(pos))
+					break;
+			return ret;
+		}
+		if (pos->gpa > gpa) {
+			struct protected_chunk *prev;
+
+			prev = list_prev_entry(pos, list);
+			return kvm_roe_insert_chunk_next(&prev->list, gpa,
+							 size);
+		}
+	}
+	pos = list_last_entry(head, struct protected_chunk, list);
+
+	return kvm_roe_insert_chunk_next(&pos->list, gpa, size);
+}
+
+static int kvm_roe_insert_chunk(struct kvm *kvm, u64 gpa, u64 size)
+{
+	struct kvm_memory_slot *slot;
+	gfn_t gfn = gpa >> PAGE_SHIFT;
+	int ret;
+
+	mutex_lock(&kvm->slots_lock);
+	slot = gfn_to_memslot(kvm, gfn);
+	ret = __kvm_roe_insert_chunk(slot, gpa, size);
+	mutex_unlock(&kvm->slots_lock);
+	return ret;
+}
+
+static int kvm_roe_partial_page_protect(struct kvm_vcpu *vcpu, u64 gva,
+					u64 size)
+{
+	gpa_t gpa = kvm_mmu_gva_to_gpa_system(vcpu, gva, NULL);
+
+	kvm_roe_protect_range(vcpu->kvm, gpa, 1, true);
+	return kvm_roe_insert_chunk(vcpu->kvm, gpa, size);
+}
+
+static int kvm_roe_partial_protect(struct kvm_vcpu *vcpu, u64 gva, u64 size)
+{
+	u64 gva_start = gva;
+	u64 gva_end = gva + size;
+	u64 gpn_start = gva_start >> PAGE_SHIFT;
+	u64 gpn_end = gva_end >> PAGE_SHIFT;
+	u64 _size;
+	int count = 0;
+
+	// We need to make sure that there will be no overflow or zero size
+	if (gva_end <= gva_start)
+		return -EINVAL;
+
+	// protect the partial page at the start
+	if (gpn_end > gpn_start)
+		_size = PAGE_SIZE - (gva_start & PAGE_MASK) + 1;
+	else
+		_size = size;
+	size -= _size;
+	count += kvm_roe_partial_page_protect(vcpu, gva_start, _size);
+	// full protect in the middle pages
+	if (gpn_end - gpn_start > 1) {
+		int ret;
+		u64 _gva = (gpn_start + 1) << PAGE_SHIFT;
+		u64 npages = gpn_end - gpn_start - 1;
+
+		size -= npages << PAGE_SHIFT;
+		ret = kvm_roe_full_protect_range(vcpu, _gva, npages);
+		if (ret > 0)
+			count += ret << PAGE_SHIFT;
+	}
+	// protect the partial page at the end
+	if (size != 0)
+		count += kvm_roe_partial_page_protect(vcpu,
+						gpn_end << PAGE_SHIFT, size);
+	if (count == 0)
+		return -EINVAL;
+	return count;
+}
 
 static int kvm_roe(struct kvm_vcpu *vcpu, u64 a0, u64 a1, u64 a2, u64 a3)
 {
 	int ret;
@@ -6915,11 +7050,14 @@ static int kvm_roe(struct kvm_vcpu *vcpu, u64 a0, u64 a1, u64 a2, u64 a3)
 		return -KVM_ENOSYS;
 	switch (a0) {
 	case ROE_VERSION:
-		ret = 1; //current version
+		ret = 2; //current version
 		break;
 	case ROE_MPROTECT:
 		ret = kvm_roe_full_protect_range(vcpu, a1, a2);
 		break;
+	case ROE_MPROTECT_CHUNK:
+		ret = kvm_roe_partial_protect(vcpu, a1, a2);
+		break;
 	default:
 		ret = -EINVAL;
 	}
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index be6885bc28bc..a6749a52386b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -294,11 +294,37 @@ static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
  */
 #define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)
 
+#ifdef CONFIG_KVM_ROE
+/*
+ * This structure is used to hold memory areas that are to be protected in a
+ * memory frame with mixed page permissions.
+ */
+struct protected_chunk {
+	gpa_t gpa;
+	u64 size;
+	struct list_head list;
+};
+
+static inline bool kvm_roe_range_overlap(struct protected_chunk *chunk,
+					 gpa_t gpa, int len) {
+	/*
+	 * https://stackoverflow.com/questions/325933/
+	 * determine-whether-two-date-ranges-overlap
+	 * Assuming that it works, that link ^ provides a solution that is
+	 * better than anything I would ever come up with.
+	 */
+	return (gpa <= chunk->gpa + chunk->size - 1) &&
+	       (gpa + len - 1 >= chunk->gpa);
+}
+#endif
+
 struct kvm_memory_slot {
 	gfn_t base_gfn;
 	unsigned long npages;
 #ifdef CONFIG_KVM_ROE
 	unsigned long *roe_bitmap;
+	unsigned long *partial_roe_bitmap;
+	struct list_head *prot_list;
 #endif
 	unsigned long *dirty_bitmap;
 	struct kvm_arch_memory_slot arch;
diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index e6004e0750fd..4a84f974bc58 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -33,6 +33,7 @@
 /* ROE Functionality parameters */
 #define ROE_VERSION		0
 #define ROE_MPROTECT		1
+#define ROE_MPROTECT_CHUNK	2
 /*
  * hypercalls use architecture specific
  */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 423a9c014120..d4f36faacd29 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -555,10 +555,19 @@ static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 			      struct kvm_memory_slot *dont)
 {
 #ifdef CONFIG_KVM_ROE
-	if (!dont)
+	if (!dont) {
+		//TODO still this might leak
+		struct protected_chunk *pos, *n;
+		struct list_head *head = free->prot_list;
 		kvfree(free->roe_bitmap);
+		kvfree(free->partial_roe_bitmap);
+		list_for_each_entry_safe(pos, n, head, list) {
+			list_del(&pos->list);
+			kvfree(pos);
+		}
+		kvfree(free->prot_list);
+	}
 #endif
-
 	if (!dont || free->dirty_bitmap != dont->dirty_bitmap)
 		kvm_destroy_dirty_bitmap(free);
 
@@ -805,13 +814,22 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
 	return 0;
 }
 
-static int kvm_init_roe_bitmap(struct kvm_memory_slot *slot)
+static int kvm_init_roe(struct kvm_memory_slot *slot)
 {
 #ifdef CONFIG_KVM_ROE
 	slot->roe_bitmap = kvzalloc(BITS_TO_LONGS(slot->npages) *
 				    sizeof(unsigned long), GFP_KERNEL);
 	if (!slot->roe_bitmap)
 		return -ENOMEM;
+	slot->partial_roe_bitmap = kvzalloc(BITS_TO_LONGS(slot->npages) *
+					    sizeof(unsigned long), GFP_KERNEL);
+	if (!slot->partial_roe_bitmap) {
+		kvfree(slot->roe_bitmap);
+		return -ENOMEM;
+	}
+	slot->prot_list = kvzalloc(sizeof(struct list_head), GFP_KERNEL);
+	INIT_LIST_HEAD(slot->prot_list);
 #endif
 	return 0;
 }
@@ -1033,7 +1051,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		if (kvm_create_dirty_bitmap(&new) < 0)
 			goto out_free;
 	}
-	if (kvm_init_roe_bitmap(&new) < 0)
+	if (kvm_init_roe(&new) < 0)
 		goto out_free;
 
 	slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL);
@@ -1287,26 +1305,37 @@ static bool memslot_is_readonly(struct kvm_memory_slot *slot)
 {
 	return slot->flags & KVM_MEM_READONLY;
 }
+
+#ifdef CONFIG_KVM_ROE
+static bool gfn_is_partially_protected(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	return test_bit(gfn - slot->base_gfn, slot->partial_roe_bitmap);
+}
+
+static bool gfn_is_fully_protected(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	return test_bit(gfn - slot->base_gfn, slot->roe_bitmap);
+}
+#endif
 
 static bool gfn_is_readonly(struct kvm_memory_slot *slot, gfn_t gfn)
 {
 #ifdef CONFIG_KVM_ROE
-	return test_bit(gfn - slot->base_gfn, slot->roe_bitmap) ||
-	       memslot_is_readonly(slot);
+	return gfn_is_fully_protected(slot, gfn) ||
+	       gfn_is_partially_protected(slot, gfn) ||
+	       memslot_is_readonly(slot);
 #else
 	return memslot_is_readonly(slot);
 #endif
 }
+
 static unsigned long __gfn_to_hva_many(struct kvm_memory_slot *slot, gfn_t gfn,
 				       gfn_t *nr_pages, bool write)
 {
 	if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
 		return KVM_HVA_ERR_BAD;
-
 	if (gfn_is_readonly(slot, gfn) && write)
 		return KVM_HVA_ERR_RO_BAD;
-
 	if (nr_pages)
 		*nr_pages = slot->npages - (gfn - slot->base_gfn);
@@ -1864,14 +1893,55 @@ int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa,
 	return __kvm_read_guest_atomic(slot, gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic);
+
+#ifdef CONFIG_KVM_ROE
+static bool kvm_roe_protected_range(struct kvm_memory_slot *slot, gpa_t gpa,
+				    int len)
+{
+	struct list_head *pos;
+	struct protected_chunk *cur_chunk;
+
+	list_for_each(pos, slot->prot_list) {
+		cur_chunk = list_entry(pos, struct protected_chunk, list);
+		if (kvm_roe_range_overlap(cur_chunk, gpa, len))
+			return true;
+	}
+	return false;
+}
+
+static bool kvm_roe_check_range(struct kvm_memory_slot *slot,
+				gfn_t gfn, int offset, int len)
+{
+	gpa_t gpa = (gfn << PAGE_SHIFT) + offset;
+
+	if (!gfn_is_partially_protected(slot, gfn))
+		return false;
+	return kvm_roe_protected_range(slot, gpa, len);
+}
+#endif
+
+static u64 roe_gfn_to_hva(struct kvm_memory_slot *slot, gfn_t gfn, int offset,
+			  int len)
+{
+	u64 addr;
+#ifdef CONFIG_KVM_ROE
+	if (kvm_roe_check_range(slot, gfn, offset, len))
+		return KVM_HVA_ERR_RO_BAD;
+	if (memslot_is_readonly(slot))
+		return KVM_HVA_ERR_RO_BAD;
+	if (gfn_is_fully_protected(slot, gfn))
+		return KVM_HVA_ERR_RO_BAD;
+	addr = __gfn_to_hva_many(slot, gfn, NULL, false);
+#else
+	addr = gfn_to_hva_memslot(slot, gfn);
+#endif
+	return addr;
+}
+
 static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
 				  const void *data, int offset, int len)
 {
 	int r;
 	unsigned long addr;
 
-	addr = gfn_to_hva_memslot(memslot, gfn);
+	addr = roe_gfn_to_hva(memslot, gfn, offset, len);
 	if (kvm_is_error_hva(addr))
 		return -EFAULT;
 	r = __copy_to_user((void __user *)addr + offset, data, len);
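Since correctness of the chunk list hinges on kvm_roe_range_overlap(), here
is a small standalone check of the closed-interval overlap predicate used
above (illustrative only; main() and the sample values are made up):

	#include <assert.h>
	#include <stdint.h>

	/*
	 * Mirror of kvm_roe_range_overlap(): byte ranges [a, a+alen) and
	 * [b, b+blen) overlap iff each starts no later than the other ends.
	 */
	static int ranges_overlap(uint64_t a, uint64_t alen,
				  uint64_t b, uint64_t blen)
	{
		return (b <= a + alen - 1) && (b + blen - 1 >= a);
	}

	int main(void)
	{
		assert(ranges_overlap(0x1000, 16, 0x100f, 1));	/* last byte */
		assert(!ranges_overlap(0x1000, 16, 0x1010, 1));	/* adjacent */
		assert(ranges_overlap(0x1000, 16, 0x0fff, 2));	/* straddles */
		return 0;
	}

This is the predicate that decides both when an inserted chunk is expanded
into an existing one and when two neighbouring chunks get merged.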
From patchwork Sat Oct 20 22:21:27 2018
X-Patchwork-Submitter: Ahmed Soliman
X-Patchwork-Id: 10650677
From: Ahmed Abd El Mawgood
Subject: [PATCH V4 5/5] KVM: Small Refactoring to kvm_free_memslot
Date: Sun, 21 Oct 2018 00:21:27 +0200
Message-Id: <20181020222127.6368-6-ahmedsoliman0x666@gmail.com>
In-Reply-To: <20181020222127.6368-1-ahmedsoliman0x666@gmail.com>
References: <20181020222127.6368-1-ahmedsoliman0x666@gmail.com>

This should be a little more readable and less prone to memory leaks.

Signed-off-by: Ahmed Abd El Mawgood
---
 virt/kvm/kvm_main.c | 15 +++++++-------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d4f36faacd29..75b5b2c987e9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -552,11 +552,11 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
  * Free any memory in @free but not in @dont.
  */
 static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
-			     struct kvm_memory_slot *dont)
+			     struct kvm_memory_slot *dont,
+			     enum kvm_mr_change change)
 {
+	if (change == KVM_MR_DELETE) {
 #ifdef CONFIG_KVM_ROE
-	if (!dont) {
-		//TODO still this might leak
 		struct protected_chunk *pos, *n;
 		struct list_head *head = free->prot_list;
+
 		kvfree(free->roe_bitmap);
@@ -566,10 +566,9 @@ static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 			kvfree(pos);
 		}
 		kvfree(free->prot_list);
-	}
 #endif
-	if (!dont || free->dirty_bitmap != dont->dirty_bitmap)
 		kvm_destroy_dirty_bitmap(free);
+	}
 
 	kvm_arch_free_memslot(kvm, free, dont);
 
@@ -584,7 +583,7 @@ static void kvm_free_memslots(struct kvm *kvm, struct kvm_memslots *slots)
 		return;
 
 	kvm_for_each_memslot(memslot, slots)
-		kvm_free_memslot(kvm, memslot, NULL);
+		kvm_free_memslot(kvm, memslot, NULL, KVM_MR_DELETE);
 
 	kvfree(slots);
 }
@@ -1097,14 +1096,14 @@ int __kvm_set_memory_region(struct kvm *kvm,
 
 	kvm_arch_commit_memory_region(kvm, mem, &old, &new, change);
 
-	kvm_free_memslot(kvm, &old, &new);
+	kvm_free_memslot(kvm, &old, &new, change);
 	kvfree(old_memslots);
 	return 0;
 
 out_slots:
 	kvfree(slots);
 out_free:
-	kvm_free_memslot(kvm, &new, &old);
+	kvm_free_memslot(kvm, &new, &old, change);
 out:
 	return r;
 }
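Putting the whole series together, a guest that wants byte-granular sealing
would pair the version probe with the ROE_MPROTECT_CHUNK function from patch
4. The sketch below is illustrative, not part of the series; roe_seal_bytes()
is a hypothetical wrapper, and the byte range passed in stands for whatever
object the guest wants sealed:

	/*
	 * Illustrative guest-side sketch, assuming the constants from this
	 * series are visible through <uapi/linux/kvm_para.h>.
	 */
	#include <linux/errno.h>
	#include <linux/kvm_para.h>

	static long roe_seal_bytes(unsigned long start, unsigned long len)
	{
		long ver = kvm_hypercall1(KVM_HC_ROE, ROE_VERSION);

		if (ver < 2)
			return -ENOSYS;	/* chunk support appeared in version 2 */

		/* returns the number of bytes protected, or a negative error */
		return kvm_hypercall3(KVM_HC_ROE, ROE_MPROTECT_CHUNK, start, len);
	}

As with ROE_MPROTECT, the call is irreversible and must be issued from guest
ring 0.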