From patchwork Fri Sep 21 10:01:46 2018
X-Patchwork-Submitter: Paul Mackerras
X-Patchwork-Id: 10609595
From: Paul Mackerras
To: kvm-ppc@vger.kernel.org, kvm@vger.kernel.org
Cc: David Gibson
Subject: [RFC PATCH 15/32] KVM: PPC: Book3S HV: Clear partition table entry on vm teardown
Date: Fri, 21 Sep 2018 20:01:46 +1000
Message-Id: <1537524123-9578-16-git-send-email-paulus@ozlabs.org>
In-Reply-To: <1537524123-9578-1-git-send-email-paulus@ozlabs.org>
References: <1537524123-9578-1-git-send-email-paulus@ozlabs.org>

From: Suraj Jitindar Singh

When destroying a VM we return the LPID to the pool, but we never zero
the partition table entry; that is instead done when the LPID is next
allocated. Zero the partition table entry on VM teardown, before
returning the LPID to the pool. If we were running as a nested
hypervisor, the real hypervisor could then use this to determine when
it can free the resources associated with the guest.
Signed-off-by: Suraj Jitindar Singh
Signed-off-by: Paul Mackerras
Reviewed-by: David Gibson
---
 arch/powerpc/kvm/book3s_hv.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index be8c863..82d6668 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4499,13 +4499,19 @@ static void kvmppc_core_destroy_vm_hv(struct kvm *kvm)
 
 	kvmppc_free_vcores(kvm);
 
-	kvmppc_free_lpid(kvm->arch.lpid);
 
 	if (kvm_is_radix(kvm))
 		kvmppc_free_radix(kvm);
 	else
 		kvmppc_free_hpt(&kvm->arch.hpt);
 
+	/* Perform global invalidation and return lpid to the pool */
+	if (cpu_has_feature(CPU_FTR_ARCH_300)) {
+		kvm->arch.process_table = 0;
+		kvmppc_setup_partition_table(kvm);
+	}
+	kvmppc_free_lpid(kvm->arch.lpid);
+
 	kvmppc_free_pimap(kvm);
 }
 
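
For readers following the series, here is a minimal annotated sketch of the
teardown ordering the hunk above establishes on POWER9 (CPU_FTR_ARCH_300).
It is an illustration only, not part of the patch: the wrapper function name
is made up for this note, and the calls inside it are simply the ones added
by the diff.

/*
 * Illustration only -- not part of the patch.  teardown_lpid_sketch() is
 * an invented name; the calls it makes are those added by the hunk above.
 */
static void teardown_lpid_sketch(struct kvm *kvm)
{
	if (cpu_has_feature(CPU_FTR_ARCH_300)) {
		/* Forget the guest's process table ... */
		kvm->arch.process_table = 0;
		/*
		 * ... and write back the partition table entry for this
		 * LPID, which (per the patch description) clears it now
		 * rather than when the LPID is next allocated.
		 */
		kvmppc_setup_partition_table(kvm);
	}
	/* Only once the entry is cleared is the LPID returned to the pool. */
	kvmppc_free_lpid(kvm->arch.lpid);
}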