From patchwork Thu Jul 19 02:25:10 2018
X-Patchwork-Submitter: Sam Bobroff
X-Patchwork-Id: 10533521
From: Sam Bobroff
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm@vger.kernel.org, kvm-ppc@vger.kernel.org, paulus@samba.org,
    david@gibson.dropbear.id.au, clg@kaod.org
Subject: [PATCH v3 1/1] KVM: PPC: Book3S HV: pack VCORE IDs to access full VCPU ID space
Date: Thu, 19 Jul 2018 12:25:10 +1000
Message-Id: <1fb3aea5f44f1029866ee10db40abde7e18b24ad.1531967105.git.sbobroff@linux.ibm.com>

It is not currently possible to create the full number of possible VCPUs
(KVM_MAX_VCPUS) on Power9 with KVM-HV when the guest uses fewer threads per
core than its core stride (or "VSMT mode"). This is because the VCORE ID and
XIVE offsets grow beyond KVM_MAX_VCPUS even though the VCPU ID is less than
KVM_MAX_VCPU_ID.

To address this, "pack" the VCORE ID and XIVE offsets by using knowledge of
the way the VCPU IDs will be used when there are fewer guest threads per core
than the core stride. The primary thread of each core will always be used
first. Then, if the guest uses more than one thread per core, these secondary
threads will sequentially follow the primary in each core.

So the only way an ID above KVM_MAX_VCPUS can be seen is if the VCPUs are
being spaced apart: at least half of each core is then empty, and IDs between
KVM_MAX_VCPUS and (KVM_MAX_VCPUS * 2) can be mapped into the second half of
each core (4..7, in an 8-thread core). Similarly, if IDs above
KVM_MAX_VCPUS * 2 are seen, at least 3/4 of each core is being left empty, and
we can map down into the spare quarters of each core (2, 3 and 6, 7 in an
8-thread core).
Lastly, if IDs above KVM_MAX_VCPUS * 4 are seen, only the primary threads are
being used, 7/8 of each core is empty, and the 1, 3, 5 and 7 thread slots are
free. (Strides less than 8 are handled similarly.)

This allows the VCORE ID or offset to be calculated quickly from the VCPU ID
or XIVE server numbers, without access to the VCPU structure.

Signed-off-by: Sam Bobroff
Reviewed-by: David Gibson
Reviewed-by: Cédric Le Goater
---
Hello everyone,

I've completed a trial merge with the guest native-XIVE code and found no
problems; it's no more difficult than the host side and only requires a few
calls to xive_vp(). On that basis, here is v3 (unchanged from v2) as non-RFC;
it seems ready to go.

Patch set v3:
Patch 1/1: KVM: PPC: Book3S HV: pack VCORE IDs to access full VCPU ID space

Patch set v2:
Patch 1/1: KVM: PPC: Book3S HV: pack VCORE IDs to access full VCPU ID space
* Corrected places in kvm/book3s_xive.c where IDs weren't packed.
* Because kvmppc_pack_vcpu_id() is only called on P9, there is no need to
  test "emul_smt_mode > 1", so remove it.
* Re-ordered block_offsets[] to be closer to ascending order.
* Added more detailed description of the packing algorithm.

Patch set v1:
Patch 1/1: KVM: PPC: Book3S HV: pack VCORE IDs to access full VCPU ID space

 arch/powerpc/include/asm/kvm_book3s.h | 44 +++++++++++++++++++++++++++++++++++
 arch/powerpc/kvm/book3s_hv.c          | 14 +++++++----
 arch/powerpc/kvm/book3s_xive.c        | 19 +++++++++------
 3 files changed, 66 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 1f345a0b6ba2..ba4b6e00fca7 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -390,4 +390,48 @@ extern int kvmppc_h_logical_ci_store(struct kvm_vcpu *vcpu);
 #define SPLIT_HACK_MASK		0xff000000
 #define SPLIT_HACK_OFFS		0xfb000000
 
+/* Pack a VCPU ID from the [0..KVM_MAX_VCPU_ID) space down to the
+ * [0..KVM_MAX_VCPUS) space, while using knowledge of the guest's core stride
+ * (but not its actual threading mode, which is not available) to avoid
+ * collisions.
+ *
+ * The implementation leaves VCPU IDs from the range [0..KVM_MAX_VCPUS) (block
+ * 0) unchanged: if the guest is filling each VCORE completely then it will be
+ * using consecutive IDs and it will fill the space without any packing.
+ *
+ * For higher VCPU IDs, the packed ID is based on the VCPU ID modulo
+ * KVM_MAX_VCPUS (effectively masking off the top bits) and then an offset is
+ * added to avoid collisions.
+ *
+ * VCPU IDs in the range [KVM_MAX_VCPUS..(KVM_MAX_VCPUS*2)) (block 1) are only
+ * possible if the guest is leaving at least 1/2 of each VCORE empty, so IDs
+ * can be safely packed into the second half of each VCORE by adding an offset
+ * of (stride / 2).
+ *
+ * Similarly, if VCPU IDs in the range [(KVM_MAX_VCPUS*2)..(KVM_MAX_VCPUS*4))
+ * (blocks 2 and 3) are seen, the guest must be leaving at least 3/4 of each
+ * VCORE empty so packed IDs can be offset by (stride / 4) and (stride * 3 / 4).
+ *
+ * Finally, VCPU IDs from blocks 4..7 will only be seen if the guest is using a
+ * stride of 8 and 1 thread per core so the remaining offsets of 1, 3, 5 and 7
+ * must be free to use.
+ *
+ * (The offsets for each block are stored in block_offsets[], indexed by the
+ * block number if the stride is 8. For cases where the guest's stride is less
+ * than 8, we can re-use the block_offsets array by multiplying the block
+ * number by (MAX_SMT_THREADS / stride) to reach the correct entry.)
+ */
+static inline u32 kvmppc_pack_vcpu_id(struct kvm *kvm, u32 id)
+{
+	const int block_offsets[MAX_SMT_THREADS] = {0, 4, 2, 6, 1, 3, 5, 7};
+	int stride = kvm->arch.emul_smt_mode;
+	int block = (id / KVM_MAX_VCPUS) * (MAX_SMT_THREADS / stride);
+	u32 packed_id;
+
+	BUG_ON(block >= MAX_SMT_THREADS);
+	packed_id = (id % KVM_MAX_VCPUS) + block_offsets[block];
+	BUG_ON(packed_id >= KVM_MAX_VCPUS);
+	return packed_id;
+}
+
 #endif /* __ASM_KVM_BOOK3S_H__ */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index de686b340f4a..363c2fb0d89e 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1816,7 +1816,7 @@ static int threads_per_vcore(struct kvm *kvm)
 	return threads_per_subcore;
 }
 
-static struct kvmppc_vcore *kvmppc_vcore_create(struct kvm *kvm, int core)
+static struct kvmppc_vcore *kvmppc_vcore_create(struct kvm *kvm, int id)
 {
 	struct kvmppc_vcore *vcore;
 
@@ -1830,7 +1830,7 @@ static struct kvmppc_vcore *kvmppc_vcore_create(struct kvm *kvm, int core)
 	init_swait_queue_head(&vcore->wq);
 	vcore->preempt_tb = TB_NIL;
 	vcore->lpcr = kvm->arch.lpcr;
-	vcore->first_vcpuid = core * kvm->arch.smt_mode;
+	vcore->first_vcpuid = id;
 	vcore->kvm = kvm;
 	INIT_LIST_HEAD(&vcore->preempt_list);
 
@@ -2048,12 +2048,18 @@ static struct kvm_vcpu *kvmppc_core_vcpu_create_hv(struct kvm *kvm,
 	mutex_lock(&kvm->lock);
 	vcore = NULL;
 	err = -EINVAL;
-	core = id / kvm->arch.smt_mode;
+	if (cpu_has_feature(CPU_FTR_ARCH_300)) {
+		BUG_ON(kvm->arch.smt_mode != 1);
+		core = kvmppc_pack_vcpu_id(kvm, id);
+	} else {
+		core = id / kvm->arch.smt_mode;
+	}
 	if (core < KVM_MAX_VCORES) {
 		vcore = kvm->arch.vcores[core];
+		BUG_ON(cpu_has_feature(CPU_FTR_ARCH_300) && vcore);
 		if (!vcore) {
 			err = -ENOMEM;
-			vcore = kvmppc_vcore_create(kvm, core);
+			vcore = kvmppc_vcore_create(kvm, id & ~(kvm->arch.smt_mode - 1));
 			kvm->arch.vcores[core] = vcore;
 			kvm->arch.online_vcores++;
 		}
diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c
index f9818d7d3381..dbd5887daf4a 100644
--- a/arch/powerpc/kvm/book3s_xive.c
+++ b/arch/powerpc/kvm/book3s_xive.c
@@ -317,6 +317,11 @@ static int xive_select_target(struct kvm *kvm, u32 *server, u8 prio)
 	return -EBUSY;
 }
 
+static u32 xive_vp(struct kvmppc_xive *xive, u32 server)
+{
+	return xive->vp_base + kvmppc_pack_vcpu_id(xive->kvm, server);
+}
+
 static u8 xive_lock_and_mask(struct kvmppc_xive *xive,
 			     struct kvmppc_xive_src_block *sb,
 			     struct kvmppc_xive_irq_state *state)
@@ -362,7 +367,7 @@ static u8 xive_lock_and_mask(struct kvmppc_xive *xive,
 	 */
 	if (xd->flags & OPAL_XIVE_IRQ_MASK_VIA_FW) {
 		xive_native_configure_irq(hw_num,
-					  xive->vp_base + state->act_server,
+					  xive_vp(xive, state->act_server),
 					  MASKED, state->number);
 		/* set old_p so we can track if an H_EOI was done */
 		state->old_p = true;
@@ -418,7 +423,7 @@ static void xive_finish_unmask(struct kvmppc_xive *xive,
 	 */
 	if (xd->flags & OPAL_XIVE_IRQ_MASK_VIA_FW) {
 		xive_native_configure_irq(hw_num,
-					  xive->vp_base + state->act_server,
+					  xive_vp(xive, state->act_server),
 					  state->act_priority, state->number);
 		/* If an EOI is needed, do it here */
 		if (!state->old_p)
@@ -495,7 +500,7 @@ static int xive_target_interrupt(struct kvm *kvm,
 	kvmppc_xive_select_irq(state, &hw_num, NULL);
 
 	return xive_native_configure_irq(hw_num,
-					 xive->vp_base + server,
+					 xive_vp(xive, server),
 					 prio, state->number);
 }
 
@@ -883,7 +888,7 @@ int kvmppc_xive_set_mapped(struct kvm *kvm, unsigned long guest_irq,
 	 * which is fine for a never started interrupt.
 	 */
 	xive_native_configure_irq(hw_irq,
-				  xive->vp_base + state->act_server,
+				  xive_vp(xive, state->act_server),
 				  state->act_priority, state->number);
 
 	/*
@@ -959,7 +964,7 @@ int kvmppc_xive_clr_mapped(struct kvm *kvm, unsigned long guest_irq,
 
 	/* Reconfigure the IPI */
 	xive_native_configure_irq(state->ipi_number,
-				  xive->vp_base + state->act_server,
+				  xive_vp(xive, state->act_server),
 				  state->act_priority, state->number);
 
 	/*
@@ -1084,7 +1089,7 @@ int kvmppc_xive_connect_vcpu(struct kvm_device *dev,
 		pr_devel("Duplicate !\n");
 		return -EEXIST;
 	}
-	if (cpu >= KVM_MAX_VCPUS) {
+	if (cpu >= KVM_MAX_VCPU_ID) {
 		pr_devel("Out of bounds !\n");
 		return -EINVAL;
 	}
@@ -1098,7 +1103,7 @@ int kvmppc_xive_connect_vcpu(struct kvm_device *dev,
 	xc->xive = xive;
 	xc->vcpu = vcpu;
 	xc->server_num = cpu;
-	xc->vp_id = xive->vp_base + cpu;
+	xc->vp_id = xive_vp(xive, cpu);
 	xc->mfrr = 0xff;
 	xc->valid = true;
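
To illustrate how the packing behaves, here is a small standalone userspace
sketch of the same calculation (illustration only, not part of the patch):
pack_vcpu_id() mirrors kvmppc_pack_vcpu_id() above, and the KVM_MAX_VCPUS /
MAX_SMT_THREADS values are stand-ins chosen to keep the output readable, not
the kernel's real configuration.

#include <assert.h>
#include <stdio.h>

#define MAX_SMT_THREADS	8
#define KVM_MAX_VCPUS	16	/* stand-in; the real value is much larger */

/* Mirrors kvmppc_pack_vcpu_id(), with the stride passed in directly
 * instead of being read from kvm->arch.emul_smt_mode. */
static unsigned int pack_vcpu_id(int stride, unsigned int id)
{
	const int block_offsets[MAX_SMT_THREADS] = {0, 4, 2, 6, 1, 3, 5, 7};
	int block = (id / KVM_MAX_VCPUS) * (MAX_SMT_THREADS / stride);

	assert(block < MAX_SMT_THREADS);
	return (id % KVM_MAX_VCPUS) + block_offsets[block];
}

int main(void)
{
	/* A stride-8 guest running 1 thread per core uses IDs 0, 8, 16, ...
	 * Each multiple of KVM_MAX_VCPUS starts a new block, which is
	 * offset onto a different (otherwise empty) thread slot. */
	unsigned int id;

	for (id = 0; id < 5 * KVM_MAX_VCPUS; id += 8)
		printf("vcpu id %2u -> packed %2u\n", id, pack_vcpu_id(8, id));
	return 0;
}

With these stand-in values, the IDs 0, 8, 16, ... 72 pack to 0, 8, 4, 12, 2,
10, 6, 14, 1 and 9: all distinct and all below KVM_MAX_VCPUS, which is the
property the BUG_ON()s in the real function assert.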