From patchwork Wed Nov 11 20:07:12 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11898475
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Wei Liu, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH 01/10] viridian: move flush hypercall implementation into separate function
Date: Wed, 11 Nov 2020 20:07:12 +0000
Message-Id: <20201111200721.30551-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201111200721.30551-1-paul@xen.org>
References: <20201111200721.30551-1-paul@xen.org>

From: Paul Durrant

This patch moves the implementation of
HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST, currently inline in
viridian_hypercall(), into a new hvcall_flush() function. The new
function returns Xen error values, which are then dealt with
appropriately.

A return value of -ERESTART translates to viridian_hypercall() returning
HVM_HCALL_preempted. Other return values translate to status codes and
viridian_hypercall() returning HVM_HCALL_completed. Currently the only
values, other than -ERESTART, returned by hvcall_flush() are 0
(indicating success) or -EINVAL.

Signed-off-by: Paul Durrant
---
Cc: Wei Liu
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: "Roger Pau Monné"
---
 xen/arch/x86/hvm/viridian/viridian.c | 130 ++++++++++++++++-----------
 1 file changed, 78 insertions(+), 52 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index dc7183a54627..f1a1b6a8af82 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -518,6 +518,69 @@ static bool need_flush(void *ctxt, struct vcpu *v)
     return vcpu_mask & (1ul << v->vcpu_id);
 }
 
+union hypercall_input {
+    uint64_t raw;
+    struct {
+        uint16_t call_code;
+        uint16_t fast:1;
+        uint16_t rsvd1:15;
+        uint16_t rep_count:12;
+        uint16_t rsvd2:4;
+        uint16_t rep_start:12;
+        uint16_t rsvd3:4;
+    };
+};
+
+union hypercall_output {
+    uint64_t raw;
+    struct {
+        uint16_t result;
+        uint16_t rsvd1;
+        uint32_t rep_complete:12;
+        uint32_t rsvd2:20;
+    };
+};
+
+static int hvcall_flush(union hypercall_input *input,
+                        union hypercall_output *output,
+                        unsigned long input_params_gpa,
+                        unsigned long output_params_gpa)
+{
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        uint64_t vcpu_mask;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    /*
+     * It is not clear from the spec. if we are supposed to
+     * include current virtual CPU in the set or not in this case,
+     * so err on the safe side.
+     */
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        input_params.vcpu_mask = ~0ul;
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
@@ -525,29 +588,8 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
     uint16_t status = HV_STATUS_SUCCESS;
-
-    union hypercall_input {
-        uint64_t raw;
-        struct {
-            uint16_t call_code;
-            uint16_t fast:1;
-            uint16_t rsvd1:15;
-            uint16_t rep_count:12;
-            uint16_t rsvd2:4;
-            uint16_t rep_start:12;
-            uint16_t rsvd3:4;
-        };
-    } input;
-
-    union hypercall_output {
-        uint64_t raw;
-        struct {
-            uint16_t result;
-            uint16_t rsvd1;
-            uint32_t rep_complete:12;
-            uint32_t rsvd2:20;
-        };
-    } output = { 0 };
+    union hypercall_input input;
+    union hypercall_output output = {};
 
     ASSERT(is_viridian_domain(currd));
 
@@ -580,41 +622,25 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
     {
-        struct {
-            uint64_t address_space;
-            uint64_t flags;
-            uint64_t vcpu_mask;
-        } input_params;
+        int rc = hvcall_flush(&input, &output, input_params_gpa,
+                              output_params_gpa);
 
-        /* These hypercalls should never use the fast-call convention. */
-        status = HV_STATUS_INVALID_PARAMETER;
-        if ( input.fast )
+        switch ( rc )
+        {
+        case 0:
             break;
 
-        /* Get input parameters. */
-        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-                                      sizeof(input_params)) !=
-             HVMTRANS_okay )
-            break;
-
-        /*
-         * It is not clear from the spec. if we are supposed to
-         * include current virtual CPU in the set or not in this case,
-         * so err on the safe side.
-         */
-        if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-            input_params.vcpu_mask = ~0ul;
-
-        /*
-         * A false return means that another vcpu is currently trying
-         * a similar operation, so back off.
-         */
-        if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+        case -ERESTART:
             return HVM_HCALL_preempted;
 
-        output.rep_complete = input.rep_count;
+        default:
+            ASSERT_UNREACHABLE();
+            /* Fallthrough */
+        case -EINVAL:
+            status = HV_STATUS_INVALID_PARAMETER;
+            break;
+        }
 
-        status = HV_STATUS_SUCCESS;
         break;
     }
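
As background for reviewers unfamiliar with the Hyper-V hypercall ABI,
here is a minimal standalone sketch (not part of the patch) showing how
the hypercall_input bitfield layout above decodes a raw 64-bit hypercall
input value. The example register value is hypothetical, and the decoding
assumes the little-endian bitfield allocation of the GCC x86-64 ABI that
Xen is built with:

/* Standalone illustration of the hypercall_input bitfield layout. */
#include <stdint.h>
#include <stdio.h>

union hypercall_input {
    uint64_t raw;
    struct {
        uint16_t call_code;
        uint16_t fast:1;
        uint16_t rsvd1:15;
        uint16_t rep_count:12;
        uint16_t rsvd2:4;
        uint16_t rep_start:12;
        uint16_t rsvd3:4;
    };
};

int main(void)
{
    /*
     * Hypothetical guest register value: call code 0x0013
     * (HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST), slow (memory-based) call,
     * rep count of 4 starting at rep index 0.
     */
    union hypercall_input input = { .raw = 0x0000000400000013ull };

    printf("call_code=%#x fast=%u rep_count=%u rep_start=%u\n",
           input.call_code, input.fast, input.rep_count, input.rep_start);

    return 0;
}

Compiled with plain gcc this prints "call_code=0x13 fast=0 rep_count=4
rep_start=0", i.e. the fields consumed by hvcall_flush() above.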