From patchwork Fri Jan 29 01:48:43 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054961
List-Id: Xen developer discussion <xen-devel@lists.xenproject.org>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall,
    Volodymyr Babchuk, Julien Grall
Subject: [PATCH V6 15/24] xen/arm: Call vcpu_ioreq_handle_completion() in check_for_vcpu_work()
Date: Fri, 29 Jan 2021 03:48:43 +0200
Message-Id: <1611884932-1851-16-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch adds the remaining bits needed for IOREQ support on Arm.
Besides just calling vcpu_ioreq_handle_completion(), we need to handle
its return value to make sure that all vCPU work is done before we
return to the guest (vcpu_ioreq_handle_completion() may return false
if there is vCPU work to do or the IOREQ state is invalid). For that
reason we use an unbounded loop in leave_hypervisor_to_guest().

The worst that can happen here is that the vCPU never runs again (the
I/O never completes). But, in the Xen case, if the I/O never completes
then it most likely means that something went horribly wrong with the
Device Emulator, and it is most likely not safe to continue. So letting
the vCPU spin forever if the I/O never completes is safer than letting
it continue with the guest in an unclear state, and is the best we can
do for now.

Please note that with this loop we will not spin forever on a pCPU,
preventing any other vCPUs from being scheduled: on every iteration we
call check_for_pcpu_work(), which processes pending softirqs. In case
of failure, the guest will crash and the vCPU will be unscheduled. In
the normal case, if rescheduling is necessary, the vCPU will be
rescheduled to give way to another vCPU.
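For readers following the description above, here is a minimal,
standalone C mock-up of the completion-wait pattern it describes. This
is not Xen code: the *_stub helpers are hypothetical stand-ins for the
real hypervisor functions so that the control flow can be compiled and
run in isolation; the actual change is in the diff below.

/*
 * Standalone sketch of the completion-wait pattern described above.
 * NOT Xen code: the stubs are hypothetical stand-ins for
 * vcpu_ioreq_handle_completion() and softirq processing.
 */
#include <stdbool.h>
#include <stdio.h>

static int pending_io = 3; /* pretend the I/O needs three retries */

/* Stand-in for vcpu_ioreq_handle_completion(): false while I/O pending. */
static bool vcpu_ioreq_handle_completion_stub(void)
{
    return --pending_io <= 0;
}

/* Mirrors the patched check_for_vcpu_work(): true means "more work, retry". */
static bool check_for_vcpu_work(void)
{
    return !vcpu_ioreq_handle_completion_stub();
}

/* Stand-in for check_for_pcpu_work(): would process pending softirqs. */
static void check_for_pcpu_work(void)
{
    printf("processing pCPU work (softirqs)\n");
}

int main(void)
{
    /*
     * The pattern from leave_hypervisor_to_guest(): retry the vCPU work,
     * servicing pCPU work between attempts, so an outstanding I/O never
     * monopolises the pCPU.
     */
    while ( check_for_vcpu_work() )
        check_for_pcpu_work();
    check_for_pcpu_work();

    printf("all vCPU work done, can return to the guest\n");
    return 0;
}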
Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Stefano Stabellini
Acked-by: Julien Grall
CC: Julien Grall
[On Arm only] Tested-by: Wei Chen

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch, changes were derived from (+ new explanation):
     arm/ioreq: Introduce arch specific bits for IOREQ/DM features

Changes V2 -> V3:
   - update patch description

Changes V3 -> V4:
   - update patch description and comment in code

Changes V4 -> V5:
   - add Stefano's R-b
   - update patch subject/description and comment in code, was
     "xen/arm: Stick around in leave_hypervisor_to_guest until I/O
     has completed"
   - change loop logic a bit
   - squash with changes to check_for_vcpu_work() from patch #14

Changes V5 -> V6:
   - add Julien's A-b
---
 xen/arch/arm/traps.c | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 8848764..cb37a45 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -21,6 +21,7 @@
 #include <xen/hypercall.h>
 #include <xen/init.h>
 #include <xen/iocap.h>
+#include <xen/ioreq.h>
 #include <xen/irq.h>
 #include <xen/lib.h>
 #include <xen/mem_access.h>
@@ -2269,12 +2270,23 @@ static void check_for_pcpu_work(void)
  * Process pending work for the vCPU. Any call should be fast or
  * implement preemption.
  */
-static void check_for_vcpu_work(void)
+static bool check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
+#ifdef CONFIG_IOREQ_SERVER
+    bool handled;
+
+    local_irq_enable();
+    handled = vcpu_ioreq_handle_completion(v);
+    local_irq_disable();
+
+    if ( !handled )
+        return true;
+#endif
+
     if ( likely(!v->arch.need_flush_to_ram) )
-        return;
+        return false;
 
     /*
      * Give a chance for the pCPU to process work before handling the vCPU
@@ -2285,6 +2297,8 @@ static void check_for_vcpu_work(void)
     local_irq_enable();
     p2m_flush_vm(v);
     local_irq_disable();
+
+    return false;
 }
 
 /*
@@ -2297,7 +2311,13 @@ void leave_hypervisor_to_guest(void)
 {
     local_irq_disable();
 
-    check_for_vcpu_work();
+    /*
+     * check_for_vcpu_work() may return true if there is more work to do
+     * before the vCPU can safely resume. This gives us an opportunity to
+     * deschedule the vCPU if needed.
+     */
+    while ( check_for_vcpu_work() )
+        check_for_pcpu_work();
     check_for_pcpu_work();
 
     vgic_sync_to_lrs();
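
To summarise the contract the diff introduces (a paraphrase derived
from the code above, not text from the patch itself):

/*
 * Paraphrased contract (derived from the diff above):
 *
 * check_for_vcpu_work():
 *   true  -> more vCPU work is pending (e.g. the IOREQ completion was
 *            not handled); the caller must not enter the guest yet;
 *   false -> all vCPU work is done, including any pending p2m flush
 *            via p2m_flush_vm().
 *
 * leave_hypervisor_to_guest() therefore loops, calling
 * check_for_pcpu_work() on each iteration so pending softirqs run
 * (allowing the vCPU to be descheduled, or the guest to be crashed on
 * failure) instead of busy-waiting.
 */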