From patchwork Fri Jun 5 07:50:06 2020
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11589187
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14] x86/rtc: provide mediated access to RTC for PVH dom0
Date: Fri, 5 Jun 2020 09:50:06 +0200
Message-ID: <20200605075006.51238-1-roger.pau@citrix.com>
Cc: Andrew Cooper, Roger Pau Monne, Wei Liu, Jan Beulich, paul@xen.org

At some point (maybe PVHv1?) mediated access to the RTC was provided for
PVH dom0 using the PV code paths (guest_io_{write/read}). That code has
since been made PV-specific and unhooked from the current PVH IO path.

This patch provides such mediated access to the RTC for PVH dom0, just
like it is provided for a classic PV dom0. Instead of re-using the PV
paths, implement the handler together with the vRTC code for HVM, so that
calling rtc_init will set up the appropriate handlers for all HVM-based
guests.

Without this, a Linux PVH dom0 will read garbage when trying to access the
RTC, and one vCPU will be constantly looping in rtc_timer_do_work.
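For reference, the access being mediated is the legacy CMOS index/data
protocol: the OS writes a register index to port 0x70 (RTC_PORT(0)) and
then reads or writes the value through port 0x71 (RTC_PORT(1)). Below is a
minimal user-space sketch of a read from the dom0 side, assuming an x86
Linux environment with <sys/io.h> and root privileges; it is illustrative
only and not part of this patch.

    /* Illustrative sketch only -- not part of the patch. */
    #include <stdio.h>
    #include <sys/io.h>

    #define CMOS_INDEX 0x70    /* RTC_PORT(0) in the Xen code below */
    #define CMOS_DATA  0x71    /* RTC_PORT(1) */

    int main(void)
    {
        unsigned char idx = 0;                /* RTC seconds register */

        /* Gain access to the two CMOS ports (requires root). */
        if ( ioperm(CMOS_INDEX, 2, 1) )
        {
            perror("ioperm");
            return 1;
        }

        outb(idx & 0x7f, CMOS_INDEX);         /* keep bit 7 (NMI disable) clear */
        printf("RTC seconds register (raw, possibly BCD): %#x\n",
               (unsigned int)inb(CMOS_DATA));

        return 0;
    }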
Note that this issue doesn't happen on domUs because the ACPI NO_CMOS_RTC
flag is set in the FADT, which prevents the OS from accessing the RTC.

Signed-off-by: Roger Pau Monné
---
for-4.14 reasoning: the fix is completely isolated to PVH dom0, and as such
the risk of causing issues to other guest types is very low. Without this
fix, one vCPU of a Linux dom0 will be constantly looping over
rtc_timer_do_work with 100% CPU usage, at least when using Linux 4.19 or
newer.
---
 xen/arch/x86/hvm/rtc.c | 69 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index 5bbbdc0e0f..5d637cf018 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -808,10 +808,79 @@ void rtc_reset(struct domain *d)
     s->pt.source = PTSRC_isa;
 }
 
+/* RTC mediator for HVM hardware domain. */
+static unsigned int hw_read(unsigned int port)
+{
+    const struct domain *currd = current->domain;
+    unsigned long flags;
+    unsigned int data = 0;
+
+    switch ( port )
+    {
+    case RTC_PORT(0):
+        data = currd->arch.cmos_idx;
+        break;
+
+    case RTC_PORT(1):
+        spin_lock_irqsave(&rtc_lock, flags);
+        outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
+        data = inb(RTC_PORT(1));
+        spin_unlock_irqrestore(&rtc_lock, flags);
+        break;
+    }
+
+    return data;
+}
+
+static void hw_write(unsigned int port, unsigned int data)
+{
+    struct domain *currd = current->domain;
+    unsigned long flags;
+
+    switch ( port )
+    {
+    case RTC_PORT(0):
+        currd->arch.cmos_idx = data;
+        break;
+
+    case RTC_PORT(1):
+        spin_lock_irqsave(&rtc_lock, flags);
+        outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
+        outb(data, RTC_PORT(1));
+        spin_unlock_irqrestore(&rtc_lock, flags);
+        break;
+    }
+}
+
+static int hw_rtc_io(int dir, unsigned int port, unsigned int size,
+                     uint32_t *val)
+{
+    if ( size != 1 )
+    {
+        gdprintk(XENLOG_WARNING, "bad RTC access size (%u)\n", size);
+        *val = ~0;
+        return X86EMUL_OKAY;
+    }
+
+    if ( dir == IOREQ_WRITE )
+        hw_write(port, *val);
+    else
+        *val = hw_read(port);
+
+    return X86EMUL_OKAY;
+}
+
 void rtc_init(struct domain *d)
 {
     RTCState *s = domain_vrtc(d);
 
+    if ( is_hardware_domain(d) )
+    {
+        /* Hardware domain gets mediated access to the physical RTC. */
+        register_portio_handler(d, RTC_PORT(0), 2, hw_rtc_io);
+        return;
+    }
+
     if ( !has_vrtc(d) )
         return;
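A note on the hardware-domain handler above: writes to RTC_PORT(0) are only
latched in currd->arch.cmos_idx, and the physical index port is
re-programmed under rtc_lock on every data-port access, so the index/data
pair stays consistent with respect to Xen's own uses of the RTC. The index
is masked with 0x7f before being written out, presumably so the hardware
domain cannot flip the NMI-disable bit on the physical port, and only
single-byte accesses are forwarded; anything else logs a warning and reads
back as ~0.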