From patchwork Thu Apr 28 09:33:26 2016
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 8967421
Message-Id: <5721F50602000078000E6ACB@prv-mh.provo.novell.com>
Date: Thu, 28 Apr 2016 03:33:26 -0600
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant, Wei Liu
In-Reply-To: <5721F23302000078000E6AA4@prv-mh.provo.novell.com>
References: <5721F23302000078000E6AA4@prv-mh.provo.novell.com>
Subject: [Xen-devel] [PATCH 2/2] x86/HVM: fix forwarding of internally cached requests (part 2)
List-Id: Xen developer discussion

Commit 96ae556569 ("x86/HVM: fix forwarding of internally cached requests")
wasn't quite complete: hvmemul_do_io() also needs to propagate up the
clipped count. (I really should have re-tested the forward port resulting
in the earlier change, instead of relying on the testing done on the older
version of Xen which the fix was first needed for.)

Signed-off-by: Jan Beulich
Reviewed-by: Paul Durrant
Reviewed-by: Andrew Cooper

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -137,7 +137,7 @@ static int hvmemul_do_io(
         if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) ||
              (p.addr != addr) ||
              (p.size != size) ||
-             (p.count != *reps) ||
+             (p.count > *reps) ||
              (p.dir != dir) ||
              (p.df != df) ||
              (p.data_is_ptr != data_is_addr) )
@@ -145,6 +145,8 @@ static int hvmemul_do_io(
             if ( data_is_addr )
                 return X86EMUL_UNHANDLEABLE;
+
+            *reps = p.count;
             goto finish_access;
         default:
             return X86EMUL_UNHANDLEABLE;
@@ -162,6 +164,13 @@ static int hvmemul_do_io(
     rc = hvm_io_intercept(&p);
+
+    /*
+     * p.count may have got reduced (see hvm_process_io_intercept()) - inform
+     * our callers and mirror this into latched state.
+     */
+    ASSERT(p.count <= *reps);
+    *reps = vio->io_req.count = p.count;
+
     switch ( rc )
     {
     case X86EMUL_OKAY: