From patchwork Fri Aug 9 05:48:51 2019
X-Patchwork-Submitter: Jason Wang <jasowang@redhat.com>
X-Patchwork-Id: 11085367
From: Jason Wang <jasowang@redhat.com>
To: mst@redhat.com, kvm@vger.kernel.org,
 virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, jgg@ziepe.ca, Jason Wang <jasowang@redhat.com>
Subject: [PATCH V5 9/9] vhost: do not return -EAGAIN for non blocking invalidation too early
Date: Fri, 9 Aug 2019 01:48:51 -0400
Message-Id: <20190809054851.20118-10-jasowang@redhat.com>
In-Reply-To: <20190809054851.20118-1-jasowang@redhat.com>
References: <20190809054851.20118-1-jasowang@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

Instead of returning -EAGAIN unconditionally, we'd better do that only
when we are sure the range overlaps with the metadata area.

Reported-by: Jason Gunthorpe <jgg@ziepe.ca>
Fixes: 7f466032dc9e ("vhost: access vq metadata through kernel virtual address")
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vhost/vhost.c | 32 +++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 13 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index d8863aaaf0f6..f98155f28f02 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -371,16 +371,19 @@ static void inline vhost_vq_access_map_end(struct vhost_virtqueue *vq)
 	spin_unlock(&vq->mmu_lock);
 }
 
-static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
-				      int index,
-				      unsigned long start,
-				      unsigned long end)
+static int vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
+				     int index,
+				     unsigned long start,
+				     unsigned long end,
+				     bool blockable)
 {
 	struct vhost_uaddr *uaddr = &vq->uaddrs[index];
 	struct vhost_map *map;
 
 	if (!vhost_map_range_overlap(uaddr, start, end))
-		return;
+		return 0;
+	else if (!blockable)
+		return -EAGAIN;
 
 	spin_lock(&vq->mmu_lock);
 	++vq->invalidate_count;
@@ -394,6 +397,8 @@ static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
 		vhost_set_map_dirty(vq, map, index);
 		vhost_map_unprefetch(map);
 	}
+
+	return 0;
 }
 
 static void vhost_invalidate_vq_end(struct vhost_virtqueue *vq,
@@ -414,18 +419,19 @@ static int vhost_invalidate_range_start(struct mmu_notifier *mn,
 {
 	struct vhost_dev *dev = container_of(mn, struct vhost_dev,
 					     mmu_notifier);
-	int i, j;
-
-	if (!mmu_notifier_range_blockable(range))
-		return -EAGAIN;
+	bool blockable = mmu_notifier_range_blockable(range);
+	int i, j, ret;
 
 	for (i = 0; i < dev->nvqs; i++) {
 		struct vhost_virtqueue *vq = dev->vqs[i];
 
-		for (j = 0; j < VHOST_NUM_ADDRS; j++)
-			vhost_invalidate_vq_start(vq, j,
-						  range->start,
-						  range->end);
+		for (j = 0; j < VHOST_NUM_ADDRS; j++) {
+			ret = vhost_invalidate_vq_start(vq, j,
+							range->start,
+							range->end, blockable);
+			if (ret)
+				return ret;
+		}
 	}
 
 	return 0;
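
The behavioural change is easy to miss in the diff above, so here is a small
userspace sketch of the decision the patch introduces: a non-blockable
invalidation now fails with -EAGAIN only when the invalidated range actually
overlaps the tracked metadata range. All names below (toy_range,
toy_invalidate_start, ...) are invented for illustration; this is not the
vhost code itself.

/* Toy userspace illustration (not kernel code): return -EAGAIN for a
 * non-blockable invalidation only when the range really overlaps the
 * tracked metadata range.  All names here are made up for the example. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_range {
	unsigned long start;	/* inclusive */
	unsigned long end;	/* exclusive */
};

static bool toy_range_overlap(const struct toy_range *r,
			      unsigned long start, unsigned long end)
{
	return r->start < end && start < r->end;
}

/* Mirrors the shape of the patched helper: 0 on success, -EAGAIN only
 * when the caller cannot sleep *and* there is real work to do. */
static int toy_invalidate_start(const struct toy_range *metadata,
				unsigned long start, unsigned long end,
				bool blockable)
{
	if (!toy_range_overlap(metadata, start, end))
		return 0;		/* no overlap: never fail */
	if (!blockable)
		return -EAGAIN;		/* overlap + cannot wait: retry later */
	/* A blockable caller would tear down the mapping here. */
	return 0;
}

int main(void)
{
	struct toy_range metadata = { .start = 0x1000, .end = 0x2000 };

	/* Non-overlapping, non-blockable: succeeds instead of -EAGAIN. */
	printf("%d\n", toy_invalidate_start(&metadata, 0x4000, 0x5000, false));
	/* Overlapping, non-blockable: only now do we return -EAGAIN. */
	printf("%d\n", toy_invalidate_start(&metadata, 0x1800, 0x1900, false));
	return 0;
}

The kernel version of course differs in locking and map teardown; the sketch
only captures the ordering of the overlap check versus the blockable check.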