From patchwork Mon May 21 19:35:03 2018
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 10416247
From: Stefano Stabellini
To: peter.maydell@linaro.org, stefanha@gmail.com
Cc: sstabellini@kernel.org, qemu-devel@nongnu.org, Paul Durrant,
    stefanha@redhat.com, anthony.perard@citrix.com,
    xen-devel@lists.xenproject.org
Date: Mon, 21 May 2018 12:35:03 -0700
Message-Id: <1526931304-7289-14-git-send-email-sstabellini@kernel.org>
Subject: [Qemu-devel] [PULL 14/15] xen_disk: use a single entry iovec
From: Paul Durrant

Since xen_disk now always copies data to and from a guest, there is no
need to maintain a vector entry corresponding to every page of a
request. This means there is less per-request state to maintain, so the
ioreq structure can shrink significantly.

Signed-off-by: Paul Durrant
Acked-by: Anthony Perard
Signed-off-by: Stefano Stabellini
---
 hw/block/xen_disk.c | 76 +++++++++++++++--------------------------------------
 1 file changed, 21 insertions(+), 55 deletions(-)
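(Not part of the patch: below is a minimal standalone sketch of the
single-buffer pattern the change adopts, in case the diff is easier to
follow with a model in hand. All names here, struct seg, SECTOR and
PAGE_SIZE, are simplified stand-ins rather than the QEMU/Xen
definitions; the real code works with blkif request segments,
blkdev->file_blk, XC_PAGE_SIZE and qemu_memalign().)

/*
 * Standalone sketch (hypothetical types, not QEMU code) of the pattern
 * this patch adopts: instead of one iovec entry per granted page, sum
 * the request size from its segments, make a single page-aligned
 * allocation, and walk a cursor through that one buffer when setting
 * up the per-segment grant copies.
 */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096   /* stand-in for XC_PAGE_SIZE */
#define SECTOR    512    /* stand-in for blkdev->file_blk */

struct seg {             /* stand-in for blkif_request_segment */
    unsigned first_sect;
    unsigned last_sect;
};

int main(void)
{
    struct seg segs[] = { { 0, 7 }, { 0, 3 }, { 2, 5 } };
    int nsegs = sizeof(segs) / sizeof(segs[0]);
    size_t size = 0;
    void *buf = NULL;
    char *virt;
    int i;

    /* ioreq_parse() equivalent: accumulate the total request size */
    for (i = 0; i < nsegs; i++) {
        size += (segs[i].last_sect - segs[i].first_sect + 1) * SECTOR;
    }

    /* one aligned allocation replaces the per-page array
     * (the patch itself uses qemu_memalign()) */
    if (posix_memalign(&buf, PAGE_SIZE, size) != 0) {
        return 1;
    }

    /* ioreq_grant_copy() equivalent: each segment copies to/from the
     * next slice of the single buffer, like the patch's "virt" cursor */
    virt = buf;
    for (i = 0; i < nsegs; i++) {
        size_t len = (segs[i].last_sect - segs[i].first_sect + 1) * SECTOR;
        printf("seg %d: %zu bytes at buffer offset %td\n",
               i, len, virt - (char *)buf);
        virt += len;
    }

    free(buf);
    return 0;
}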
diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 28be8b6..28651c5 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -46,13 +46,10 @@ struct ioreq {
     /* parsed request */
     off_t start;
     QEMUIOVector v;
+    void *buf;
+    size_t size;
     int presync;
 
-    /* grant mapping */
-    uint32_t refs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-    void *page[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-    void *pages;
-
     /* aio status */
     int aio_inflight;
     int aio_errors;
@@ -110,12 +107,10 @@ static void ioreq_reset(struct ioreq *ioreq)
     memset(&ioreq->req, 0, sizeof(ioreq->req));
     ioreq->status = 0;
     ioreq->start = 0;
+    ioreq->buf = NULL;
+    ioreq->size = 0;
     ioreq->presync = 0;
 
-    memset(ioreq->refs, 0, sizeof(ioreq->refs));
-    memset(ioreq->page, 0, sizeof(ioreq->page));
-    ioreq->pages = NULL;
-
     ioreq->aio_inflight = 0;
     ioreq->aio_errors = 0;
@@ -138,7 +133,7 @@ static struct ioreq *ioreq_start(struct XenBlkDev *blkdev)
         ioreq = g_malloc0(sizeof(*ioreq));
         ioreq->blkdev = blkdev;
         blkdev->requests_total++;
-        qemu_iovec_init(&ioreq->v, BLKIF_MAX_SEGMENTS_PER_REQUEST);
+        qemu_iovec_init(&ioreq->v, 1);
     } else {
         /* get one from freelist */
         ioreq = QLIST_FIRST(&blkdev->freelist);
@@ -183,7 +178,6 @@ static void ioreq_release(struct ioreq *ioreq, bool finish)
 static int ioreq_parse(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
-    uintptr_t mem;
     size_t len;
     int i;
@@ -230,13 +224,10 @@ static int ioreq_parse(struct ioreq *ioreq)
             goto err;
         }
 
-        ioreq->refs[i] = ioreq->req.seg[i].gref;
-
-        mem = ioreq->req.seg[i].first_sect * blkdev->file_blk;
         len = (ioreq->req.seg[i].last_sect -
                ioreq->req.seg[i].first_sect + 1) * blkdev->file_blk;
-        qemu_iovec_add(&ioreq->v, (void*)mem, len);
+        ioreq->size += len;
     }
-    if (ioreq->start + ioreq->v.size > blkdev->file_size) {
+    if (ioreq->start + ioreq->size > blkdev->file_size) {
         xen_pv_printf(&blkdev->xendev, 0, "error: access beyond end of file\n");
         goto err;
     }
@@ -247,35 +238,6 @@ err:
     return -1;
 }
 
-static void ioreq_free_copy_buffers(struct ioreq *ioreq)
-{
-    int i;
-
-    for (i = 0; i < ioreq->v.niov; i++) {
-        ioreq->page[i] = NULL;
-    }
-
-    qemu_vfree(ioreq->pages);
-}
-
-static int ioreq_init_copy_buffers(struct ioreq *ioreq)
-{
-    int i;
-
-    if (ioreq->v.niov == 0) {
-        return 0;
-    }
-
-    ioreq->pages = qemu_memalign(XC_PAGE_SIZE, ioreq->v.niov * XC_PAGE_SIZE);
-
-    for (i = 0; i < ioreq->v.niov; i++) {
-        ioreq->page[i] = ioreq->pages + i * XC_PAGE_SIZE;
-        ioreq->v.iov[i].iov_base = ioreq->page[i];
-    }
-
-    return 0;
-}
-
 static int ioreq_grant_copy(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
@@ -284,25 +246,27 @@ static int ioreq_grant_copy(struct ioreq *ioreq)
     int i, count, rc;
     int64_t file_blk = ioreq->blkdev->file_blk;
     bool to_domain = (ioreq->req.operation == BLKIF_OP_READ);
+    void *virt = ioreq->buf;
 
-    if (ioreq->v.niov == 0) {
+    if (ioreq->req.nr_segments == 0) {
         return 0;
     }
 
-    count = ioreq->v.niov;
+    count = ioreq->req.nr_segments;
 
     for (i = 0; i < count; i++) {
         if (to_domain) {
-            segs[i].dest.foreign.ref = ioreq->refs[i];
+            segs[i].dest.foreign.ref = ioreq->req.seg[i].gref;
             segs[i].dest.foreign.offset = ioreq->req.seg[i].first_sect *
                 file_blk;
-            segs[i].source.virt = ioreq->v.iov[i].iov_base;
+            segs[i].source.virt = virt;
         } else {
-            segs[i].source.foreign.ref = ioreq->refs[i];
+            segs[i].source.foreign.ref = ioreq->req.seg[i].gref;
            segs[i].source.foreign.offset = ioreq->req.seg[i].first_sect *
                file_blk;
-            segs[i].dest.virt = ioreq->v.iov[i].iov_base;
+            segs[i].dest.virt = virt;
         }
         segs[i].len = (ioreq->req.seg[i].last_sect -
                        ioreq->req.seg[i].first_sect + 1) * file_blk;
+        virt += segs[i].len;
     }
 
     rc = xen_be_copy_grant_refs(xendev, to_domain, segs, count);
@@ -348,14 +312,14 @@ static void qemu_aio_complete(void *opaque, int ret)
         if (ret == 0) {
             ioreq_grant_copy(ioreq);
         }
-        ioreq_free_copy_buffers(ioreq);
+        qemu_vfree(ioreq->buf);
         break;
     case BLKIF_OP_WRITE:
     case BLKIF_OP_FLUSH_DISKCACHE:
         if (!ioreq->req.nr_segments) {
             break;
         }
-        ioreq_free_copy_buffers(ioreq);
+        qemu_vfree(ioreq->buf);
         break;
     default:
         break;
@@ -423,12 +387,12 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
 
-    ioreq_init_copy_buffers(ioreq);
+    ioreq->buf = qemu_memalign(XC_PAGE_SIZE, ioreq->size);
     if (ioreq->req.nr_segments &&
         (ioreq->req.operation == BLKIF_OP_WRITE ||
          ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE) &&
         ioreq_grant_copy(ioreq)) {
-        ioreq_free_copy_buffers(ioreq);
+        qemu_vfree(ioreq->buf);
         goto err;
     }
@@ -440,6 +404,7 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
 
     switch (ioreq->req.operation) {
     case BLKIF_OP_READ:
+        qemu_iovec_add(&ioreq->v, ioreq->buf, ioreq->size);
         block_acct_start(blk_get_stats(blkdev->blk), &ioreq->acct,
                          ioreq->v.size, BLOCK_ACCT_READ);
         ioreq->aio_inflight++;
@@ -452,6 +417,7 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
             break;
         }
 
+        qemu_iovec_add(&ioreq->v, ioreq->buf, ioreq->size);
         block_acct_start(blk_get_stats(blkdev->blk), &ioreq->acct,
                          ioreq->v.size,
                          ioreq->req.operation == BLKIF_OP_WRITE ?