From patchwork Tue Aug 16 06:55:56 2016
X-Patchwork-Submitter: Paulina Szubarczyk <paulinaszubarczyk@gmail.com>
X-Patchwork-Id: 9282717
From: Paulina Szubarczyk <paulinaszubarczyk@gmail.com>
To: xen-devel@lists.xenproject.org, roger.pau@citrix.com
Cc: sstabellini@kernel.org, wei.liu2@citrix.com,
    Paulina Szubarczyk <paulinaszubarczyk@gmail.com>, ian.jackson@eu.citrix.com,
    qemu-devel@nongnu.org, david.vrabel@citrix.com, anthony.perard@citrix.com
Date: Tue, 16 Aug 2016 08:55:56 +0200
Message-Id: <1471330556-782-3-git-send-email-paulinaszubarczyk@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1471330556-782-1-git-send-email-paulinaszubarczyk@gmail.com>
References: <1471330556-782-1-git-send-email-paulinaszubarczyk@gmail.com>
Subject: [Xen-devel] [PATCH v5 2/2] qdisk - hw/block/xen_disk: grant copy implementation

Copy the data involved in a request between local buffers and the guest's
grant references, instead of mapping the granted pages. Before the grant
copy operation the local buffers must be allocated; this is done by calling
ioreq_init_copy_buffers. For a 'read' operation the qemu device first reads
into the local buffers, and on completion the data is grant-copied to the
guest and the buffers are freed. For a 'write' operation the grant copy is
performed before the qemu device invokes the write. A new field
'feature_grant_copy' is added to record whether the grant copy operation is
supported.

Signed-off-by: Paulina Szubarczyk <paulinaszubarczyk@gmail.com>
---
Changes since v4:
 - In the configure file, check only whether xengnttab_grant_copy is
   implemented to detect Xen 4.8.0.
 - Remove the r variable and the initialization of count to 0 in ioreq_copy.
 - Surround free_buffers, ioreq_init_copy_buffers and ioreq_copy with
   "#if CONFIG_XEN_CTRL_INTERFACE_VERSION >= 480"; abort() in the else path,
   because the functions should not be called in that case.
 - Replace the definition of struct xengnttab_grant_copy_segment and the
   typedef to it with 'typedef void* xengnttab_grant_copy_segment_t'.
 - Move the new code in xen_common.h to the end of the file.
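
Note for reviewers: the snippet below is only an illustrative, minimal
sketch of how the Xen >= 4.8 grant copy interface used by this patch is
driven, assuming the Xen 4.8 xengnttab.h header. The helper name
copy_page_to_guest is invented for the example and does not appear in the
patch; error handling is reduced to the essentials.

#include <stdint.h>
#include <string.h>
#include <xengnttab.h>

/* Illustration only: copy one buffer of local data into a page granted
 * by the guest (the equivalent of the BLKIF_OP_READ completion path). */
static int copy_page_to_guest(xengnttab_handle *xgt, uint32_t domid,
                              uint32_t gref, const void *buf, uint16_t len)
{
    xengnttab_grant_copy_segment_t seg;

    memset(&seg, 0, sizeof(seg));
    seg.flags = GNTCOPY_dest_gref;        /* destination is a grant reference */
    seg.dest.foreign.ref = gref;
    seg.dest.foreign.domid = domid;
    seg.dest.foreign.offset = 0;
    seg.source.virt = (void *)buf;        /* source is local memory */
    seg.len = len;

    if (xengnttab_grant_copy(xgt, 1, &seg)) {
        return -1;                        /* the copy operation itself failed */
    }
    /* each segment also carries its own status */
    return seg.status == GNTST_okay ? 0 : -1;
}

ioreq_copy in the patch builds one such segment per request segment and
swaps source and destination depending on whether the request is a read
(GNTCOPY_dest_gref) or a write/flush (GNTCOPY_source_gref).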
---
 configure                   |  37 +++++++++++
 hw/block/xen_disk.c         | 157 ++++++++++++++++++++++++++++++++++++++++++--
 include/hw/xen/xen_common.h |  14 ++++
 3 files changed, 203 insertions(+), 5 deletions(-)

diff --git a/configure b/configure
index 8d84919..68b3374 100755
--- a/configure
+++ b/configure
@@ -1956,6 +1956,43 @@ EOF
 /*
  * If we have stable libs the we don't want the libxc compat
  * layers, regardless of what CFLAGS we may have been given.
+ *
+ * Also, check if xengnttab_grant_copy_segment_t is defined and
+ * grant copy operation is implemented.
+ */
+#undef XC_WANT_COMPAT_EVTCHN_API
+#undef XC_WANT_COMPAT_GNTTAB_API
+#undef XC_WANT_COMPAT_MAP_FOREIGN_API
+#include <xenctrl.h>
+#include <xenstore.h>
+#include <xenevtchn.h>
+#include <xengnttab.h>
+#include <xenforeignmemory.h>
+#include <stdint.h>
+#include <xen/hvm/hvm_info_table.h>
+#if !defined(HVM_MAX_VCPUS)
+# error HVM_MAX_VCPUS not defined
+#endif
+int main(void) {
+  xengnttab_handle *xg;
+  xengnttab_grant_copy_segment_t* seg = NULL;
+
+  xg = xengnttab_open(0, 0);
+  xengnttab_map_grant_ref(xg, 0, 0, 0);
+  xengnttab_grant_copy(xg, 0, seg);
+
+  return 0;
+}
+EOF
+      compile_prog "" "$xen_libs $xen_stable_libs"
+    then
+    xen_ctrl_version=480
+    xen=yes
+  elif
+      cat > $TMPC <<EOF &&
[...]

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
[...]
+#if CONFIG_XEN_CTRL_INTERFACE_VERSION >= 480
+
+static void free_buffers(struct ioreq *ioreq)
+{
+    int i;
+
+    for (i = 0; i < ioreq->v.niov; i++) {
+        ioreq->page[i] = NULL;
+    }
+
+    qemu_vfree(ioreq->pages);
+}
+
+static int ioreq_init_copy_buffers(struct ioreq *ioreq)
+{
+    int i;
+
+    if (ioreq->v.niov == 0) {
+        return 0;
+    }
+
+    ioreq->pages = qemu_memalign(XC_PAGE_SIZE, ioreq->v.niov * XC_PAGE_SIZE);
+
+    for (i = 0; i < ioreq->v.niov; i++) {
+        ioreq->page[i] = ioreq->pages + i * XC_PAGE_SIZE;
+        ioreq->v.iov[i].iov_base = ioreq->page[i];
+    }
+
+    return 0;
+}
+
+static int ioreq_copy(struct ioreq *ioreq)
+{
+    xengnttab_handle *gnt = ioreq->blkdev->xendev.gnttabdev;
+    xengnttab_grant_copy_segment_t segs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+    int i, count, rc;
+    int64_t file_blk = ioreq->blkdev->file_blk;
+
+    if (ioreq->v.niov == 0) {
+        return 0;
+    }
+
+    count = ioreq->v.niov;
+
+    for (i = 0; i < count; i++) {
+
+        if (ioreq->req.operation == BLKIF_OP_READ) {
+            segs[i].flags = GNTCOPY_dest_gref;
+            segs[i].dest.foreign.ref = ioreq->refs[i];
+            segs[i].dest.foreign.domid = ioreq->domids[i];
+            segs[i].dest.foreign.offset = ioreq->req.seg[i].first_sect * file_blk;
+            segs[i].source.virt = ioreq->v.iov[i].iov_base;
+        } else {
+            segs[i].flags = GNTCOPY_source_gref;
+            segs[i].source.foreign.ref = ioreq->refs[i];
+            segs[i].source.foreign.domid = ioreq->domids[i];
+            segs[i].source.foreign.offset = ioreq->req.seg[i].first_sect * file_blk;
+            segs[i].dest.virt = ioreq->v.iov[i].iov_base;
+        }
+        segs[i].len = (ioreq->req.seg[i].last_sect
+                       - ioreq->req.seg[i].first_sect + 1) * file_blk;
+
+    }
+
+    rc = xengnttab_grant_copy(gnt, count, segs);
+
+    if (rc) {
+        xen_be_printf(&ioreq->blkdev->xendev, 0,
+                      "failed to copy data %d\n", rc);
+        ioreq->aio_errors++;
+        return -1;
+    }
+
+    for (i = 0; i < count; i++) {
+        if (segs[i].status != GNTST_okay) {
+            xen_be_printf(&ioreq->blkdev->xendev, 3,
+                          "failed to copy data %d for gref %d, domid %d\n",
+                          segs[i].status, ioreq->refs[i], ioreq->domids[i]);
+            ioreq->aio_errors++;
+            rc = -1;
+        }
+    }
+
+    return rc;
+}
+#else
+static void free_buffers(struct ioreq *ioreq)
+{
+    abort();
+}
+
+static int ioreq_init_copy_buffers(struct ioreq *ioreq)
+{
+    abort();
+}
+
+static int ioreq_copy(struct ioreq *ioreq)
+{
+    abort();
+}
+#endif
+
 static int ioreq_runio_qemu_aio(struct ioreq *ioreq);
 
 static void qemu_aio_complete(void *opaque, int ret)
@@ -511,8 +616,31 @@ static void qemu_aio_complete(void *opaque, int ret)
         return;
     }
 
+    if (ioreq->blkdev->feature_grant_copy) {
+        switch (ioreq->req.operation) {
+        case BLKIF_OP_READ:
+            /* in case of failure ioreq->aio_errors is increased */
+            if (ret == 0) {
+                ioreq_copy(ioreq);
+            }
+            free_buffers(ioreq);
+            break;
+        case BLKIF_OP_WRITE:
+        case BLKIF_OP_FLUSH_DISKCACHE:
+            if (!ioreq->req.nr_segments) {
+                break;
+            }
+            free_buffers(ioreq);
+            break;
+        default:
+            break;
+        }
+    }
+
     ioreq->status = ioreq->aio_errors ? BLKIF_RSP_ERROR : BLKIF_RSP_OKAY;
-    ioreq_unmap(ioreq);
+    if (!ioreq->blkdev->feature_grant_copy) {
+        ioreq_unmap(ioreq);
+    }
     ioreq_finish(ioreq);
     switch (ioreq->req.operation) {
     case BLKIF_OP_WRITE:
@@ -538,8 +666,20 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
 
-    if (ioreq->req.nr_segments && ioreq_map(ioreq) == -1) {
-        goto err_no_map;
+    if (ioreq->blkdev->feature_grant_copy) {
+        ioreq_init_copy_buffers(ioreq);
+        if (ioreq->req.nr_segments && (ioreq->req.operation == BLKIF_OP_WRITE ||
+            ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE)) {
+            if (ioreq_copy(ioreq)) {
+                free_buffers(ioreq);
+                goto err;
+            }
+        }
+
+    } else {
+        if (ioreq->req.nr_segments && ioreq_map(ioreq)) {
+            goto err;
+        }
     }
 
     ioreq->aio_inflight++;
@@ -582,6 +722,9 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
     }
     default:
         /* unknown operation (shouldn't happen -- parse catches this) */
+        if (!ioreq->blkdev->feature_grant_copy) {
+            ioreq_unmap(ioreq);
+        }
         goto err;
     }
 
@@ -590,8 +733,6 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
     return 0;
 
 err:
-    ioreq_unmap(ioreq);
-err_no_map:
     ioreq_finish(ioreq);
     ioreq->status = BLKIF_RSP_ERROR;
     return -1;
@@ -1032,6 +1173,12 @@ static int blk_connect(struct XenDevice *xendev)
 
     xen_be_bind_evtchn(&blkdev->xendev);
 
+    blkdev->feature_grant_copy =
+                (xengnttab_grant_copy(blkdev->xendev.gnttabdev, 0, NULL) == 0);
+
+    xen_be_printf(&blkdev->xendev, 3, "grant copy operation %s\n",
+                  blkdev->feature_grant_copy ? "enabled" : "disabled");
+
     xen_be_printf(&blkdev->xendev, 1, "ok: proto %s, ring-ref %d, "
                   "remote port %d, local port %d\n",
                   blkdev->xendev.protocol, blkdev->ring_ref,
diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
index 640c31e..6b0776f 100644
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -369,4 +369,18 @@ static inline int xen_domain_create(xc_interface *xc, uint32_t ssidref,
 #endif
 #endif
 
+/* Xen before 4.8 */
+
+#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 480
+
+
+typedef void *xengnttab_grant_copy_segment_t;
+
+static inline int xengnttab_grant_copy(xengnttab_handle *xgt, uint32_t count,
+                                       xengnttab_grant_copy_segment_t *segs)
+{
+    return -ENOSYS;
+}
+#endif
+
 #endif /* QEMU_HW_XEN_COMMON_H */