From patchwork Tue Aug 2 14:06:30 2016
X-Patchwork-Submitter: Paulina Szubarczyk
X-Patchwork-Id: 9257563
From: Paulina Szubarczyk <paulinaszubarczyk@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 2 Aug 2016 16:06:30 +0200
Message-Id: <1470146790-6168-3-git-send-email-paulinaszubarczyk@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1470146790-6168-1-git-send-email-paulinaszubarczyk@gmail.com>
References: <1470146790-6168-1-git-send-email-paulinaszubarczyk@gmail.com>
Cc: sstabellini@kernel.org, wei.liu2@citrix.com,
 Paulina Szubarczyk <paulinaszubarczyk@gmail.com>, ian.jackson@eu.citrix.com,
 qemu-devel@nongnu.org, david.vrabel@citrix.com, anthony.perard@citrix.com,
 roger.pau@citrix.com
Subject: [Xen-devel] [PATCH v4 2/2] qdisk - hw/block/xen_disk: grant copy implementation

Copy the data operated on during a request from/to local buffers to/from
the grant references.

Before the grant copy operation, local buffers must be allocated; this is
done by calling ioreq_init_copy_buffers. For the 'read' operation, the
qemu device first performs the read into the local buffers; on completion,
the grant copy is invoked and the buffers are freed. For the 'write'
operation, the grant copy is performed before the qemu device invokes the
write.

A new value 'feature_grant_copy' is added to recognize when the grant copy
operation is supported by a guest.

Signed-off-by: Paulina Szubarczyk <paulinaszubarczyk@gmail.com>
---
Changes since v3:
- qemu_memalign/qemu_vfree are used instead of the functions allocating
  memory from xc.
- removed the get_buffer function; instead there is a direct call to
  qemu_memalign.
- moved ioreq_copy for the write operation to ioreq_runio_qemu_aio.
- added struct xengnttab_grant_copy_segment_t and a stub in xen_common.h
  for versions of Xen earlier than 480.
- added a check for version 480 to configure. The test repeats all the
  operations that are required for version < 480 and checks whether
  xengnttab_grant_copy() is implemented.
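To illustrate the copy flow described in the commit message, here is a
minimal standalone sketch (not part of the patch) of the single-segment
pattern that ioreq_copy() applies per iovec entry for a write request: the
guest page behind a grant reference is pulled into a local buffer with
xengnttab_grant_copy(). The helper name and the gref/domid arguments are
placeholders; it assumes the libxengnttab headers from Xen 4.8 or later.

/* Illustrative only: copy one granted guest page into a local buffer.
 * For BLKIF_OP_WRITE the foreign gref is the source and local memory is
 * the destination; a read request reverses the roles (GNTCOPY_dest_gref,
 * dest.foreign/source.virt). */
#include <stdint.h>
#include <xengnttab.h>

static int copy_page_from_guest(xengnttab_handle *xgt, uint16_t domid,
                                uint32_t gref, void *local_buf, uint16_t len)
{
    xengnttab_grant_copy_segment_t seg;

    seg.flags = GNTCOPY_source_gref;      /* source is the foreign gref */
    seg.source.foreign.ref = gref;
    seg.source.foreign.domid = domid;
    seg.source.foreign.offset = 0;
    seg.dest.virt = local_buf;            /* destination is local memory */
    seg.len = len;

    if (xengnttab_grant_copy(xgt, 1, &seg)) {
        return -1;                        /* the copy request itself failed */
    }
    return seg.status == GNTST_okay ? 0 : -1;  /* per-segment status */
}

A caller obtains the xengnttab_handle from xengnttab_open(), which the
backend already holds as xendev.gnttabdev.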
* I did not change the way of testing whether the grant_copy operation is
  implemented. As far as I understand, if the code from gnttab_unimp.c is
  used then the gnttab device is unavailable and the handle to gntdev would
  be invalid. But if the handle is valid, then the ioctl should return
  'operation unimplemented' if gntdev does not implement the operation.
---
 configure                   |  56 +++++++++++++++++
 hw/block/xen_disk.c         | 142 ++++++++++++++++++++++++++++++++++++++++++--
 include/hw/xen/xen_common.h |  25 ++++++++
 3 files changed, 218 insertions(+), 5 deletions(-)

diff --git a/configure b/configure
index f57fcc6..b5bf7d4 100755
--- a/configure
+++ b/configure
@@ -1956,6 +1956,62 @@ EOF
 /*
  * If we have stable libs the we don't want the libxc compat
  * layers, regardless of what CFLAGS we may have been given.
+ *
+ * Also, check if xengnttab_grant_copy_segment_t is defined and
+ * grant copy operation is implemented.
+ */
+#undef XC_WANT_COMPAT_EVTCHN_API
+#undef XC_WANT_COMPAT_GNTTAB_API
+#undef XC_WANT_COMPAT_MAP_FOREIGN_API
+#include <xenctrl.h>
+#include <xenstore.h>
+#include <xenevtchn.h>
+#include <xengnttab.h>
+#include <xenforeignmemory.h>
+#include <stdint.h>
+#include <xen/hvm/hvm_info_table.h>
+#if !defined(HVM_MAX_VCPUS)
+# error HVM_MAX_VCPUS not defined
+#endif
+int main(void) {
+  xc_interface *xc = NULL;
+  xenforeignmemory_handle *xfmem;
+  xenevtchn_handle *xe;
+  xengnttab_handle *xg;
+  xen_domain_handle_t handle;
+  xengnttab_grant_copy_segment_t* seg = NULL;
+
+  xs_daemon_open();
+
+  xc = xc_interface_open(0, 0, 0);
+  xc_hvm_set_mem_type(0, 0, HVMMEM_ram_ro, 0, 0);
+  xc_domain_add_to_physmap(0, 0, XENMAPSPACE_gmfn, 0, 0);
+  xc_hvm_inject_msi(xc, 0, 0xf0000000, 0x00000000);
+  xc_hvm_create_ioreq_server(xc, 0, HVM_IOREQSRV_BUFIOREQ_ATOMIC, NULL);
+  xc_domain_create(xc, 0, handle, 0, NULL, NULL);
+
+  xfmem = xenforeignmemory_open(0, 0);
+  xenforeignmemory_map(xfmem, 0, 0, 0, 0, 0);
+
+  xe = xenevtchn_open(0, 0);
+  xenevtchn_fd(xe);
+
+  xg = xengnttab_open(0, 0);
+  xengnttab_map_grant_ref(xg, 0, 0, 0);
+  xengnttab_grant_copy(xg, 0, seg);
+
+  return 0;
+}
+EOF
+      compile_prog "" "$xen_libs $xen_stable_libs"
+    then
+    xen_ctrl_version=480
+    xen=yes
+  elif
+      cat > $TMPC <
[...]
+static void free_buffers(struct ioreq *ioreq)
+{
+    int i;
+
+    for (i = 0; i < ioreq->v.niov; i++) {
+        ioreq->page[i] = NULL;
+    }
+
+    qemu_vfree(ioreq->pages);
+}
+
+static int ioreq_init_copy_buffers(struct ioreq *ioreq)
+{
+    int i;
+
+    if (ioreq->v.niov == 0) {
+        return 0;
+    }
+
+    ioreq->pages = qemu_memalign(XC_PAGE_SIZE, ioreq->v.niov * XC_PAGE_SIZE);
+    if (!ioreq->pages) {
+        return -1;
+    }
+
+    for (i = 0; i < ioreq->v.niov; i++) {
+        ioreq->page[i] = ioreq->pages + i * XC_PAGE_SIZE;
+        ioreq->v.iov[i].iov_base = ioreq->page[i];
+    }
+
+    return 0;
+}
+
+static int ioreq_copy(struct ioreq *ioreq)
+{
+    xengnttab_handle *gnt = ioreq->blkdev->xendev.gnttabdev;
+    xengnttab_grant_copy_segment_t segs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+    int i, count = 0, r, rc;
+    int64_t file_blk = ioreq->blkdev->file_blk;
+
+    if (ioreq->v.niov == 0) {
+        return 0;
+    }
+
+    count = ioreq->v.niov;
+
+    for (i = 0; i < count; i++) {
+
+        if (ioreq->req.operation == BLKIF_OP_READ) {
+            segs[i].flags = GNTCOPY_dest_gref;
+            segs[i].dest.foreign.ref = ioreq->refs[i];
+            segs[i].dest.foreign.domid = ioreq->domids[i];
+            segs[i].dest.foreign.offset = ioreq->req.seg[i].first_sect * file_blk;
+            segs[i].source.virt = ioreq->v.iov[i].iov_base;
+        } else {
+            segs[i].flags = GNTCOPY_source_gref;
+            segs[i].source.foreign.ref = ioreq->refs[i];
+            segs[i].source.foreign.domid = ioreq->domids[i];
+            segs[i].source.foreign.offset = ioreq->req.seg[i].first_sect * file_blk;
+            segs[i].dest.virt = ioreq->v.iov[i].iov_base;
+        }
+        segs[i].len = (ioreq->req.seg[i].last_sect
+                       - ioreq->req.seg[i].first_sect + 1) * file_blk;
+
+    }
+
+    rc = xengnttab_grant_copy(gnt, count, segs);
+
+    if (rc) {
+        xen_be_printf(&ioreq->blkdev->xendev, 0,
+                      "failed to copy data %d\n", rc);
+        ioreq->aio_errors++;
+        return -1;
+    } else {
+        r = 0;
+    }
+
+    for (i = 0; i < count; i++) {
+        if (segs[i].status != GNTST_okay) {
+            xen_be_printf(&ioreq->blkdev->xendev, 3,
+                          "failed to copy data %d for gref %d, domid %d\n", rc,
+                          ioreq->refs[i], ioreq->domids[i]);
+            ioreq->aio_errors++;
+            r = -1;
+        }
+    }
+
+    return r;
+}
+
 static int ioreq_runio_qemu_aio(struct ioreq *ioreq);
 
 static void qemu_aio_complete(void *opaque, int ret)
@@ -511,8 +603,29 @@ static void qemu_aio_complete(void *opaque, int ret)
         return;
     }
 
+    if (ioreq->blkdev->feature_grant_copy) {
+        switch (ioreq->req.operation) {
+        case BLKIF_OP_READ:
+            /* in case of failure ioreq->aio_errors is increased */
+            ioreq_copy(ioreq);
+            free_buffers(ioreq);
+            break;
+        case BLKIF_OP_WRITE:
+        case BLKIF_OP_FLUSH_DISKCACHE:
+            if (!ioreq->req.nr_segments) {
+                break;
+            }
+            free_buffers(ioreq);
+            break;
+        default:
+            break;
+        }
+    }
+
     ioreq->status = ioreq->aio_errors ? BLKIF_RSP_ERROR : BLKIF_RSP_OKAY;
-    ioreq_unmap(ioreq);
+    if (!ioreq->blkdev->feature_grant_copy) {
+        ioreq_unmap(ioreq);
+    }
     ioreq_finish(ioreq);
     switch (ioreq->req.operation) {
     case BLKIF_OP_WRITE:
@@ -538,8 +651,20 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
 
-    if (ioreq->req.nr_segments && ioreq_map(ioreq) == -1) {
-        goto err_no_map;
+    if (ioreq->blkdev->feature_grant_copy) {
+        ioreq_init_copy_buffers(ioreq);
+        if (ioreq->req.nr_segments && (ioreq->req.operation == BLKIF_OP_WRITE ||
+            ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE)) {
+            if (ioreq_copy(ioreq)) {
+                free_buffers(ioreq);
+                goto err;
+            }
+        }
+
+    } else {
+        if (ioreq->req.nr_segments && ioreq_map(ioreq)) {
+            goto err;
+        }
     }
 
     ioreq->aio_inflight++;
@@ -582,6 +707,9 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
     }
     default:
         /* unknown operation (shouldn't happen -- parse catches this) */
+        if (!ioreq->blkdev->feature_grant_copy) {
+            ioreq_unmap(ioreq);
+        }
         goto err;
     }
 
@@ -590,8 +718,6 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
     return 0;
 
 err:
-    ioreq_unmap(ioreq);
-err_no_map:
     ioreq_finish(ioreq);
     ioreq->status = BLKIF_RSP_ERROR;
     return -1;
@@ -1032,6 +1158,12 @@ static int blk_connect(struct XenDevice *xendev)
 
     xen_be_bind_evtchn(&blkdev->xendev);
 
+    blkdev->feature_grant_copy =
+                (xengnttab_grant_copy(blkdev->xendev.gnttabdev, 0, NULL) == 0);
+
+    xen_be_printf(&blkdev->xendev, 3, "grant copy operation %s\n",
+                  blkdev->feature_grant_copy ? "enabled" : "disabled");
"enabled" : "disabled"); + xen_be_printf(&blkdev->xendev, 1, "ok: proto %s, ring-ref %d, " "remote port %d, local port %d\n", blkdev->xendev.protocol, blkdev->ring_ref, diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h index 640c31e..e80c61f 100644 --- a/include/hw/xen/xen_common.h +++ b/include/hw/xen/xen_common.h @@ -25,6 +25,31 @@ */ /* Xen 4.2 through 4.6 */ +#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 480 + +struct xengnttab_grant_copy_segment { + union xengnttab_copy_ptr { + void *virt; + struct { + uint32_t ref; + uint16_t offset; + uint16_t domid; + } foreign; + } source, dest; + uint16_t len; + uint16_t flags; + int16_t status; +}; +typedef struct xengnttab_grant_copy_segment xengnttab_grant_copy_segment_t; + +static inline int xengnttab_grant_copy(xengnttab_handle *xgt, uint32_t count, + xengnttab_grant_copy_segment_t *segs) +{ + return -1; +} + +#endif + #if CONFIG_XEN_CTRL_INTERFACE_VERSION < 471 typedef xc_interface xenforeignmemory_handle;