From patchwork Thu Nov 21 08:19:21 2013
X-Patchwork-Submitter: dongmao zhang
X-Patchwork-Id: 3217131
Subject: [dm-devel] [PATCH 1/1] improve the performance of dm-log-userspace
From: dongmao zhang
To: dm-devel@redhat.com
Date: Thu, 21 Nov 2013 16:19:21 +0800
List-Id: device-mapper development

Hi Brassow,

I changed some of the indentation and now use "strstr" to detect
integrated_flush. Please review :)

Signed-off-by: dongmao zhang
---
 drivers/md/dm-log-userspace-base.c    | 118 ++++++++++++++++++++++++++++-----
 include/uapi/linux/dm-log-userspace.h |  10 ++-
 2 files changed, 106 insertions(+), 22 deletions(-)

diff --git a/drivers/md/dm-log-userspace-base.c b/drivers/md/dm-log-userspace-base.c
index 9429159..b47b56d 100644
--- a/drivers/md/dm-log-userspace-base.c
+++ b/drivers/md/dm-log-userspace-base.c
@@ -11,9 +11,10 @@
 #include
 #include
+#include

 #include "dm-log-userspace-transfer.h"

-#define DM_LOG_USERSPACE_VSN "1.1.0"
+#define DM_LOG_USERSPACE_VSN "1.2.0"

 struct flush_entry {
 	int type;
@@ -58,6 +59,12 @@ struct log_c {
 	spinlock_t flush_lock;
 	struct list_head mark_list;
 	struct list_head clear_list;
+
+	/* workqueue for flushing clear regions */
+	struct workqueue_struct *dmlog_wq;
+	struct delayed_work flush_log_work;
+	atomic_t sched_flush;
+	uint32_t integrated_flush;
 };

 static mempool_t *flush_entry_pool;
@@ -141,6 +148,17 @@ static int build_constructor_string(struct dm_target *ti,
 	return str_size;
 }

+static void do_flush(struct work_struct *work)
+{
+	int r;
+	struct log_c *lc = container_of(work, struct log_c, flush_log_work.work);
+	r = userspace_do_request(lc, lc->uuid, DM_ULOG_FLUSH,
+				 NULL, 0, NULL, NULL);
+	atomic_set(&lc->sched_flush, 0);
+	if (r)
+		dm_table_event(lc->ti->table);
+}
+
 /*
  * userspace_ctr
  *
@@ -199,6 +217,10 @@ static int userspace_ctr(struct dm_dirty_log *log, struct dm_target *ti,
 		return str_size;
 	}

+	lc->integrated_flush = 0;
+	if (strstr(ctr_str, "integrated_flush"))
+		lc->integrated_flush = 1;
+
 	devices_rdata = kzalloc(devices_rdata_size, GFP_KERNEL);
 	if (!devices_rdata) {
 		DMERR("Failed to allocate memory for device information");
@@ -246,6 +268,20 @@ static int userspace_ctr(struct dm_dirty_log *log, struct dm_target *ti,
 			DMERR("Failed to register %s with device-mapper",
 			      devices_rdata);
 	}
+
+	if (lc->integrated_flush) {
+		lc->dmlog_wq = alloc_workqueue("dmlogd",
+					       WQ_NON_REENTRANT | WQ_MEM_RECLAIM, 0);
+		if (!lc->dmlog_wq) {
+			DMERR("couldn't start dmlogd");
+			r = -ENOMEM;
+			goto out;
+		}
+
+		INIT_DELAYED_WORK(&lc->flush_log_work, do_flush);
+		atomic_set(&lc->sched_flush, 0);
+	}
+
 out:
 	kfree(devices_rdata);
 	if (r) {
@@ -264,6 +300,15 @@ static void userspace_dtr(struct dm_dirty_log *log)
 {
 	struct log_c *lc = log->context;

+	if (lc->integrated_flush) {
+		/* flush workqueue */
+		if (atomic_read(&lc->sched_flush))
+			flush_delayed_work(&lc->flush_log_work);
+
+		flush_workqueue(lc->dmlog_wq);
+		destroy_workqueue(lc->dmlog_wq);
+	}
+
 	(void) dm_consult_userspace(lc->uuid, lc->luid, DM_ULOG_DTR,
 				    NULL, 0, NULL, NULL);

@@ -294,6 +339,10 @@ static int userspace_postsuspend(struct dm_dirty_log *log)
 	int r;
 	struct log_c *lc = log->context;

+	/* run any planned flush earlier */
+	if (lc->integrated_flush && atomic_read(&lc->sched_flush))
+		flush_delayed_work(&lc->flush_log_work);
+
 	r = dm_consult_userspace(lc->uuid, lc->luid, DM_ULOG_POSTSUSPEND,
 				 NULL, 0, NULL, NULL);

@@ -405,7 +454,8 @@ static int flush_one_by_one(struct log_c *lc, struct list_head *flush_list)
 	return r;
 }

-static int flush_by_group(struct log_c *lc, struct list_head *flush_list)
+static int flush_by_group(struct log_c *lc, struct list_head *flush_list,
+			  int flush_with_payload)
 {
 	int r = 0;
 	int count;
@@ -431,15 +481,25 @@ static int flush_by_group(struct log_c *lc, struct list_head *flush_list)
 			break;
 		}

-		r = userspace_do_request(lc, lc->uuid, type,
-					 (char *)(group),
-					 count * sizeof(uint64_t),
-					 NULL, NULL);
-		if (r) {
-			/* Group send failed.  Attempt one-by-one. */
-			list_splice_init(&tmp_list, flush_list);
-			r = flush_one_by_one(lc, flush_list);
-			break;
+		if (flush_with_payload) {
+			r = userspace_do_request(lc, lc->uuid, DM_ULOG_FLUSH,
+						 (char *)(group),
+						 count * sizeof(uint64_t),
+						 NULL, NULL);
+			/* integrated flush failed */
+			if (r)
+				return r;
+		} else {
+			r = userspace_do_request(lc, lc->uuid, type,
+						 (char *)(group),
+						 count * sizeof(uint64_t),
+						 NULL, NULL);
+			if (r) {
+				/* Group send failed.  Attempt one-by-one. */
+				list_splice_init(&tmp_list, flush_list);
+				r = flush_one_by_one(lc, flush_list);
+				break;
+			}
 		}
 	}

@@ -474,6 +534,8 @@ static int userspace_flush(struct dm_dirty_log *log)
 	int r = 0;
 	unsigned long flags;
 	struct log_c *lc = log->context;
+	int is_mark_list_empty;
+	int is_clear_list_empty;
 	LIST_HEAD(mark_list);
 	LIST_HEAD(clear_list);
 	struct flush_entry *fe, *tmp_fe;
@@ -483,19 +545,39 @@ static int userspace_flush(struct dm_dirty_log *log)
 	list_splice_init(&lc->clear_list, &clear_list);
 	spin_unlock_irqrestore(&lc->flush_lock, flags);

-	if (list_empty(&mark_list) && list_empty(&clear_list))
+	is_mark_list_empty = list_empty(&mark_list);
+	is_clear_list_empty = list_empty(&clear_list);
+
+	if (is_mark_list_empty && is_clear_list_empty)
 		return 0;

-	r = flush_by_group(lc, &mark_list);
-	if (r)
-		goto fail;
+	r = flush_by_group(lc, &clear_list, 0);

-	r = flush_by_group(lc, &clear_list);
 	if (r)
 		goto fail;

-	r = userspace_do_request(lc, lc->uuid, DM_ULOG_FLUSH,
-				 NULL, 0, NULL, NULL);
+	if (lc->integrated_flush) {
+		/* send flush request with mark_list as payload */
+		r = flush_by_group(lc, &mark_list, 1);
+		if (r)
+			goto fail;
+		/* when there are only clear regions, plan a flush in the future */
+		if (!is_clear_list_empty && is_mark_list_empty &&
+		    !atomic_read(&lc->sched_flush)) {
+			queue_delayed_work(lc->dmlog_wq, &lc->flush_log_work, 3 * HZ);
+			atomic_set(&lc->sched_flush, 1);
+		} else {
+			/* cancel pending flush; the mark payload already flushed */
+			cancel_delayed_work(&lc->flush_log_work);
+			atomic_set(&lc->sched_flush, 0);
+		}
+	} else {
+		r = flush_by_group(lc, &mark_list, 0);
+		if (r)
+			goto fail;
+		r = userspace_do_request(lc, lc->uuid, DM_ULOG_FLUSH,
+					 NULL, 0, NULL, NULL);
+	}

 fail:
 	/*
diff --git a/include/uapi/linux/dm-log-userspace.h b/include/uapi/linux/dm-log-userspace.h
index 0678c2a..d1352ae 100644
--- a/include/uapi/linux/dm-log-userspace.h
+++ b/include/uapi/linux/dm-log-userspace.h
@@ -201,12 +201,12 @@
 * int (*flush)(struct dm_dirty_log *log);
 *
 * Payload-to-userspace:
- *	None.
+ *	If DM_INTEGRATED_FLUSH is set, the payload is the same as for DM_ULOG_MARK_REGION:
+ *	uint64_t [] - region(s) to mark
+ *	Otherwise, none.
 * Payload-to-kernel:
 *	None.
 *
- * No incoming or outgoing payload.  Simply flush log state to disk.
- *
 * When the request has been processed, user-space must return the
 * dm_ulog_request to the kernel - setting the 'error' field and clearing
 * 'data_size' appropriately.
@@ -385,8 +385,10 @@
 *	version 2:  DM_ULOG_CTR allowed to return a string containing a
 *	            device name that is to be registered with DM via
 *	            'dm_get_device'.
+ *	version 3:  DM_ULOG_FLUSH is capable of carrying a payload for
+ *	            marking regions.
 */
-#define DM_ULOG_REQUEST_VERSION 2
+#define DM_ULOG_REQUEST_VERSION 3

 struct dm_ulog_request {
	/*