From patchwork Wed May 30 17:43:38 2018
X-Patchwork-Submitter: Ilya Dryomov
X-Patchwork-Id: 10439489
From: Ilya Dryomov
To: ceph-devel@vger.kernel.org
Cc: Jeff Layton
Subject: [PATCH 3/7] libceph: use for_each_request() in ceph_osdc_abort_on_full()
Date: Wed, 30 May 2018 19:43:38 +0200
Message-Id: <1527702222-8232-4-git-send-email-idryomov@gmail.com>
In-Reply-To: <1527702222-8232-1-git-send-email-idryomov@gmail.com>
References: <1527702222-8232-1-git-send-email-idryomov@gmail.com>
X-Mailing-List: ceph-devel@vger.kernel.org

Scanning the trees just to see if there is anything to abort is
unnecessary -- all that is needed here is to update the epoch barrier
first, before we start aborting.  Simplify and do the update inside the
loop before calling abort_request() for the first time.

The switch to for_each_request() also fixes a bug: homeless requests
weren't even considered for aborting.

Signed-off-by: Ilya Dryomov
---
 net/ceph/osd_client.c | 79 +++++++++++++++++----------------------------------
 1 file changed, 26 insertions(+), 53 deletions(-)

diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index a4c12c37aa90..be274ab43d01 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -2434,6 +2434,30 @@ void ceph_osdc_update_epoch_barrier(struct ceph_osd_client *osdc, u32 eb)
 EXPORT_SYMBOL(ceph_osdc_update_epoch_barrier);
 
 /*
+ * We can end up releasing caps as a result of abort_request().
+ * In that case, we probably want to ensure that the cap release message
+ * has an updated epoch barrier in it, so set the epoch barrier prior to
+ * aborting the first request.
+ */
+static int abort_on_full_fn(struct ceph_osd_request *req, void *arg)
+{
+	struct ceph_osd_client *osdc = req->r_osdc;
+	bool *victims = arg;
+
+	if (req->r_abort_on_full &&
+	    (ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) ||
+	     pool_full(osdc, req->r_t.target_oloc.pool))) {
+		if (!*victims) {
+			update_epoch_barrier(osdc, osdc->osdmap->epoch);
+			*victims = true;
+		}
+		abort_request(req, -ENOSPC);
+	}
+
+	return 0; /* continue iteration */
+}
+
+/*
  * Drop all pending requests that are stalled waiting on a full condition to
  * clear, and complete them with ENOSPC as the return code. Set the
  * osdc->epoch_barrier to the latest map epoch that we've seen if any were
@@ -2441,61 +2465,10 @@ EXPORT_SYMBOL(ceph_osdc_update_epoch_barrier);
  * aborted.
  */
 static void ceph_osdc_abort_on_full(struct ceph_osd_client *osdc)
 {
-	struct rb_node *n;
 	bool victims = false;
 
-	dout("enter abort_on_full\n");
-
-	if (!ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) && !have_pool_full(osdc))
-		goto out;
-
-	/* Scan list and see if there is anything to abort */
-	for (n = rb_first(&osdc->osds); n; n = rb_next(n)) {
-		struct ceph_osd *osd = rb_entry(n, struct ceph_osd, o_node);
-		struct rb_node *m;
-
-		m = rb_first(&osd->o_requests);
-		while (m) {
-			struct ceph_osd_request *req = rb_entry(m,
-					struct ceph_osd_request, r_node);
-			m = rb_next(m);
-
-			if (req->r_abort_on_full) {
-				victims = true;
-				break;
-			}
-		}
-		if (victims)
-			break;
-	}
-
-	if (!victims)
-		goto out;
-
-	/*
-	 * Update the barrier to current epoch if it's behind that point,
-	 * since we know we have some calls to be aborted in the tree.
-	 */
-	update_epoch_barrier(osdc, osdc->osdmap->epoch);
-
-	for (n = rb_first(&osdc->osds); n; n = rb_next(n)) {
-		struct ceph_osd *osd = rb_entry(n, struct ceph_osd, o_node);
-		struct rb_node *m;
-
-		m = rb_first(&osd->o_requests);
-		while (m) {
-			struct ceph_osd_request *req = rb_entry(m,
-					struct ceph_osd_request, r_node);
-			m = rb_next(m);
-
-			if (req->r_abort_on_full &&
-			    (ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) ||
-			     pool_full(osdc, req->r_t.target_oloc.pool)))
-				abort_request(req, -ENOSPC);
-		}
-	}
-out:
-	dout("return abort_on_full barrier=%u\n", osdc->epoch_barrier);
+	if (ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) || have_pool_full(osdc))
+		for_each_request(osdc, abort_on_full_fn, &victims);
 }
 
 static void check_pool_dne(struct ceph_osd_request *req)