From patchwork Fri Dec 6 01:50:21 2019
X-Patchwork-Submitter: Xiubo Li
X-Patchwork-Id: 11275547
From: xiubli@redhat.com
To: jlayton@kernel.org
Cc: sage@redhat.com, idryomov@gmail.com, zyan@redhat.com,
    pdonnell@redhat.com, ceph-devel@vger.kernel.org, Xiubo Li
Subject: [PATCH] ceph: add __send_request helper
Date: Thu, 5 Dec 2019 20:50:21 -0500
Message-Id: <20191206015021.31611-1-xiubli@redhat.com>
X-Mailing-List: ceph-devel@vger.kernel.org

From: Xiubo Li

The same prepare-and-send sequence is open-coded in three places:
once in __do_request() and twice in replay_unsafe_requests(). Factor
it out into a __send_request() helper and call that instead. No
functional change.

Signed-off-by: Xiubo Li
---
 fs/ceph/mds_client.c | 47 +++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 22 deletions(-)

diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index e47341da5a71..82dfc85b24ee 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -2514,6 +2514,26 @@ static int __prepare_send_request(struct ceph_mds_client *mdsc,
 	return 0;
 }
 
+/*
+ * called under mdsc->mutex
+ */
+static int __send_request(struct ceph_mds_client *mdsc,
+			  struct ceph_mds_session *session,
+			  struct ceph_mds_request *req,
+			  bool drop_cap_releases)
+{
+	int err;
+
+	err = __prepare_send_request(mdsc, req, session->s_mds,
+				     drop_cap_releases);
+	if (!err) {
+		ceph_msg_get(req->r_request);
+		ceph_con_send(&session->s_con, req->r_request);
+	}
+
+	return err;
+}
+
 /*
  * send request, or put it on the appropriate wait list.
  */
@@ -2603,11 +2623,7 @@ static void __do_request(struct ceph_mds_client *mdsc,
 	if (req->r_request_started == 0)   /* note request start time */
 		req->r_request_started = jiffies;
 
-	err = __prepare_send_request(mdsc, req, mds, false);
-	if (!err) {
-		ceph_msg_get(req->r_request);
-		ceph_con_send(&session->s_con, req->r_request);
-	}
+	err = __send_request(mdsc, session, req, false);
 
 out_session:
 	ceph_put_mds_session(session);
@@ -3210,7 +3226,6 @@ static void handle_session(struct ceph_mds_session *session,
 	return;
 }
 
-
 /*
  * called under session->mutex.
  */
@@ -3219,18 +3234,12 @@ static void replay_unsafe_requests(struct ceph_mds_client *mdsc,
 {
 	struct ceph_mds_request *req, *nreq;
 	struct rb_node *p;
-	int err;
 
 	dout("replay_unsafe_requests mds%d\n", session->s_mds);
 
 	mutex_lock(&mdsc->mutex);
-	list_for_each_entry_safe(req, nreq, &session->s_unsafe, r_unsafe_item) {
-		err = __prepare_send_request(mdsc, req, session->s_mds, true);
-		if (!err) {
-			ceph_msg_get(req->r_request);
-			ceph_con_send(&session->s_con, req->r_request);
-		}
-	}
+	list_for_each_entry_safe(req, nreq, &session->s_unsafe, r_unsafe_item)
+		__send_request(mdsc, session, req, true);
 
 	/*
 	 * also re-send old requests when MDS enters reconnect stage. So that MDS
@@ -3245,14 +3254,8 @@ static void replay_unsafe_requests(struct ceph_mds_client *mdsc,
 		if (req->r_attempts == 0)
 			continue; /* only old requests */
 		if (req->r_session &&
-		    req->r_session->s_mds == session->s_mds) {
-			err = __prepare_send_request(mdsc, req,
-						     session->s_mds, true);
-			if (!err) {
-				ceph_msg_get(req->r_request);
-				ceph_con_send(&session->s_con, req->r_request);
-			}
-		}
+		    req->r_session->s_mds == session->s_mds)
+			__send_request(mdsc, session, req, true);
 	}
 	mutex_unlock(&mdsc->mutex);
 }
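
A note on the ceph_msg_get()/ceph_con_send() pairing the helper keeps
intact: the messenger consumes one reference to the message once it has
been handed over, so the caller takes an extra reference first to keep
req->r_request alive for a possible later resend. Below is a minimal,
hypothetical user-space sketch of that take-a-ref-before-hand-off
pattern; struct msg, msg_get(), msg_put(), con_send() and
send_request() are toy stand-ins for illustration only, not the
kernel's types or APIs:

	#include <stdio.h>
	#include <stdlib.h>

	/* Toy refcounted message (stand-in for struct ceph_msg). */
	struct msg {
		int refcount;
		const char *payload;
	};

	static struct msg *msg_get(struct msg *m)
	{
		m->refcount++;		/* analogous to ceph_msg_get() */
		return m;
	}

	static void msg_put(struct msg *m)
	{
		if (--m->refcount == 0) {	/* last reference gone */
			printf("freeing \"%s\"\n", m->payload);
			free(m);
		}
	}

	/* Stand-in for ceph_con_send(): the connection consumes one
	 * reference when it is finished with the message. */
	static void con_send(struct msg *m)
	{
		printf("sending \"%s\"\n", m->payload);
		msg_put(m);
	}

	/* The pattern __send_request() preserves: take an extra
	 * reference before the hand-off, so the request still owns a
	 * live message afterwards and can re-send it later. */
	static int send_request(struct msg *m)
	{
		con_send(msg_get(m));
		return 0;
	}

	int main(void)
	{
		struct msg *m = malloc(sizeof(*m));

		if (!m)
			return 1;
		m->refcount = 1;	/* the request's own reference */
		m->payload = "unsafe request";

		send_request(m);	/* first send */
		send_request(m);	/* replay: message still valid */
		msg_put(m);		/* request done: freed here */
		return 0;
	}

Note also that both replay call sites ignore __send_request()'s return
value, just as the old code assigned err there without ever acting on
it; that is why the local 'int err' in replay_unsafe_requests() can be
removed.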