From patchwork Sun Mar 10 20:36:32 2013
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 2245841
Message-ID: <513CEED0.9040107@inktank.com>
Date: Sun, 10 Mar 2013 15:36:32 -0500
From: Alex Elder
To: ceph-devel@vger.kernel.org
Subject: [PATCH 2/4] libceph: kill osd request r_trail
References: <513CEE83.4040900@inktank.com>
In-Reply-To: <513CEE83.4040900@inktank.com>

The osd trail is a pagelist, used only for a CALL osd operation
to hold the class and method names, along with any input data for
the call.

It is currently used only by the rbd client, and when it is used it
is the only bit of outbound data in the osd request.  Since we
already support (non-trail) pagelist data in a message, we can just
save this outbound CALL data in the "normal" pagelist rather than
the trail, and get rid of the trail entirely.

The existing pagelist support depends on the pagelist being
dynamically allocated, and ownership of it is passed to the
messenger once it's been attached to a message.  (That is to say,
the messenger releases and frees the pagelist when it's done with
it.)  That means we need to dynamically allocate the pagelist also.
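For reference, that handoff contract works roughly as sketched
below.  This is only an illustration, not code from this patch:
the helper name example_release_msg_pagelist() is made up, while
msg->pagelist, ceph_pagelist_release(), and kfree() match the
messenger code of this era, which drops the pages and then frees
the pagelist structure itself when the message is destroyed.

	#include <linux/slab.h>
	#include <linux/ceph/messenger.h>
	#include <linux/ceph/pagelist.h>

	/*
	 * Hypothetical helper showing the teardown the messenger
	 * applies to an attached pagelist: release the pages, then
	 * free the structure.  This is why a pagelist handed to a
	 * message must be kmalloc()'d rather than embedded in the
	 * osd request.
	 */
	static void example_release_msg_pagelist(struct ceph_msg *msg)
	{
		if (msg->pagelist) {
			ceph_pagelist_release(msg->pagelist);	/* drop pages */
			kfree(msg->pagelist);			/* free struct */
			msg->pagelist = NULL;
		}
	}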
Note that we simply assert that the allocation of a pagelist
structure succeeds.  Appending to a pagelist might require a
dynamic allocation, so we're already assuming we won't run into
trouble doing so (we're just ignoring any failures--and that
should be fixed at some point).

This resolves:
    http://tracker.ceph.com/issues/4407

Signed-off-by: Alex Elder
---
 include/linux/ceph/osd_client.h |    1 -
 net/ceph/osd_client.c           |   23 ++++++++++++-----------
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
index cf0ba93..1dab291 100644
--- a/include/linux/ceph/osd_client.h
+++ b/include/linux/ceph/osd_client.h
@@ -134,7 +134,6 @@ struct ceph_osd_request {
 
 	struct ceph_osd_data r_data_in;
 	struct ceph_osd_data r_data_out;
-	struct ceph_pagelist r_trail;	/* trailing part of data out */
 };
 
 struct ceph_osd_event {
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 8fa3300..d0a9fc4 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -138,7 +138,6 @@ void ceph_osdc_release_request(struct kref *kref)
 	}
 	ceph_put_snap_context(req->r_snapc);
 
-	ceph_pagelist_release(&req->r_trail);
 	if (req->r_mempool)
 		mempool_free(req, req->r_osdc->req_mempool);
 	else
@@ -202,7 +201,6 @@ struct ceph_osd_request *ceph_osdc_alloc_request(struct ceph_osd_client *osdc,
 
 	req->r_data_in.type = CEPH_OSD_DATA_TYPE_NONE;
 	req->r_data_out.type = CEPH_OSD_DATA_TYPE_NONE;
-	ceph_pagelist_init(&req->r_trail);
 
 	/* create request message; allow space for oid */
 	if (use_mempool)
@@ -227,7 +225,7 @@ static u64 osd_req_encode_op(struct ceph_osd_request *req,
 			     struct ceph_osd_req_op *src)
 {
 	u64 out_data_len = 0;
-	u64 tmp;
+	struct ceph_pagelist *pagelist;
 
 	dst->op = cpu_to_le16(src->op);
 
@@ -246,18 +244,23 @@ static u64 osd_req_encode_op(struct ceph_osd_request *req,
 			cpu_to_le32(src->extent.truncate_seq);
 		break;
 	case CEPH_OSD_OP_CALL:
+		pagelist = kmalloc(sizeof (*pagelist), GFP_NOFS);
+		BUG_ON(!pagelist);
+		ceph_pagelist_init(pagelist);
+
 		dst->cls.class_len = src->cls.class_len;
 		dst->cls.method_len = src->cls.method_len;
 		dst->cls.indata_len = cpu_to_le32(src->cls.indata_len);
-
-		tmp = req->r_trail.length;
-		ceph_pagelist_append(&req->r_trail, src->cls.class_name,
+		ceph_pagelist_append(pagelist, src->cls.class_name,
 			src->cls.class_len);
-		ceph_pagelist_append(&req->r_trail, src->cls.method_name,
+		ceph_pagelist_append(pagelist, src->cls.method_name,
 			src->cls.method_len);
-		ceph_pagelist_append(&req->r_trail, src->cls.indata,
+		ceph_pagelist_append(pagelist, src->cls.indata,
 			src->cls.indata_len);
-		out_data_len = req->r_trail.length - tmp;
+
+		req->r_data_out.type = CEPH_OSD_DATA_TYPE_PAGELIST;
+		req->r_data_out.pagelist = pagelist;
+		out_data_len = pagelist->length;
 		break;
 	case CEPH_OSD_OP_STARTSYNC:
 		break;
@@ -1782,8 +1785,6 @@ int ceph_osdc_start_request(struct ceph_osd_client *osdc,
 
 	ceph_osdc_msg_data_set(req->r_reply, &req->r_data_in);
 	ceph_osdc_msg_data_set(req->r_request, &req->r_data_out);
-	if (req->r_trail.length)
-		ceph_msg_data_set_trail(req->r_request, &req->r_trail);
 
 	register_request(osdc, req);
 
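A note on the "ignoring any failures" caveat above:
ceph_pagelist_append() returns 0 or -ENOMEM, so a checked version
of the new CEPH_OSD_OP_CALL branch would look roughly like the
fragment below.  This is only a sketch of the deferred fix, not
part of the patch; actually propagating the error would also mean
changing osd_req_encode_op() to report a status rather than just
an outbound data length.

	int ret;

	/* check each append; a failure means a page allocation failed */
	ret = ceph_pagelist_append(pagelist, src->cls.class_name,
				   src->cls.class_len);
	if (!ret)
		ret = ceph_pagelist_append(pagelist, src->cls.method_name,
					   src->cls.method_len);
	if (!ret)
		ret = ceph_pagelist_append(pagelist, src->cls.indata,
					   src->cls.indata_len);
	if (ret) {
		/* undo: drop any pages already appended, free the struct */
		ceph_pagelist_release(pagelist);
		kfree(pagelist);
		return ret;	/* needs an int-returning encode path */
	}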