From patchwork Sun Mar 10 19:17:20 2013
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 2245761
Message-ID: <513CDC40.4020105@inktank.com>
Date: Sun, 10 Mar 2013 14:17:20 -0500
From: Alex Elder
To: ceph-devel@vger.kernel.org
Subject: [PATCH 6/8] libceph: use data cursor for message pagelist
References: <513CD9BE.1070505@inktank.com>
In-Reply-To: <513CD9BE.1070505@inktank.com>

Switch to using the message cursor for the (non-trail) outgoing
pagelist data item in a message if present.

Notes on the logic changes in out_msg_pos_next():
    - only the mds client uses a ceph pagelist for message data;
    - if the mds client ever uses a pagelist, it never uses a page
      array (or anything else, for that matter) for data in the
      same message;
    - only the osd client uses the trail portion of a message's
      data, and when it does, it never uses any other data fields
      for outgoing data in the same message; and finally
    - only the rbd client uses bio message data (never pagelist).

Therefore out_msg_pos_next() can assume:
    - if we're in the trail portion of a message, the message's
      pagelist, page, and bio data can be ignored; and
    - if there is a pagelist, there will never be any bio or page
      array data, and vice versa.
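For illustration only, here is a minimal userspace sketch (not kernel
code) of the branch structure those assumptions permit in
out_msg_pos_next(): trail data excludes every other data item, and
pagelist data excludes page-array and bio data, so a simple
if/else-if suffices.  All names here (msg_model, advance_out_pos,
data_advance) are hypothetical stand-ins, not libceph symbols.

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	struct msg_model {
		bool has_trail;		/* osd client only; excludes all others */
		bool has_pagelist;	/* mds client only; excludes pages/bio */
		bool has_pages;
		bool has_bio;		/* rbd client only */
	};

	/* stand-in for ceph_msg_data_advance(): returns true when a
	 * piece boundary was crossed and a fresh CRC would be needed */
	static bool data_advance(const char *which, size_t sent)
	{
		printf("advance %s cursor by %zu bytes\n", which, sent);
		return false;
	}

	static void advance_out_pos(const struct msg_model *msg, size_t sent,
				    size_t len, bool in_trail)
	{
		bool need_crc = false;

		/* mirrors the patch: trail first, else pagelist; the data
		 * sources are mutually exclusive, so no further branches */
		if (in_trail)
			need_crc = data_advance("trail", sent);
		else if (msg->has_pagelist)
			need_crc = data_advance("pagelist", sent);

		if (need_crc && sent != len)
			printf("BUG: crossed a piece boundary mid-write\n");
	}

	int main(void)
	{
		struct msg_model mds_msg = { .has_pagelist = true };

		advance_out_pos(&mds_msg, 4096, 4096, false);
		return 0;
	}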
Signed-off-by: Alex Elder
---
 net/ceph/messenger.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index ce81164..c0f89c1 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -921,8 +921,10 @@ static void prepare_message_data(struct ceph_msg *msg,
 #endif
 	msg_pos->data_pos = 0;
 
-	/* If there's a trail, initialize its cursor */
+	/* Initialize data cursors */
 
+	if (ceph_msg_has_pagelist(msg))
+		ceph_msg_data_cursor_init(&msg->l);
 	if (ceph_msg_has_trail(msg))
 		ceph_msg_data_cursor_init(&msg->t);
 
@@ -1210,18 +1212,19 @@ static void out_msg_pos_next(struct ceph_connection *con, struct page *page,
 {
 	struct ceph_msg *msg = con->out_msg;
 	struct ceph_msg_pos *msg_pos = &con->out_msg_pos;
+	bool need_crc = false;
 
 	BUG_ON(!msg);
 	BUG_ON(!sent);
 
 	msg_pos->data_pos += sent;
 	msg_pos->page_pos += sent;
-	if (in_trail) {
-		bool need_crc;
-
+	if (in_trail)
 		need_crc = ceph_msg_data_advance(&msg->t, sent);
-		BUG_ON(need_crc && sent != len);
-	}
+	else if (ceph_msg_has_pagelist(msg))
+		need_crc = ceph_msg_data_advance(&msg->l, sent);
+	BUG_ON(need_crc && sent != len);
+
 	if (sent < len)
 		return;
 
@@ -1229,13 +1232,10 @@ static void out_msg_pos_next(struct ceph_connection *con, struct page *page,
 	msg_pos->page_pos = 0;
 	msg_pos->page++;
 	msg_pos->did_page_crc = false;
-	if (ceph_msg_has_pagelist(msg)) {
-		list_rotate_left(&msg->l.pagelist->head);
 #ifdef CONFIG_BLOCK
-	} else if (ceph_msg_has_bio(msg)) {
+	if (ceph_msg_has_bio(msg))
 		iter_bio_next(&msg->b.bio_iter, &msg->b.bio_seg);
 #endif
-	}
 }
 
 static void in_msg_pos_next(struct ceph_connection *con, size_t len,
@@ -1330,8 +1330,9 @@ static int write_partial_message_data(struct ceph_connection *con)
 		} else if (ceph_msg_has_pages(msg)) {
 			page = msg->p.pages[msg_pos->page];
 		} else if (ceph_msg_has_pagelist(msg)) {
-			page = list_first_entry(&msg->l.pagelist->head,
-						struct page, lru);
+			use_cursor = true;
+			page = ceph_msg_data_next(&msg->l, &page_offset,
+							&length, &last_piece);
 #ifdef CONFIG_BLOCK
 		} else if (ceph_msg_has_bio(msg)) {
 			struct bio_vec *bv;
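For readers new to the data cursor series, the sketch below models the
cursor semantics the hunks above rely on, as used in this patch:
ceph_msg_data_next() describes the current piece (page, offset, length,
last?) without consuming anything, and ceph_msg_data_advance() consumes
bytes and returns true when the cursor crosses onto a new piece, at
which point the caller restarts its CRC.  The types and helpers below
are illustrative userspace stand-ins under those assumptions, not the
kernel's implementation.

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	#define PIECE_SIZE 4096		/* model a page-sized pagelist chunk */

	struct cursor_model {
		size_t resid;		/* bytes not yet consumed */
		size_t piece_off;	/* offset within the current piece */
	};

	static void cursor_init(struct cursor_model *c, size_t total)
	{
		c->resid = total;
		c->piece_off = 0;
	}

	/* like ceph_msg_data_next(): describe the current piece */
	static size_t cursor_next(const struct cursor_model *c, bool *last_piece)
	{
		size_t avail = PIECE_SIZE - c->piece_off;
		size_t length = c->resid < avail ? c->resid : avail;

		*last_piece = (length == c->resid);
		return length;
	}

	/* like ceph_msg_data_advance(): consume bytes, report new piece */
	static bool cursor_advance(struct cursor_model *c, size_t bytes)
	{
		c->resid -= bytes;
		c->piece_off += bytes;
		if (c->piece_off < PIECE_SIZE || !c->resid)
			return false;
		c->piece_off = 0;
		return true;		/* new piece: caller restarts its CRC */
	}

	int main(void)
	{
		struct cursor_model c;
		bool last;

		cursor_init(&c, 3 * PIECE_SIZE / 2);	/* 1.5 pieces of data */
		while (c.resid) {
			size_t len = cursor_next(&c, &last);

			printf("piece of %zu bytes%s\n", len, last ? " (last)" : "");
			if (cursor_advance(&c, len))
				printf("crossed onto a new piece\n");
		}
		return 0;
	}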