From patchwork Wed Aug 24 13:18:26 2016
X-Patchwork-Submitter: Ilya Dryomov
X-Patchwork-Id: 9297801
From: Ilya Dryomov <idryomov@gmail.com>
To: ceph-devel@vger.kernel.org
Subject: [PATCH 02/16] libceph: support for CEPH_OSD_OP_LIST_WATCHERS
Date: Wed, 24 Aug 2016 15:18:26 +0200
Message-Id: <1472044720-29116-3-git-send-email-idryomov@gmail.com>
X-Mailer: git-send-email 2.4.3
In-Reply-To: <1472044720-29116-1-git-send-email-idryomov@gmail.com>
References: <1472044720-29116-1-git-send-email-idryomov@gmail.com>

From: Douglas Fuller

Add support for this Ceph OSD op, needed to support the RBD exclusive
lock feature.

Signed-off-by: Douglas Fuller
[idryomov@gmail.com: refactor, misc fixes throughout]
Signed-off-by: Ilya Dryomov
Reviewed-by: Alex Elder
---
 include/linux/ceph/osd_client.h |  15 +++++-
 net/ceph/osd_client.c           | 117 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 131 insertions(+), 1 deletion(-)

diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
index 858932304260..19821a191732 100644
--- a/include/linux/ceph/osd_client.h
+++ b/include/linux/ceph/osd_client.h
@@ -121,6 +121,9 @@ struct ceph_osd_req_op {
 			struct ceph_osd_data response_data;
 		} notify;
 		struct {
+			struct ceph_osd_data response_data;
+		} list_watchers;
+		struct {
 			u64 expected_object_size;
 			u64 expected_write_size;
 		} alloc_hint;
@@ -249,6 +252,12 @@ struct ceph_osd_linger_request {
 	size_t *preply_len;
 };
 
+struct ceph_watch_item {
+	struct ceph_entity_name name;
+	u64 cookie;
+	struct ceph_entity_addr addr;
+};
+
 struct ceph_osd_client {
 	struct ceph_client *client;
 
@@ -346,7 +355,6 @@ extern void osd_req_op_cls_response_data_pages(struct ceph_osd_request *,
 					struct page **pages, u64 length,
 					u32 alignment, bool pages_from_pool,
 					bool own_pages);
-
 extern void osd_req_op_cls_init(struct ceph_osd_request *osd_req,
 					unsigned int which, u16 opcode,
 					const char *class, const char *method);
@@ -434,5 +442,10 @@ int ceph_osdc_notify(struct ceph_osd_client *osdc,
 		     size_t *preply_len);
 int ceph_osdc_watch_check(struct ceph_osd_client *osdc,
 			  struct ceph_osd_linger_request *lreq);
+int ceph_osdc_list_watchers(struct ceph_osd_client *osdc,
+			    struct ceph_object_id *oid,
+			    struct ceph_object_locator *oloc,
+			    struct ceph_watch_item **watchers,
+			    u32 *num_watchers);
 
 #endif
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index a97e7b506612..dd51ec8ce97f 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -338,6 +338,9 @@ static void osd_req_op_data_release(struct ceph_osd_request *osd_req,
 		ceph_osd_data_release(&op->notify.request_data);
 		ceph_osd_data_release(&op->notify.response_data);
 		break;
+	case CEPH_OSD_OP_LIST_WATCHERS:
+		ceph_osd_data_release(&op->list_watchers.response_data);
+		break;
 	default:
 		break;
 	}
@@ -863,6 +866,8 @@ static u32 osd_req_encode_op(struct ceph_osd_op *dst,
 	case CEPH_OSD_OP_NOTIFY:
 		dst->notify.cookie = cpu_to_le64(src->notify.cookie);
 		break;
+	case CEPH_OSD_OP_LIST_WATCHERS:
+		break;
 	case CEPH_OSD_OP_SETALLOCHINT:
 		dst->alloc_hint.expected_object_size =
 		    cpu_to_le64(src->alloc_hint.expected_object_size);
@@ -1445,6 +1450,10 @@ static void setup_request_data(struct ceph_osd_request *req,
 			ceph_osdc_msg_data_add(req->r_reply,
 					       &op->extent.osd_data);
 			break;
+		case CEPH_OSD_OP_LIST_WATCHERS:
+			ceph_osdc_msg_data_add(req->r_reply,
+					       &op->list_watchers.response_data);
+			break;
 
 		/* both */
 		case CEPH_OSD_OP_CALL:
@@ -3891,6 +3900,114 @@ int ceph_osdc_watch_check(struct ceph_osd_client *osdc,
 	return ret;
 }
 
+static int decode_watcher(void **p, void *end, struct ceph_watch_item *item)
+{
+	u8 struct_v;
+	u32 struct_len;
+	int ret;
+
+	ret = ceph_start_decoding(p, end, 2, "watch_item_t",
+				  &struct_v, &struct_len);
+	if (ret)
+		return ret;
+
+	ceph_decode_copy(p, &item->name, sizeof(item->name));
+	item->cookie = ceph_decode_64(p);
+	*p += 4;	/* skip timeout_seconds */
+	if (struct_v >= 2) {
+		ceph_decode_copy(p, &item->addr, sizeof(item->addr));
+		ceph_decode_addr(&item->addr);
+	}
+
+	dout("%s %s%llu cookie %llu addr %s\n", __func__,
+	     ENTITY_NAME(item->name), item->cookie,
+	     ceph_pr_addr(&item->addr.in_addr));
+	return 0;
+}
+
+static int decode_watchers(void **p, void *end,
+			   struct ceph_watch_item **watchers,
+			   u32 *num_watchers)
+{
+	u8 struct_v;
+	u32 struct_len;
+	int i;
+	int ret;
+
+	ret = ceph_start_decoding(p, end, 1, "obj_list_watch_response_t",
+				  &struct_v, &struct_len);
+	if (ret)
+		return ret;
+
+	*num_watchers = ceph_decode_32(p);
+	*watchers = kcalloc(*num_watchers, sizeof(**watchers), GFP_NOIO);
+	if (!*watchers)
+		return -ENOMEM;
+
+	for (i = 0; i < *num_watchers; i++) {
+		ret = decode_watcher(p, end, *watchers + i);
+		if (ret) {
+			kfree(*watchers);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * On success, the caller is responsible for:
+ *
+ *     kfree(watchers);
+ */
+int ceph_osdc_list_watchers(struct ceph_osd_client *osdc,
+			    struct ceph_object_id *oid,
+			    struct ceph_object_locator *oloc,
+			    struct ceph_watch_item **watchers,
+			    u32 *num_watchers)
+{
+	struct ceph_osd_request *req;
+	struct page **pages;
+	int ret;
+
+	req = ceph_osdc_alloc_request(osdc, NULL, 1, false, GFP_NOIO);
+	if (!req)
+		return -ENOMEM;
+
+	ceph_oid_copy(&req->r_base_oid, oid);
+	ceph_oloc_copy(&req->r_base_oloc, oloc);
+	req->r_flags = CEPH_OSD_FLAG_READ;
+
+	ret = ceph_osdc_alloc_messages(req, GFP_NOIO);
+	if (ret)
+		goto out_put_req;
+
+	pages = ceph_alloc_page_vector(1, GFP_NOIO);
+	if (IS_ERR(pages)) {
+		ret = PTR_ERR(pages);
+		goto out_put_req;
+	}
+
+	osd_req_op_init(req, 0, CEPH_OSD_OP_LIST_WATCHERS, 0);
+	ceph_osd_data_pages_init(osd_req_op_data(req, 0, list_watchers,
+						 response_data),
+				 pages, PAGE_SIZE, 0, false, true);
+
+	ceph_osdc_start_request(osdc, req, false);
+	ret = ceph_osdc_wait_request(osdc, req);
+	if (ret >= 0) {
+		void *p = page_address(pages[0]);
+		void *const end = p + req->r_ops[0].outdata_len;
+
+		ret = decode_watchers(&p, end, watchers, num_watchers);
+	}
+
+out_put_req:
+	ceph_osdc_put_request(req);
+	return ret;
+}
+EXPORT_SYMBOL(ceph_osdc_list_watchers);
+
 /*
  * Call all pending notify callbacks - for use after a watch is
  * unregistered, to make sure no more callbacks for it will be invoked
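
Not part of the patch, but for illustration: a minimal sketch of how a
client module (e.g. rbd, the intended user) might consume the new
export.  The function example_dump_watchers() and its messages are
hypothetical; only ceph_osdc_list_watchers(), struct ceph_watch_item
and the ENTITY_NAME() macro come from this patch/the tree.

	#include <linux/slab.h>
	#include <linux/ceph/osd_client.h>

	/*
	 * Hypothetical caller: list the watchers on an object and print
	 * them.  oid/oloc are assumed to be already populated, e.g. the
	 * header object of an RBD image.
	 */
	static void example_dump_watchers(struct ceph_osd_client *osdc,
					  struct ceph_object_id *oid,
					  struct ceph_object_locator *oloc)
	{
		struct ceph_watch_item *watchers;
		u32 num_watchers;
		u32 i;
		int ret;

		ret = ceph_osdc_list_watchers(osdc, oid, oloc, &watchers,
					      &num_watchers);
		if (ret) {
			pr_err("list_watchers failed: %d\n", ret);
			return;
		}

		for (i = 0; i < num_watchers; i++)
			pr_info("watcher %s%llu cookie %llu\n",
				ENTITY_NAME(watchers[i].name),
				watchers[i].cookie);

		kfree(watchers);	/* caller owns the array on success */
	}

The kfree() at the end matches the ownership rule in the comment above
ceph_osdc_list_watchers(): on success the callee hands the kcalloc'ed
array to the caller.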