From patchwork Mon Aug 17 01:34:05 2020
X-Patchwork-Submitter: Joe Perches
X-Patchwork-Id: 11716259
From: Joe Perches
To: Jeff Layton , Ilya Dryomov
Cc: ceph-devel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V2 2/6] ceph: Remove embedded function names from pr_debug uses
Date: Sun, 16 Aug 2020 18:34:05 -0700
X-Mailer: git-send-email 2.26.0
X-Mailing-List: ceph-devel@vger.kernel.org

Use "%s: " ..., __func__ so function renaming changes the logging too.

Signed-off-by: Joe Perches
---
 fs/ceph/addr.c       | 57 +++++++++++++-------------
 fs/ceph/caps.c       | 92 ++++++++++++++++++++++--------------
 fs/ceph/debugfs.c    |  2 +-
 fs/ceph/dir.c        | 16 ++++----
 fs/ceph/file.c       |  6 +--
 fs/ceph/inode.c      | 18 ++++-----
 fs/ceph/locks.c      | 15 ++++---
 fs/ceph/mds_client.c | 95 ++++++++++++++++++++++----------------------
 fs/ceph/snap.c       | 22 +++++-----
 fs/ceph/super.c      | 10 ++---
 fs/ceph/xattr.c      | 25 ++++++------
 11 files changed, 182 insertions(+), 176 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c index 12ae6f7874fb..4f0f177428c7 100644 --- a/fs/ceph/addr.c +++ b/fs/ceph/addr.c @@ -311,8 +311,8 @@ static void finish_read(struct ceph_osd_request *req) int num_pages; int i; - pr_debug("finish_read %p req %p rc %d bytes %d\n", - inode, req, rc, bytes); + pr_debug("%s: %p req %p rc %d bytes %d\n", + __func__, inode, req, rc, bytes); if (rc == -EBLACKLISTED) ceph_inode_to_client(inode)->blacklisted = true; @@ -333,8 +333,8 @@ static void finish_read(struct ceph_osd_request *req) int s = bytes < 0 ?
0 : bytes; zero_user_segment(page, s, PAGE_SIZE); } - pr_debug("finish_read %p uptodate %p idx %lu\n", - inode, page, page->index); + pr_debug("%s: %p uptodate %p idx %lu\n", + __func__, inode, page, page->index); flush_dcache_page(page); SetPageUptodate(page); ceph_readpage_to_fscache(inode, page); @@ -379,9 +379,10 @@ static int start_read(struct inode *inode, struct ceph_rw_context *rw_ctx, ret = ceph_try_get_caps(inode, CEPH_CAP_FILE_RD, want, true, &got); if (ret < 0) { - pr_debug("start_read %p, error getting cap\n", inode); + pr_debug("%s: %p, error getting cap\n", + __func__, inode); } else if (!(got & want)) { - pr_debug("start_read %p, no cache cap\n", inode); + pr_debug("%s: %p, no cache cap\n", __func__, inode); ret = 0; } if (ret <= 0) { @@ -409,8 +410,8 @@ static int start_read(struct inode *inode, struct ceph_rw_context *rw_ctx, break; } len = nr_pages << PAGE_SHIFT; - pr_debug("start_read %p nr_pages %d is %lld~%lld\n", - inode, nr_pages, off, len); + pr_debug("%s: %p nr_pages %d is %lld~%lld\n", + __func__, inode, nr_pages, off, len); vino = ceph_vino(inode); req = ceph_osdc_new_request(osdc, &ci->i_layout, vino, off, &len, 0, 1, CEPH_OSD_OP_READ, @@ -434,14 +435,14 @@ static int start_read(struct inode *inode, struct ceph_rw_context *rw_ctx, BUG_ON(PageLocked(page)); list_del(&page->lru); - pr_debug("start_read %p adding %p idx %lu\n", - inode, page, page->index); + pr_debug("%s: %p adding %p idx %lu\n", + __func__, inode, page, page->index); if (add_to_page_cache_lru(page, &inode->i_data, page->index, GFP_KERNEL)) { ceph_fscache_uncache_page(inode, page); put_page(page); - pr_debug("start_read %p add_to_page_cache failed %p\n", - inode, page); + pr_debug("%s: %p add_to_page_cache failed %p\n", + __func__, inode, page); nr_pages = i; if (nr_pages > 0) { len = nr_pages << PAGE_SHIFT; @@ -456,7 +457,8 @@ static int start_read(struct inode *inode, struct ceph_rw_context *rw_ctx, req->r_callback = finish_read; req->r_inode = inode; - 
pr_debug("start_read %p starting %p %lld~%lld\n", inode, req, off, len); + pr_debug("%s: %p starting %p %lld~%lld\n", + __func__, inode, req, off, len); ret = ceph_osdc_start_request(osdc, req, false); if (ret < 0) goto out_pages; @@ -798,7 +800,7 @@ static void writepages_finish(struct ceph_osd_request *req) struct ceph_fs_client *fsc = ceph_inode_to_client(inode); bool remove_page; - pr_debug("writepages_finish %p rc %d\n", inode, rc); + pr_debug("%s: %p rc %d\n", __func__, inode, rc); if (rc < 0) { mapping_set_error(mapping, rc); ceph_set_error_write(ci); @@ -853,8 +855,9 @@ static void writepages_finish(struct ceph_osd_request *req) unlock_page(page); } - pr_debug("writepages_finish %p wrote %llu bytes cleaned %d pages\n", - inode, osd_data->length, rc >= 0 ? num_pages : 0); + pr_debug("%s: %p wrote %llu bytes cleaned %d pages\n", + __func__, inode, osd_data->length, + rc >= 0 ? num_pages : 0); release_pages(osd_data->pages, num_pages); } @@ -1952,11 +1955,10 @@ static int __ceph_pool_perm_get(struct ceph_inode_info *ci, goto out; if (pool_ns) - pr_debug("__ceph_pool_perm_get pool %lld ns %.*s no perm cached\n", - pool, (int)pool_ns->len, pool_ns->str); + pr_debug("%s: %lld ns %.*s no perm cached\n", + __func__, pool, (int)pool_ns->len, pool_ns->str); else - pr_debug("__ceph_pool_perm_get pool %lld no perm cached\n", - pool); + pr_debug("%s: %lld no perm cached\n", __func__, pool); down_write(&mdsc->pool_perm_rwsem); p = &mdsc->pool_perm_tree.rb_node; @@ -2083,11 +2085,10 @@ static int __ceph_pool_perm_get(struct ceph_inode_info *ci, if (!err) err = have; if (pool_ns) - pr_debug("__ceph_pool_perm_get pool %lld ns %.*s result = %d\n", - pool, (int)pool_ns->len, pool_ns->str, err); + pr_debug("%s: pool %lld ns %.*s result = %d\n", + __func__, pool, (int)pool_ns->len, pool_ns->str, err); else - pr_debug("__ceph_pool_perm_get pool %lld result = %d\n", - pool, err); + pr_debug("%s: pool %lld result = %d\n", __func__, pool, err); return err; } @@ -2118,13 +2119,13 @@ 
int ceph_pool_perm_check(struct inode *inode, int need) check: if (flags & CEPH_I_POOL_PERM) { if ((need & CEPH_CAP_FILE_RD) && !(flags & CEPH_I_POOL_RD)) { - pr_debug("ceph_pool_perm_check pool %lld no read perm\n", - pool); + pr_debug("%s: pool %lld no read perm\n", + __func__, pool); return -EPERM; } if ((need & CEPH_CAP_FILE_WR) && !(flags & CEPH_I_POOL_WR)) { - pr_debug("ceph_pool_perm_check pool %lld no write perm\n", - pool); + pr_debug("%s: pool %lld no write perm\n", + __func__, pool); return -EPERM; } return 0; diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c index 727b71b0ba1e..9f378c546e4d 100644 --- a/fs/ceph/caps.c +++ b/fs/ceph/caps.c @@ -492,8 +492,8 @@ static void __cap_set_timeouts(struct ceph_mds_client *mdsc, struct ceph_mount_options *opt = mdsc->fsc->mount_options; ci->i_hold_caps_max = round_jiffies(jiffies + opt->caps_wanted_delay_max * HZ); - pr_debug("__cap_set_timeouts %p %lu\n", - &ci->vfs_inode, ci->i_hold_caps_max - jiffies); + pr_debug("%s: %p %lu\n", + __func__, &ci->vfs_inode, ci->i_hold_caps_max - jiffies); } /* @@ -791,8 +791,8 @@ static int __cap_is_valid(struct ceph_cap *cap) spin_unlock(&cap->session->s_gen_ttl_lock); if (cap->cap_gen < gen || time_after_eq(jiffies, ttl)) { - pr_debug("__cap_is_valid %p cap %p issued %s but STALE (gen %u vs %u)\n", - &cap->ci->vfs_inode, + pr_debug("%s: %p cap %p issued %s but STALE (gen %u vs %u)\n", + __func__, &cap->ci->vfs_inode, cap, ceph_cap_string(cap->issued), cap->cap_gen, gen); return 0; } @@ -817,8 +817,9 @@ int __ceph_caps_issued(struct ceph_inode_info *ci, int *implemented) cap = rb_entry(p, struct ceph_cap, ci_node); if (!__cap_is_valid(cap)) continue; - pr_debug("__ceph_caps_issued %p cap %p issued %s\n", - &ci->vfs_inode, cap, ceph_cap_string(cap->issued)); + pr_debug("%s: %p cap %p issued %s\n", + __func__, &ci->vfs_inode, cap, + ceph_cap_string(cap->issued)); have |= cap->issued; if (implemented) *implemented |= cap->implemented; @@ -865,12 +866,12 @@ static void 
__touch_cap(struct ceph_cap *cap) spin_lock(&s->s_cap_lock); if (!s->s_cap_iterator) { - pr_debug("__touch_cap %p cap %p mds%d\n", - &cap->ci->vfs_inode, cap, s->s_mds); + pr_debug("%s: %p cap %p mds%d\n", + __func__, &cap->ci->vfs_inode, cap, s->s_mds); list_move_tail(&cap->session_caps, &s->s_caps); } else { - pr_debug("__touch_cap %p cap %p mds%d NOP, iterating over caps\n", - &cap->ci->vfs_inode, cap, s->s_mds); + pr_debug("%s: %p cap %p mds%d NOP, iterating over caps\n", + __func__, &cap->ci->vfs_inode, cap, s->s_mds); } spin_unlock(&s->s_cap_lock); } @@ -887,8 +888,8 @@ int __ceph_caps_issued_mask(struct ceph_inode_info *ci, int mask, int touch) int have = ci->i_snap_caps; if ((have & mask) == mask) { - pr_debug("__ceph_caps_issued_mask ino 0x%lx snap issued %s (mask %s)\n", - ci->vfs_inode.i_ino, + pr_debug("%s: ino 0x%lx snap issued %s (mask %s)\n", + __func__, ci->vfs_inode.i_ino, ceph_cap_string(have), ceph_cap_string(mask)); return 1; @@ -899,8 +900,8 @@ int __ceph_caps_issued_mask(struct ceph_inode_info *ci, int mask, int touch) if (!__cap_is_valid(cap)) continue; if ((cap->issued & mask) == mask) { - pr_debug("__ceph_caps_issued_mask ino 0x%lx cap %p issued %s (mask %s)\n", - ci->vfs_inode.i_ino, cap, + pr_debug("%s: ino 0x%lx cap %p issued %s (mask %s)\n", + __func__, ci->vfs_inode.i_ino, cap, ceph_cap_string(cap->issued), ceph_cap_string(mask)); if (touch) @@ -911,8 +912,8 @@ int __ceph_caps_issued_mask(struct ceph_inode_info *ci, int mask, int touch) /* does a combination of caps satisfy mask? 
*/ have |= cap->issued; if ((have & mask) == mask) { - pr_debug("__ceph_caps_issued_mask ino 0x%lx combo issued %s (mask %s)\n", - ci->vfs_inode.i_ino, + pr_debug("%s: ino 0x%lx combo issued %s (mask %s)\n", + __func__, ci->vfs_inode.i_ino, ceph_cap_string(cap->issued), ceph_cap_string(mask)); if (touch) { @@ -977,8 +978,8 @@ int ceph_caps_revoking(struct ceph_inode_info *ci, int mask) spin_lock(&ci->i_ceph_lock); ret = __ceph_caps_revoking_other(ci, NULL, mask); spin_unlock(&ci->i_ceph_lock); - pr_debug("ceph_caps_revoking %p %s = %d\n", - inode, ceph_cap_string(mask), ret); + pr_debug("%s: %p %s = %d\n", + __func__, inode, ceph_cap_string(mask), ret); return ret; } @@ -1144,7 +1145,7 @@ void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release) ceph_sb_to_client(ci->vfs_inode.i_sb)->mdsc; int removed = 0; - pr_debug("__ceph_remove_cap %p from %p\n", cap, &ci->vfs_inode); + pr_debug("%s: %p from %p\n", __func__, cap, &ci->vfs_inode); /* remove from inode's cap rbtree, and clear auth cap */ rb_erase(&cap->ci_node, &ci->i_caps); @@ -1157,8 +1158,8 @@ void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release) spin_lock(&session->s_cap_lock); if (session->s_cap_iterator == cap) { /* not yet, we are iterating over this very cap */ - pr_debug("__ceph_remove_cap delaying %p removal from session %p\n", - cap, cap->session); + pr_debug("%s: delaying %p removal from session %p\n", + __func__, cap, cap->session); } else { list_del_init(&cap->session_caps); session->s_nr_caps--; @@ -1234,8 +1235,8 @@ static int send_cap_msg(struct cap_msg_args *arg) size_t extra_len; struct ceph_osd_client *osdc = &arg->session->s_mdsc->fsc->client->osdc; - pr_debug("send_cap_msg %s %llx %llx caps %s wanted %s dirty %s seq %u/%u tid %llu/%llu mseq %u follows %lld size %llu/%llu xattr_ver %llu xattr_len %d\n", - ceph_cap_op_name(arg->op), + pr_debug("%s: %s %llx %llx caps %s wanted %s dirty %s seq %u/%u tid %llu/%llu mseq %u follows %lld size %llu/%llu xattr_ver %llu xattr_len 
%d\n", + __func__, ceph_cap_op_name(arg->op), arg->cid, arg->ino, ceph_cap_string(arg->caps), ceph_cap_string(arg->wanted), ceph_cap_string(arg->dirty), arg->seq, arg->issue_seq, arg->flush_tid, arg->oldest_flush_tid, @@ -1636,7 +1637,7 @@ void ceph_flush_snaps(struct ceph_inode_info *ci, struct ceph_mds_session *session = NULL; int mds; - pr_debug("ceph_flush_snaps %p\n", inode); + pr_debug("%s: %p\n", __func__, inode); if (psession) session = *psession; retry: @@ -1827,8 +1828,8 @@ static u64 __mark_caps_flushing(struct inode *inode, BUG_ON(!ci->i_prealloc_cap_flush); flushing = ci->i_dirty_caps; - pr_debug("__mark_caps_flushing flushing %s, flushing_caps %s -> %s\n", - ceph_cap_string(flushing), + pr_debug("%s: flushing %s, flushing_caps %s -> %s\n", + __func__, ceph_cap_string(flushing), ceph_cap_string(ci->i_flushing_caps), ceph_cap_string(ci->i_flushing_caps | flushing)); ci->i_flushing_caps |= flushing; @@ -1872,12 +1873,12 @@ static int try_nonblocking_invalidate(struct inode *inode) if (inode->i_data.nrpages == 0 && invalidating_gen == ci->i_rdcache_gen) { /* success. */ - pr_debug("try_nonblocking_invalidate %p success\n", inode); + pr_debug("%s: %p success\n", __func__, inode); /* save any racing async invalidate some trouble */ ci->i_rdcache_revoking = ci->i_rdcache_gen - 1; return 0; } - pr_debug("try_nonblocking_invalidate %p failed\n", inode); + pr_debug("%s: %p failed\n", __func__, inode); return -1; } @@ -2288,8 +2289,10 @@ static int unsafe_request_wait(struct inode *inode) } spin_unlock(&ci->i_unsafe_lock); - pr_debug("unsafe_request_wait %p wait on tid %llu %llu\n", - inode, req1 ? req1->r_tid : 0ULL, req2 ? req2->r_tid : 0ULL); + pr_debug("%s: %p wait on tid %llu %llu\n", + __func__, inode, + req1 ? req1->r_tid : 0ULL, + req2 ? 
req2->r_tid : 0ULL); if (req1) { ret = !wait_for_completion_timeout(&req1->r_safe_completion, ceph_timeout_jiffies(req1->r_timeout)); @@ -2644,7 +2647,7 @@ static int try_get_cap_refs(struct inode *inode, int need, int want, if ((flags & CHECK_FILELOCK) && (ci->i_ceph_flags & CEPH_I_ERROR_FILELOCK)) { - pr_debug("try_get_cap_refs %p error filelock\n", inode); + pr_debug("%s: %p error filelock\n", __func__, inode); ret = -EIO; goto out_unlock; } @@ -3192,7 +3195,7 @@ static void invalidate_aliases(struct inode *inode) { struct dentry *dn, *prev = NULL; - pr_debug("invalidate_aliases inode %p\n", inode); + pr_debug("%s: inode %p\n", __func__, inode); d_prune_aliases(inode); /* * For non-directory inode, d_find_alias() only returns @@ -3263,8 +3266,9 @@ static void handle_cap_grant(struct inode *inode, bool deleted_inode = false; bool fill_inline = false; - pr_debug("handle_cap_grant inode %p cap %p mds%d seq %d %s\n", - inode, cap, session->s_mds, seq, ceph_cap_string(newcaps)); + pr_debug("%s: inode %p cap %p mds%d seq %d %s\n", + __func__, inode, cap, session->s_mds, seq, + ceph_cap_string(newcaps)); pr_debug("size %llu max_size %llu, i_size %llu\n", size, max_size, inode->i_size); @@ -3574,8 +3578,8 @@ static void handle_cap_flush_ack(struct inode *inode, u64 flush_tid, } } - pr_debug("handle_cap_flush_ack inode %p mds%d seq %d on %s cleaned %s, flushing %s -> %s\n", - inode, session->s_mds, seq, ceph_cap_string(dirty), + pr_debug("%s: inode %p mds%d seq %d on %s cleaned %s, flushing %s -> %s\n", + __func__, inode, session->s_mds, seq, ceph_cap_string(dirty), ceph_cap_string(cleaned), ceph_cap_string(ci->i_flushing_caps), ceph_cap_string(ci->i_flushing_caps & ~cleaned)); @@ -3655,8 +3659,8 @@ static void handle_cap_flushsnap_ack(struct inode *inode, u64 flush_tid, bool wake_ci = false; bool wake_mdsc = false; - pr_debug("handle_cap_flushsnap_ack inode %p ci %p mds%d follows %lld\n", - inode, ci, session->s_mds, follows); + pr_debug("%s: inode %p ci %p mds%d 
follows %lld\n", + __func__, inode, ci, session->s_mds, follows); spin_lock(&ci->i_ceph_lock); list_for_each_entry(capsnap, &ci->i_cap_snaps, ci_item) { @@ -3726,8 +3730,8 @@ static bool handle_cap_trunc(struct inode *inode, issued |= implemented | dirty; - pr_debug("handle_cap_trunc inode %p mds%d seq %d to %lld seq %d\n", - inode, mds, seq, truncate_size, truncate_seq); + pr_debug("%s: %p mds%d seq %d to %lld seq %d\n", + __func__, inode, mds, seq, truncate_size, truncate_seq); queue_trunc = ceph_fill_file_size(inode, issued, truncate_seq, truncate_size, size); return queue_trunc; @@ -3765,8 +3769,8 @@ static void handle_cap_export(struct inode *inode, struct ceph_mds_caps *ex, target = -1; } - pr_debug("handle_cap_export inode %p ci %p mds%d mseq %d target %d\n", - inode, ci, mds, mseq, target); + pr_debug("%s: inode %p ci %p mds%d mseq %d target %d\n", + __func__, inode, ci, mds, mseq, target); retry: spin_lock(&ci->i_ceph_lock); cap = __get_cap_for_mds(ci, mds); @@ -3898,8 +3902,8 @@ static void handle_cap_import(struct ceph_mds_client *mdsc, peer = -1; } - pr_debug("handle_cap_import inode %p ci %p mds%d mseq %d peer %d\n", - inode, ci, mds, mseq, peer); + pr_debug("%s: inode %p ci %p mds%d mseq %d peer %d\n", + __func__, inode, ci, mds, mseq, peer); retry: cap = __get_cap_for_mds(ci, mds); if (!cap) { diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c index 244428de3c4b..298b0df741cc 100644 --- a/fs/ceph/debugfs.c +++ b/fs/ceph/debugfs.c @@ -338,7 +338,7 @@ void ceph_fs_debugfs_init(struct ceph_fs_client *fsc) { char name[100]; - pr_debug("ceph_fs_debugfs_init\n"); + pr_debug("%s\n", __func__); fsc->debugfs_congestion_kb = debugfs_create_file("writeback_congestion_kb", 0600, diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c index 911b905cc181..9178b19f93f9 100644 --- a/fs/ceph/dir.c +++ b/fs/ceph/dir.c @@ -120,7 +120,7 @@ static int note_last_dentry(struct ceph_dir_file_info *dfi, const char *name, memcpy(dfi->last_name, name, len); dfi->last_name[len] = 0; 
dfi->next_offset = next_offset; - pr_debug("note_last_dentry '%s'\n", dfi->last_name); + pr_debug("%s: '%s'\n", __func__, dfi->last_name); return 0; } @@ -191,8 +191,8 @@ static int __dcache_readdir(struct file *file, struct dir_context *ctx, u64 idx = 0; int err = 0; - pr_debug("__dcache_readdir %p v%u at %llx\n", - dir, (unsigned int)shared_gen, ctx->pos); + pr_debug("%s: %p v%u at %llx\n", + __func__, dir, (unsigned int)shared_gen, ctx->pos); /* search start position */ if (ctx->pos > 2) { @@ -222,7 +222,7 @@ static int __dcache_readdir(struct file *file, struct dir_context *ctx, dput(dentry); } - pr_debug("__dcache_readdir %p cache idx %llu\n", dir, idx); + pr_debug("%s: %p cache idx %llu\n", __func__, dir, idx); } @@ -1597,7 +1597,7 @@ static int dentry_lease_is_valid(struct dentry *dentry, unsigned int flags) CEPH_MDS_LEASE_RENEW, seq); ceph_put_mds_session(session); } - pr_debug("dentry_lease_is_valid - dentry %p = %d\n", dentry, valid); + pr_debug("%s: dentry %p = %d\n", __func__, dentry, valid); return valid; } @@ -1661,8 +1661,8 @@ static int dir_lease_is_valid(struct inode *dir, struct dentry *dentry, valid = 0; spin_unlock(&dentry->d_lock); } - pr_debug("dir_lease_is_valid dir %p v%u dentry %p = %d\n", - dir, (unsigned int)atomic_read(&ci->i_shared_gen), + pr_debug("%s: dir %p v%u dentry %p = %d\n", + __func__, dir, (unsigned int)atomic_read(&ci->i_shared_gen), dentry, valid); return valid; } @@ -1825,7 +1825,7 @@ static void ceph_d_prune(struct dentry *dentry) struct ceph_inode_info *dir_ci; struct ceph_dentry_info *di; - pr_debug("ceph_d_prune %pd %p\n", dentry, dentry); + pr_debug("%s: %pd %p\n", __func__, dentry, dentry); /* do we have a valid parent? 
*/ if (IS_ROOT(dentry)) diff --git a/fs/ceph/file.c b/fs/ceph/file.c index 7d7644f8862c..d75b466780ca 100644 --- a/fs/ceph/file.c +++ b/fs/ceph/file.c @@ -1020,7 +1020,7 @@ static void ceph_aio_complete(struct inode *inode, if (!ret) ret = aio_req->total_len; - pr_debug("ceph_aio_complete %p rc %d\n", inode, ret); + pr_debug("%s: %p rc %d\n", __func__, inode, ret); if (ret >= 0 && aio_req->write) { int dirty; @@ -1062,8 +1062,8 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req) BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_BVECS); BUG_ON(!osd_data->num_bvecs); - pr_debug("ceph_aio_complete_req %p rc %d bytes %u\n", - inode, rc, osd_data->bvec_pos.iter.bi_size); + pr_debug("%s: %p rc %d bytes %u\n", + __func__, inode, rc, osd_data->bvec_pos.iter.bi_size); /* r_start_latency == 0 means the request was not submitted */ if (req->r_start_latency) { diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c index b2ff9f03a46e..8658070c59ee 100644 --- a/fs/ceph/inode.c +++ b/fs/ceph/inode.c @@ -1832,10 +1832,10 @@ void ceph_queue_writeback(struct inode *inode) ihold(inode); if (queue_work(ceph_inode_to_client(inode)->inode_wq, &ci->i_work)) { - pr_debug("ceph_queue_writeback %p\n", inode); + pr_debug("%s: %p\n", __func__, inode); } else { - pr_debug("ceph_queue_writeback %p already queued, mask=%lx\n", - inode, ci->i_work_mask); + pr_debug("%s: %p already queued, mask=%lx\n", + __func__, inode, ci->i_work_mask); iput(inode); } } @@ -1851,10 +1851,10 @@ void ceph_queue_invalidate(struct inode *inode) ihold(inode); if (queue_work(ceph_inode_to_client(inode)->inode_wq, &ceph_inode(inode)->i_work)) { - pr_debug("ceph_queue_invalidate %p\n", inode); + pr_debug("%s: %p\n", __func__, inode); } else { - pr_debug("ceph_queue_invalidate %p already queued, mask=%lx\n", - inode, ci->i_work_mask); + pr_debug("%s: %p already queued, mask=%lx\n", + __func__, inode, ci->i_work_mask); iput(inode); } } @@ -1871,10 +1871,10 @@ void ceph_queue_vmtruncate(struct inode *inode) ihold(inode); 
if (queue_work(ceph_inode_to_client(inode)->inode_wq, &ci->i_work)) { - pr_debug("ceph_queue_vmtruncate %p\n", inode); + pr_debug("%s: %p\n", __func__, inode); } else { - pr_debug("ceph_queue_vmtruncate %p already queued, mask=%lx\n", - inode, ci->i_work_mask); + pr_debug("%s: %p already queued, mask=%lx\n", + __func__, inode, ci->i_work_mask); iput(inode); } } diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c index 8f6819d2dc99..a414b8d51c3d 100644 --- a/fs/ceph/locks.c +++ b/fs/ceph/locks.c @@ -98,8 +98,8 @@ static int ceph_lock_message(u8 lock_type, u16 operation, struct inode *inode, owner = secure_addr(fl->fl_owner); - pr_debug("ceph_lock_message: rule: %d, op: %d, owner: %llx, pid: %llu, start: %llu, length: %llu, wait: %d, type: %d\n", - (int)lock_type, + pr_debug("%s: rule: %d, op: %d, owner: %llx, pid: %llu, start: %llu, length: %llu, wait: %d, type: %d\n", + __func__, (int)lock_type, (int)operation, owner, (u64)fl->fl_pid, fl->fl_start, length, wait, fl->fl_type); @@ -134,8 +134,8 @@ static int ceph_lock_message(u8 lock_type, u16 operation, struct inode *inode, } ceph_mdsc_put_request(req); - pr_debug("ceph_lock_message: rule: %d, op: %d, pid: %llu, start: %llu, length: %llu, wait: %d, type: %d, err code %d\n", - (int)lock_type, + pr_debug("%s: rule: %d, op: %d, pid: %llu, start: %llu, length: %llu, wait: %d, type: %d, err code %d\n", + __func__, (int)lock_type, (int)operation, (u64)fl->fl_pid, fl->fl_start, length, wait, fl->fl_type, err); return err; @@ -161,8 +161,7 @@ static int ceph_lock_wait_for_completion(struct ceph_mds_client *mdsc, if (!err) return 0; - pr_debug("ceph_lock_wait_for_completion: request %llu was interrupted\n", - req->r_tid); + pr_debug("%s: request %llu was interrupted\n", __func__, req->r_tid); mutex_lock(&mdsc->mutex); if (test_bit(CEPH_MDS_R_GOT_RESULT, &req->r_req_flags)) { @@ -244,7 +243,7 @@ int ceph_lock(struct file *file, int cmd, struct file_lock *fl) if (__mandatory_lock(file->f_mapping->host) && fl->fl_type != F_UNLCK) 
return -ENOLCK; - pr_debug("ceph_lock, fl_owner: %p\n", fl->fl_owner); + pr_debug("%s: fl_owner: %p\n", __func__, fl->fl_owner); /* set wait bit as appropriate, then make command as Ceph expects it*/ if (IS_GETLK(cmd)) @@ -309,7 +308,7 @@ int ceph_flock(struct file *file, int cmd, struct file_lock *fl) if (fl->fl_type & LOCK_MAND) return -EOPNOTSUPP; - pr_debug("ceph_flock, fl_file: %p\n", fl->fl_file); + pr_debug("%s: fl_file: %p\n", __func__, fl->fl_file); spin_lock(&ci->i_ceph_lock); if (ci->i_ceph_flags & CEPH_I_ERROR_FILELOCK) { diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c index 1e1c51e396bd..658c24b7c119 100644 --- a/fs/ceph/mds_client.c +++ b/fs/ceph/mds_client.c @@ -890,7 +890,7 @@ static void __register_request(struct ceph_mds_client *mdsc, return; } } - pr_debug("__register_request %p tid %lld\n", req, req->r_tid); + pr_debug("%s: %p tid %lld\n", __func__, req, req->r_tid); ceph_mdsc_get_request(req); insert_request(&mdsc->request_tree, req); @@ -1478,7 +1478,7 @@ static void cleanup_session_requests(struct ceph_mds_client *mdsc, struct rb_node *p; struct ceph_inode_info *ci; - pr_debug("cleanup_session_requests mds%d\n", session->s_mds); + pr_debug("%s: mds%d\n", __func__, session->s_mds); mutex_lock(&mdsc->mutex); while (!list_empty(&session->s_unsafe)) { req = list_first_entry(&session->s_unsafe, @@ -1688,7 +1688,7 @@ static void remove_session_caps(struct ceph_mds_session *session) struct super_block *sb = fsc->sb; LIST_HEAD(dispose); - pr_debug("remove_session_caps on %p\n", session); + pr_debug("%s: on %p\n", __func__, session); ceph_iterate_session_caps(session, remove_session_caps_cb, fsc); wake_up_all(&fsc->mdsc->cap_flushing_wq); @@ -1795,13 +1795,13 @@ static int send_renew_caps(struct ceph_mds_client *mdsc, * with its clients. 
*/ state = ceph_mdsmap_get_state(mdsc->mdsmap, session->s_mds); if (state < CEPH_MDS_STATE_RECONNECT) { - pr_debug("send_renew_caps ignoring mds%d (%s)\n", - session->s_mds, ceph_mds_state_name(state)); + pr_debug("%s: ignoring mds%d (%s)\n", + __func__, session->s_mds, ceph_mds_state_name(state)); return 0; } - pr_debug("send_renew_caps to mds%d (%s)\n", - session->s_mds, ceph_mds_state_name(state)); + pr_debug("%s: to mds%d (%s)\n", + __func__, session->s_mds, ceph_mds_state_name(state)); msg = create_session_msg(CEPH_SESSION_REQUEST_RENEWCAPS, ++session->s_renew_seq); if (!msg) @@ -1815,8 +1815,8 @@ static int send_flushmsg_ack(struct ceph_mds_client *mdsc, { struct ceph_msg *msg; - pr_debug("send_flushmsg_ack to mds%d (%s)s seq %lld\n", - session->s_mds, + pr_debug("%s: to mds%d (%s)s seq %lld\n", + __func__, session->s_mds, ceph_session_state_name(session->s_state), seq); msg = create_session_msg(CEPH_SESSION_FLUSHMSG_ACK, seq); if (!msg) @@ -1851,8 +1851,8 @@ static void renewed_caps(struct ceph_mds_client *mdsc, pr_info("mds%d caps still stale\n", session->s_mds); } } - pr_debug("renewed_caps mds%d ttl now %lu, was %s, now %s\n", - session->s_mds, session->s_cap_ttl, + pr_debug("%s: mds%d ttl now %lu, was %s, now %s\n", + __func__, session->s_mds, session->s_cap_ttl, was_stale ? "stale" : "fresh", time_before(jiffies, session->s_cap_ttl) ? 
"stale" : "fresh"); spin_unlock(&session->s_cap_lock); @@ -1868,9 +1868,9 @@ static int request_close_session(struct ceph_mds_session *session) { struct ceph_msg *msg; - pr_debug("request_close_session mds%d state %s seq %lld\n", - session->s_mds, ceph_session_state_name(session->s_state), - session->s_seq); + pr_debug("%s: mds%d state %s seq %lld\n", + __func__, session->s_mds, + ceph_session_state_name(session->s_state), session->s_seq); msg = create_session_msg(CEPH_SESSION_REQUEST_CLOSE, session->s_seq); if (!msg) return -ENOMEM; @@ -1938,8 +1938,9 @@ static int trim_caps_cb(struct inode *inode, struct ceph_cap *cap, void *arg) wanted = __ceph_caps_file_wanted(ci); oissued = __ceph_caps_issued_other(ci, cap); - pr_debug("trim_caps_cb %p cap %p mine %s oissued %s used %s wanted %s\n", - inode, cap, ceph_cap_string(mine), ceph_cap_string(oissued), + pr_debug("%s: %p cap %p mine %s oissued %s used %s wanted %s\n", + __func__, inode, cap, + ceph_cap_string(mine), ceph_cap_string(oissued), ceph_cap_string(used), ceph_cap_string(wanted)); if (cap == ci->i_auth_cap) { if (ci->i_dirty_caps || ci->i_flushing_caps || @@ -1980,8 +1981,8 @@ static int trim_caps_cb(struct inode *inode, struct ceph_cap *cap, void *arg) count = atomic_read(&inode->i_count); if (count == 1) (*remaining)--; - pr_debug("trim_caps_cb %p cap %p pruned, count now %d\n", - inode, cap, count); + pr_debug("%s: %p cap %p pruned, count now %d\n", + __func__, inode, cap, count); } else { dput(dentry); } @@ -2028,8 +2029,8 @@ static int check_caps_flush(struct ceph_mds_client *mdsc, list_first_entry(&mdsc->cap_flush_list, struct ceph_cap_flush, g_list); if (cf->tid <= want_flush_tid) { - pr_debug("check_caps_flush still flushing tid %llu <= %llu\n", - cf->tid, want_flush_tid); + pr_debug("%s: still flushing tid %llu <= %llu\n", + __func__, cf->tid, want_flush_tid); ret = 0; } } @@ -2045,12 +2046,12 @@ static int check_caps_flush(struct ceph_mds_client *mdsc, static void wait_caps_flush(struct 
ceph_mds_client *mdsc, u64 want_flush_tid) { - pr_debug("check_caps_flush want %llu\n", want_flush_tid); + pr_debug("%s: want %llu\n", __func__, want_flush_tid); wait_event(mdsc->cap_flushing_wq, check_caps_flush(mdsc, want_flush_tid)); - pr_debug("check_caps_flush ok, flushed thru %llu\n", want_flush_tid); + pr_debug("%s: ok, flushed thru %llu\n", __func__, want_flush_tid); } /* @@ -2863,7 +2864,7 @@ static void __do_request(struct ceph_mds_client *mdsc, ceph_put_mds_session(session); finish: if (err) { - pr_debug("__do_request early error %d\n", err); + pr_debug("%s: early error %d\n", __func__, err); req->r_err = err; complete_request(mdsc, req); __unregister_request(mdsc, req); @@ -2900,7 +2901,7 @@ static void kick_requests(struct ceph_mds_client *mdsc, int mds) struct ceph_mds_request *req; struct rb_node *p = rb_first(&mdsc->request_tree); - pr_debug("kick_requests mds%d\n", mds); + pr_debug("%s: mds%d\n", __func__, mds); while (p) { req = rb_entry(p, struct ceph_mds_request, r_node); p = rb_next(p); @@ -3086,11 +3087,11 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg) mutex_lock(&mdsc->mutex); req = lookup_get_request(mdsc, tid); if (!req) { - pr_debug("handle_reply on unknown tid %llu\n", tid); + pr_debug("%s: on unknown tid %llu\n", __func__, tid); mutex_unlock(&mdsc->mutex); return; } - pr_debug("handle_reply %p\n", req); + pr_debug("%s: %p\n", __func__, req); /* correct session? 
*/ if (req->r_session != session) { @@ -3173,7 +3174,7 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg) list_add_tail(&req->r_unsafe_item, &req->r_session->s_unsafe); } - pr_debug("handle_reply tid %lld result %d\n", tid, result); + pr_debug("%s: tid %lld result %d\n", __func__, tid, result); rinfo = &req->r_reply_info; if (test_bit(CEPHFS_FEATURE_REPLY_ENCODING, &session->s_features)) err = parse_reply_info(session, msg, rinfo, (u64)-1); @@ -3385,8 +3386,8 @@ static void handle_session(struct ceph_mds_session *session, mutex_lock(&session->s_mutex); - pr_debug("handle_session mds%d %s %p state %s seq %llu\n", - mds, ceph_session_op_name(op), session, + pr_debug("%s: mds%d %s %p state %s seq %llu\n", + __func__, mds, ceph_session_op_name(op), session, ceph_session_state_name(session->s_state), seq); if (session->s_state == CEPH_MDS_SESSION_HUNG) { @@ -3516,7 +3517,7 @@ static void replay_unsafe_requests(struct ceph_mds_client *mdsc, struct ceph_mds_request *req, *nreq; struct rb_node *p; - pr_debug("replay_unsafe_requests mds%d\n", session->s_mds); + pr_debug("%s: mds%d\n", __func__, session->s_mds); mutex_lock(&mdsc->mutex); list_for_each_entry_safe(req, nreq, &session->s_unsafe, r_unsafe_item) @@ -4058,8 +4059,8 @@ static void check_new_map(struct ceph_mds_client *mdsc, int oldstate, newstate; struct ceph_mds_session *s; - pr_debug("check_new_map new %u old %u\n", - newmap->m_epoch, oldmap->m_epoch); + pr_debug("%s: new %u old %u\n", + __func__, newmap->m_epoch, oldmap->m_epoch); for (i = 0; i < oldmap->possible_max_rank && i < mdsc->max_sessions; i++) { if (!mdsc->sessions[i]) @@ -4068,8 +4069,8 @@ static void check_new_map(struct ceph_mds_client *mdsc, oldstate = ceph_mdsmap_get_state(oldmap, i); newstate = ceph_mdsmap_get_state(newmap, i); - pr_debug("check_new_map mds%d state %s%s -> %s%s (session %s)\n", - i, ceph_mds_state_name(oldstate), + pr_debug("%s: mds%d state %s%s -> %s%s (session %s)\n", + __func__, i, 
ceph_mds_state_name(oldstate), ceph_mdsmap_is_laggy(oldmap, i) ? " (laggy)" : "", ceph_mds_state_name(newstate), ceph_mdsmap_is_laggy(newmap, i) ? " (laggy)" : "", @@ -4184,7 +4185,7 @@ static void handle_lease(struct ceph_mds_client *mdsc, struct qstr dname; int release = 0; - pr_debug("handle_lease from mds%d\n", mds); + pr_debug("%s: from mds%d\n", __func__, mds); /* decode */ if (msg->front.iov_len < sizeof(*h) + sizeof(u32)) @@ -4199,15 +4200,15 @@ static void handle_lease(struct ceph_mds_client *mdsc, /* lookup inode */ inode = ceph_find_inode(sb, vino); - pr_debug("handle_lease %s, ino %llx %p %.*s\n", - ceph_lease_op_name(h->action), vino.ino, inode, + pr_debug("%s: %s, ino %llx %p %.*s\n", + __func__, ceph_lease_op_name(h->action), vino.ino, inode, dname.len, dname.name); mutex_lock(&session->s_mutex); session->s_seq++; if (!inode) { - pr_debug("handle_lease no inode %llx\n", vino.ino); + pr_debug("%s: no inode %llx\n", __func__, vino.ino); goto release; } @@ -4398,7 +4399,7 @@ static void delayed_work(struct work_struct *work) int renew_interval; int renew_caps; - pr_debug("mdsc delayed_work\n"); + pr_debug("%s: mdsc\n", __func__); if (mdsc->stopping) return; @@ -4542,21 +4543,21 @@ static void wait_requests(struct ceph_mds_client *mdsc) if (__get_oldest_req(mdsc)) { mutex_unlock(&mdsc->mutex); - pr_debug("wait_requests waiting for requests\n"); + pr_debug("%s: waiting for requests\n", __func__); wait_for_completion_timeout(&mdsc->safe_umount_waiters, ceph_timeout_jiffies(opts->mount_timeout)); /* tear down remaining requests */ mutex_lock(&mdsc->mutex); while ((req = __get_oldest_req(mdsc))) { - pr_debug("wait_requests timed out on tid %llu\n", - req->r_tid); + pr_debug("%s: timed out on tid %llu\n", + __func__, req->r_tid); list_del_init(&req->r_wait); __unregister_request(mdsc, req); } } mutex_unlock(&mdsc->mutex); - pr_debug("wait_requests done\n"); + pr_debug("%s: done\n", __func__); } /* @@ -4590,7 +4591,7 @@ static void wait_unsafe_requests(struct 
ceph_mds_client *mdsc, u64 want_tid) struct rb_node *n; mutex_lock(&mdsc->mutex); - pr_debug("wait_unsafe_requests want %lld\n", want_tid); + pr_debug("%s: want %lld\n", __func__, want_tid); restart: req = __get_oldest_req(mdsc); while (req && req->r_tid <= want_tid) { @@ -4607,8 +4608,8 @@ static void wait_unsafe_requests(struct ceph_mds_client *mdsc, u64 want_tid) if (nextreq) ceph_mdsc_get_request(nextreq); mutex_unlock(&mdsc->mutex); - pr_debug("wait_unsafe_requests wait on %llu (want %llu)\n", - req->r_tid, want_tid); + pr_debug("%s: wait on %llu (want %llu)\n", + __func__, req->r_tid, want_tid); wait_for_completion(&req->r_safe_completion); mutex_lock(&mdsc->mutex); ceph_mdsc_put_request(req); @@ -4624,7 +4625,7 @@ static void wait_unsafe_requests(struct ceph_mds_client *mdsc, u64 want_tid) req = nextreq; } mutex_unlock(&mdsc->mutex); - pr_debug("wait_unsafe_requests done\n"); + pr_debug("%s: done\n", __func__); } void ceph_mdsc_sync(struct ceph_mds_client *mdsc) diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c index 73ccfd4172ac..5f22df095acd 100644 --- a/fs/ceph/snap.c +++ b/fs/ceph/snap.c @@ -281,8 +281,8 @@ static int adjust_snap_realm_parent(struct ceph_mds_client *mdsc, if (IS_ERR(parent)) return PTR_ERR(parent); } - pr_debug("adjust_snap_realm_parent %llx %p: %llx %p -> %llx %p\n", - realm->ino, realm, realm->parent_ino, realm->parent, + pr_debug("%s: %llx %p: %llx %p -> %llx %p\n", + __func__, realm->ino, realm, realm->parent_ino, realm->parent, parentino, parent); if (realm->parent) { list_del_init(&realm->child_item); @@ -338,8 +338,8 @@ static int build_snap_context(struct ceph_snap_realm *realm, realm->cached_context->seq == realm->seq && (!parent || realm->cached_context->seq >= parent->cached_context->seq)) { - pr_debug("build_snap_context %llx %p: %p seq %lld (%u snaps) (unchanged)\n", - realm->ino, realm, realm->cached_context, + pr_debug("%s: %llx %p: %p seq %lld (%u snaps) (unchanged)\n", + __func__, realm->ino, realm, realm->cached_context, 
realm->cached_context->seq, (unsigned int)realm->cached_context->num_snaps); return 0; @@ -378,8 +378,8 @@ static int build_snap_context(struct ceph_snap_realm *realm, sort(snapc->snaps, num, sizeof(u64), cmpu64_rev, NULL); snapc->num_snaps = num; - pr_debug("build_snap_context %llx %p: %p seq %lld (%u snaps)\n", - realm->ino, realm, snapc, snapc->seq, + pr_debug("%s: %llx %p: %p seq %lld (%u snaps)\n", + __func__, realm->ino, realm, snapc, snapc->seq, (unsigned int)snapc->num_snaps); ceph_put_snap_context(realm->cached_context); @@ -410,7 +410,7 @@ static void rebuild_snap_realms(struct ceph_snap_realm *realm, { struct ceph_snap_realm *child; - pr_debug("rebuild_snap_realms %llx %p\n", realm->ino, realm); + pr_debug("%s: %llx %p\n", __func__, realm->ino, realm); build_snap_context(realm, dirty_realms); list_for_each_entry(child, &realm->children, child_item) @@ -646,7 +646,7 @@ static void queue_realm_cap_snaps(struct ceph_snap_realm *realm) struct ceph_inode_info *ci; struct inode *lastinode = NULL; - pr_debug("queue_realm_cap_snaps %p %llx inodes\n", realm, realm->ino); + pr_debug("%s: %p %llx inodes\n", __func__, realm, realm->ino); spin_lock(&realm->inodes_with_caps_lock); list_for_each_entry(ci, &realm->inodes_with_caps, i_snap_realm_item) { @@ -664,7 +664,7 @@ static void queue_realm_cap_snaps(struct ceph_snap_realm *realm) spin_unlock(&realm->inodes_with_caps_lock); ceph_async_iput(lastinode); - pr_debug("queue_realm_cap_snaps %p %llx done\n", realm, realm->ino); + pr_debug("%s: %p %llx done\n", __func__, realm, realm->ino); } /* @@ -805,7 +805,7 @@ static void flush_snaps(struct ceph_mds_client *mdsc) struct inode *inode; struct ceph_mds_session *session = NULL; - pr_debug("flush_snaps\n"); + pr_debug("%s\n", __func__); spin_lock(&mdsc->snap_flush_lock); while (!list_empty(&mdsc->snap_flush_list)) { ci = list_first_entry(&mdsc->snap_flush_list, @@ -825,7 +825,7 @@ static void flush_snaps(struct ceph_mds_client *mdsc) mutex_unlock(&session->s_mutex); 
ceph_put_mds_session(session); } - pr_debug("flush_snaps done\n"); + pr_debug("%s done\n", __func__); } diff --git a/fs/ceph/super.c b/fs/ceph/super.c index 08dc6052d2b6..e7f079ebfbc4 100644 --- a/fs/ceph/super.c +++ b/fs/ceph/super.c @@ -726,7 +726,7 @@ static void destroy_fs_client(struct ceph_fs_client *fsc) ceph_destroy_client(fsc->client); kfree(fsc); - pr_debug("destroy_fs_client %p done\n", fsc); + pr_debug("%s: %p done\n", __func__, fsc); } /* @@ -841,7 +841,7 @@ static void ceph_umount_begin(struct super_block *sb) { struct ceph_fs_client *fsc = ceph_sb_to_client(sb); - pr_debug("ceph_umount_begin - starting forced umount\n"); + pr_debug("%s: starting forced umount\n", __func__); if (!fsc) return; fsc->mount_state = CEPH_MOUNT_SHUTDOWN; @@ -1001,7 +1001,7 @@ static int ceph_compare_super(struct super_block *sb, struct fs_context *fc) struct ceph_options *opt = new->client->options; struct ceph_fs_client *other = ceph_sb_to_client(sb); - pr_debug("ceph_compare_super %p\n", sb); + pr_debug("%s: %p\n", __func__, sb); if (compare_mount_options(fsopt, opt, other)) { pr_debug("monitor(s)/mount options don't match\n"); @@ -1052,7 +1052,7 @@ static int ceph_get_tree(struct fs_context *fc) ceph_compare_super; int err; - pr_debug("ceph_get_tree\n"); + pr_debug("%s\n", __func__); if (!fc->source) return invalfc(fc, "No source"); @@ -1115,7 +1115,7 @@ static int ceph_get_tree(struct fs_context *fc) out: destroy_fs_client(fsc); out_final: - pr_debug("ceph_get_tree fail %d\n", err); + pr_debug("%s: fail %d\n", __func__, err); return err; } diff --git a/fs/ceph/xattr.c b/fs/ceph/xattr.c index 503d1edeb28d..6cdf6866c745 100644 --- a/fs/ceph/xattr.c +++ b/fs/ceph/xattr.c @@ -68,7 +68,7 @@ static ssize_t ceph_vxattrcb_layout(struct ceph_inode_info *ci, char *val, pool_ns = ceph_try_get_string(ci->i_layout.pool_ns); - pr_debug("ceph_vxattrcb_layout %p\n", &ci->vfs_inode); + pr_debug("%s: %p\n", __func__, &ci->vfs_inode); down_read(&osdc->lock); pool_name = 
ceph_pg_pool_name_by_id(osdc->osdmap, pool); if (pool_name) { @@ -492,7 +492,7 @@ static int __set_xattr(struct ceph_inode_info *ci, xattr->should_free_name = update_xattr; ci->i_xattrs.count++; - pr_debug("__set_xattr count=%d\n", ci->i_xattrs.count); + pr_debug("%s: count=%d\n", __func__, ci->i_xattrs.count); } else { kfree(*newxattr); *newxattr = NULL; @@ -551,13 +551,13 @@ static struct ceph_inode_xattr *__get_xattr(struct ceph_inode_info *ci, else if (c > 0) p = &(*p)->rb_right; else { - pr_debug("__get_xattr %s: found %.*s\n", - name, xattr->val_len, xattr->val); + pr_debug("%s: %s: found %.*s\n", + __func__, name, xattr->val_len, xattr->val); return xattr; } } - pr_debug("__get_xattr %s: not found\n", name); + pr_debug("%s: %s: not found\n", __func__, name); return NULL; } @@ -602,7 +602,7 @@ static char *__copy_xattr_names(struct ceph_inode_info *ci, struct ceph_inode_xattr *xattr = NULL; p = rb_first(&ci->i_xattrs.index); - pr_debug("__copy_xattr_names count=%d\n", ci->i_xattrs.count); + pr_debug("%s: count=%d\n", __func__, ci->i_xattrs.count); while (p) { xattr = rb_entry(p, struct ceph_inode_xattr, node); @@ -627,14 +627,14 @@ void __ceph_destroy_xattrs(struct ceph_inode_info *ci) p = rb_first(&ci->i_xattrs.index); - pr_debug("__ceph_destroy_xattrs p=%p\n", p); + pr_debug("%s: p=%p\n", __func__, p); while (p) { xattr = rb_entry(p, struct ceph_inode_xattr, node); tmp = p; p = rb_next(tmp); - pr_debug("__ceph_destroy_xattrs next p=%p (%.*s)\n", - p, xattr->name_len, xattr->name); + pr_debug("%s: next p=%p (%.*s)\n", + __func__, p, xattr->name_len, xattr->name); rb_erase(tmp, &ci->i_xattrs.index); __free_xattr(xattr); @@ -662,7 +662,8 @@ static int __build_xattrs(struct inode *inode) int err = 0; int i; - pr_debug("__build_xattrs() len=%d\n", + pr_debug("%s: len=%d\n", + __func__, ci->i_xattrs.blob ? 
(int)ci->i_xattrs.blob->vec.iov_len : 0); if (ci->i_xattrs.index_version >= ci->i_xattrs.version) @@ -745,8 +746,8 @@ static int __get_required_blob_size(struct ceph_inode_info *ci, int name_size, int size = 4 + ci->i_xattrs.count*(4 + 4) + ci->i_xattrs.names_size + ci->i_xattrs.vals_size; - pr_debug("__get_required_blob_size c=%d names.size=%d vals.size=%d\n", - ci->i_xattrs.count, ci->i_xattrs.names_size, + pr_debug("%s: c=%d names.size=%d vals.size=%d\n", + __func__, ci->i_xattrs.count, ci->i_xattrs.names_size, ci->i_xattrs.vals_size); if (name_size) From patchwork Mon Aug 17 01:34:07 2020 X-Patchwork-Submitter: Joe Perches X-Patchwork-Id: 11716261 From: Joe Perches To: Ilya Dryomov , Jeff Layton Cc: "David S. Miller" , Jakub Kicinski , ceph-devel@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH V2 4/6] net: ceph: Remove embedded function names from pr_debug uses Date: Sun, 16 Aug 2020 18:34:07 -0700 Message-Id: <7b5986dfb99bc7ad7cc8e80f89dc0e8547457847.1597626802.git.joe@perches.com> X-Mailer: git-send-email 2.26.0 Sender: ceph-devel-owner@vger.kernel.org List-ID: X-Mailing-List: ceph-devel@vger.kernel.org Use "%s: " ..., __func__ so function renaming changes the logging too.
Signed-off-by: Joe Perches --- net/ceph/auth_none.c | 2 +- net/ceph/auth_x.c | 26 ++++++------ net/ceph/ceph_common.c | 4 +- net/ceph/debugfs.c | 2 +- net/ceph/messenger.c | 91 +++++++++++++++++++++--------------------- net/ceph/mon_client.c | 8 ++-- net/ceph/msgpool.c | 4 +- net/ceph/osd_client.c | 6 +-- net/ceph/osdmap.c | 30 +++++++------- 9 files changed, 87 insertions(+), 86 deletions(-) diff --git a/net/ceph/auth_none.c b/net/ceph/auth_none.c index f4be840c5961..d6e6e27e6899 100644 --- a/net/ceph/auth_none.c +++ b/net/ceph/auth_none.c @@ -130,7 +130,7 @@ int ceph_auth_none_init(struct ceph_auth_client *ac) { struct ceph_auth_none_info *xi; - pr_debug("ceph_auth_none_init %p\n", ac); + pr_debug("%s: %p\n", __func__, ac); xi = kzalloc(sizeof(*xi), GFP_NOFS); if (!xi) return -ENOMEM; diff --git a/net/ceph/auth_x.c b/net/ceph/auth_x.c index f83944ec10c3..312362515c28 100644 --- a/net/ceph/auth_x.c +++ b/net/ceph/auth_x.c @@ -25,8 +25,8 @@ static int ceph_x_is_authenticated(struct ceph_auth_client *ac) int need; ceph_x_validate_tickets(ac, &need); - pr_debug("ceph_x_is_authenticated want=%d need=%d have=%d\n", - ac->want_keys, need, xi->have_keys); + pr_debug("%s: want=%d need=%d have=%d\n", + __func__, ac->want_keys, need, xi->have_keys); return (ac->want_keys & xi->have_keys) == ac->want_keys; } @@ -36,8 +36,8 @@ static int ceph_x_should_authenticate(struct ceph_auth_client *ac) int need; ceph_x_validate_tickets(ac, &need); - pr_debug("ceph_x_should_authenticate want=%d need=%d have=%d\n", - ac->want_keys, need, xi->have_keys); + pr_debug("%s: want=%d need=%d have=%d\n", + __func__, ac->want_keys, need, xi->have_keys); return need != 0; } @@ -146,7 +146,7 @@ static void remove_ticket_handler(struct ceph_auth_client *ac, { struct ceph_x_info *xi = ac->private; - pr_debug("remove_ticket_handler %p %d\n", th, th->service); + pr_debug("%s: %p %d\n", __func__, th, th->service); rb_erase(&th->node, &xi->ticket_handlers); ceph_crypto_key_destroy(&th->session_key); if 
(th->ticket_blob) @@ -672,8 +672,8 @@ static int ceph_x_update_authorizer( au = (struct ceph_x_authorizer *)auth->authorizer; if (au->secret_id < th->secret_id) { - pr_debug("ceph_x_update_authorizer service %u secret %llu < %llu\n", - au->service, au->secret_id, th->secret_id); + pr_debug("%s: service %u secret %llu < %llu\n", + __func__, au->service, au->secret_id, th->secret_id); return ceph_x_build_authorizer(ac, th, au); } return 0; @@ -766,7 +766,7 @@ static void ceph_x_destroy(struct ceph_auth_client *ac) struct ceph_x_info *xi = ac->private; struct rb_node *p; - pr_debug("ceph_x_destroy %p\n", ac); + pr_debug("%s: %p\n", __func__, ac); ceph_crypto_key_destroy(&xi->secret); while ((p = rb_first(&xi->ticket_handlers)) != NULL) { @@ -903,11 +903,11 @@ static int ceph_x_check_message_signature(struct ceph_auth_handshake *auth, if (sig_check == msg->footer.sig) return 0; if (msg->footer.flags & CEPH_MSG_FOOTER_SIGNED) - pr_debug("ceph_x_check_message_signature %p has signature %llx expect %llx\n", - msg, msg->footer.sig, sig_check); + pr_debug("%s: %p has signature %llx expect %llx\n", + __func__, msg, msg->footer.sig, sig_check); else - pr_debug("ceph_x_check_message_signature %p sender did not set CEPH_MSG_FOOTER_SIGNED\n", - msg); + pr_debug("%s: %p sender did not set CEPH_MSG_FOOTER_SIGNED\n", + __func__, msg); return -EBADMSG; } @@ -934,7 +934,7 @@ int ceph_x_init(struct ceph_auth_client *ac) struct ceph_x_info *xi; int ret; - pr_debug("ceph_x_init %p\n", ac); + pr_debug("%s: %p\n", __func__, ac); ret = -ENOMEM; xi = kzalloc(sizeof(*xi), GFP_NOFS); if (!xi) diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c index 1750e14115e6..e8af3bf2fdb5 100644 --- a/net/ceph/ceph_common.c +++ b/net/ceph/ceph_common.c @@ -224,7 +224,7 @@ static int parse_fsid(const char *str, struct ceph_fsid *fsid) int err = -EINVAL; int d; - pr_debug("parse_fsid '%s'\n", str); + pr_debug("%s: '%s'\n", __func__, str); tmp[2] = 0; while (*str && i < 16) { if (ispunct(*str)) { @@ 
-244,7 +244,7 @@ static int parse_fsid(const char *str, struct ceph_fsid *fsid) if (i == 16) err = 0; - pr_debug("parse_fsid ret %d got fsid %pU\n", err, fsid); + pr_debug("%s: ret %d got fsid %pU\n", __func__, err, fsid); return err; } diff --git a/net/ceph/debugfs.c b/net/ceph/debugfs.c index f07cfb595a1c..4bb571f909ac 100644 --- a/net/ceph/debugfs.c +++ b/net/ceph/debugfs.c @@ -411,7 +411,7 @@ void ceph_debugfs_client_init(struct ceph_client *client) snprintf(name, sizeof(name), "%pU.client%lld", &client->fsid, client->monc.auth->global_id); - pr_debug("ceph_debugfs_client_init %p %s\n", client, name); + pr_debug("%s: %p %s\n", __func__, client, name); client->debugfs_dir = debugfs_create_dir(name, ceph_debugfs_dir); diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c index 652b7cf2812a..aa2a62654836 100644 --- a/net/ceph/messenger.c +++ b/net/ceph/messenger.c @@ -594,7 +594,7 @@ static int con_close_socket(struct ceph_connection *con) { int rc = 0; - pr_debug("con_close_socket on %p sock %p\n", con, con->sock); + pr_debug("%s: on %p sock %p\n", __func__, con, con->sock); if (con->sock) { rc = con->sock->ops->shutdown(con->sock, SHUT_RDWR); sock_release(con->sock); @@ -636,7 +636,7 @@ static void reset_connection(struct ceph_connection *con) { /* reset connection, out_queue, msg_ and connect_seq */ /* discard existing out_queue and msg_seq */ - pr_debug("reset_connection %p\n", con); + pr_debug("%s: %p\n", __func__, con); ceph_msg_remove_list(&con->out_queue); ceph_msg_remove_list(&con->out_sent); @@ -1234,7 +1234,7 @@ static void prepare_write_message_footer(struct ceph_connection *con) m->footer.flags |= CEPH_MSG_FOOTER_COMPLETE; - pr_debug("prepare_write_message_footer %p\n", con); + pr_debug("%s: %p\n", __func__, con); con_out_kvec_add(con, sizeof_footer(con), &m->footer); if (con->peer_features & CEPH_FEATURE_MSG_AUTH) { if (con->ops->sign_message) @@ -1290,8 +1290,8 @@ static void prepare_write_message(struct ceph_connection *con) 
con->ops->reencode_message(m); } - pr_debug("prepare_write_message %p seq %lld type %d len %d+%d+%zd\n", - m, con->out_seq, le16_to_cpu(m->hdr.type), + pr_debug("%s: %p seq %lld type %d len %d+%d+%zd\n", + __func__, m, con->out_seq, le16_to_cpu(m->hdr.type), le32_to_cpu(m->hdr.front_len), le32_to_cpu(m->hdr.middle_len), m->data_length); WARN_ON(m->front.iov_len != le32_to_cpu(m->hdr.front_len)); @@ -1469,8 +1469,8 @@ static int prepare_write_connect(struct ceph_connection *con) BUG(); } - pr_debug("prepare_write_connect %p cseq=%d gseq=%d proto=%d\n", - con, con->connect_seq, global_seq, proto); + pr_debug("%s: %p cseq=%d gseq=%d proto=%d\n", + __func__, con, con->connect_seq, global_seq, proto); con->out_connect.features = cpu_to_le64(from_msgr(con->msgr)->supported_features); @@ -1498,7 +1498,7 @@ static int write_partial_kvec(struct ceph_connection *con) { int ret; - pr_debug("write_partial_kvec %p %d left\n", con, con->out_kvec_bytes); + pr_debug("%s: %p %d left\n", __func__, con, con->out_kvec_bytes); while (con->out_kvec_bytes > 0) { ret = ceph_tcp_sendmsg(con->sock, con->out_kvec_cur, con->out_kvec_left, con->out_kvec_bytes, @@ -1525,8 +1525,8 @@ static int write_partial_kvec(struct ceph_connection *con) con->out_kvec_left = 0; ret = 1; out: - pr_debug("write_partial_kvec %p %d left in %d kvecs ret = %d\n", - con, con->out_kvec_bytes, con->out_kvec_left, ret); + pr_debug("%s: %p %d left in %d kvecs ret = %d\n", + __func__, con, con->out_kvec_bytes, con->out_kvec_left, ret); return ret; /* done! 
*/ } @@ -1714,7 +1714,7 @@ static int read_partial_banner(struct ceph_connection *con) int end; int ret; - pr_debug("read_partial_banner %p at %d\n", con, con->in_base_pos); + pr_debug("%s: %p at %d\n", __func__, con, con->in_base_pos); /* peer's banner */ size = strlen(CEPH_BANNER); @@ -1747,7 +1747,7 @@ static int read_partial_connect(struct ceph_connection *con) int end; int ret; - pr_debug("read_partial_connect %p at %d\n", con, con->in_base_pos); + pr_debug("%s: %p at %d\n", __func__, con, con->in_base_pos); size = sizeof (con->in_reply); end = size; @@ -1771,8 +1771,8 @@ static int read_partial_connect(struct ceph_connection *con) goto out; } - pr_debug("read_partial_connect %p tag %d, con_seq = %u, g_seq = %u\n", - con, (int)con->in_reply.tag, + pr_debug("%s: %p tag %d, con_seq = %u, g_seq = %u\n", + __func__, con, (int)con->in_reply.tag, le32_to_cpu(con->in_reply.connect_seq), le32_to_cpu(con->in_reply.global_seq)); out: @@ -2037,8 +2037,8 @@ static int process_banner(struct ceph_connection *con) sizeof(con->peer_addr_for_me.in_addr)); addr_set_port(&con->msgr->inst.addr, port); encode_my_addr(con->msgr); - pr_debug("process_banner learned my addr is %s\n", - ceph_pr_addr(&con->msgr->inst.addr)); + pr_debug("%s: learned my addr is %s\n", + __func__, ceph_pr_addr(&con->msgr->inst.addr)); } return 0; @@ -2051,7 +2051,7 @@ static int process_connect(struct ceph_connection *con) u64 server_feat = le64_to_cpu(con->in_reply.features); int ret; - pr_debug("process_connect on %p tag %d\n", con, (int)con->in_tag); + pr_debug("%s: on %p tag %d\n", __func__, con, (int)con->in_tag); if (con->auth) { int len = le32_to_cpu(con->in_reply.authorizer_len); @@ -2108,8 +2108,8 @@ static int process_connect(struct ceph_connection *con) case CEPH_MSGR_TAG_BADAUTHORIZER: con->auth_retry++; - pr_debug("process_connect %p got BADAUTHORIZER attempt %d\n", - con, con->auth_retry); + pr_debug("%s: %p got BADAUTHORIZER attempt %d\n", + __func__, con, con->auth_retry); if 
(con->auth_retry == 2) { con->error_msg = "connect authorization failure"; return -1; @@ -2129,8 +2129,8 @@ static int process_connect(struct ceph_connection *con) * that they must have reset their session, and may have * dropped messages. */ - pr_debug("process_connect got RESET peer seq %u\n", - le32_to_cpu(con->in_reply.connect_seq)); + pr_debug("%s: got RESET peer seq %u\n", + __func__, le32_to_cpu(con->in_reply.connect_seq)); pr_err("%s%lld %s connection reset\n", ENTITY_NAME(con->peer_name), ceph_pr_addr(&con->peer_addr)); @@ -2156,7 +2156,8 @@ static int process_connect(struct ceph_connection *con) * If we sent a smaller connect_seq than the peer has, try * again with a larger value. */ - pr_debug("process_connect got RETRY_SESSION my seq %u, peer %u\n", + pr_debug("%s: got RETRY_SESSION my seq %u, peer %u\n", + __func__, le32_to_cpu(con->out_connect.connect_seq), le32_to_cpu(con->in_reply.connect_seq)); con->connect_seq = le32_to_cpu(con->in_reply.connect_seq); @@ -2172,8 +2173,8 @@ static int process_connect(struct ceph_connection *con) * If we sent a smaller global_seq than the peer has, try * again with a larger value. 
*/ - pr_debug("process_connect got RETRY_GLOBAL my %u peer_gseq %u\n", - con->peer_global_seq, + pr_debug("%s: got RETRY_GLOBAL my %u peer_gseq %u\n", + __func__, con->peer_global_seq, le32_to_cpu(con->in_reply.global_seq)); get_global_seq(con->msgr, le32_to_cpu(con->in_reply.global_seq)); @@ -2203,8 +2204,8 @@ static int process_connect(struct ceph_connection *con) con->peer_global_seq = le32_to_cpu(con->in_reply.global_seq); con->connect_seq++; con->peer_features = server_feat; - pr_debug("process_connect got READY gseq %d cseq %d (%d)\n", - con->peer_global_seq, + pr_debug("%s: got READY gseq %d cseq %d (%d)\n", + __func__, con->peer_global_seq, le32_to_cpu(con->in_reply.connect_seq), con->connect_seq); WARN_ON(con->connect_seq != @@ -2366,7 +2367,7 @@ static int read_partial_message(struct ceph_connection *con) u64 seq; u32 crc; - pr_debug("read_partial_message con %p msg %p\n", con, m); + pr_debug("%s: con %p msg %p\n", __func__, con, m); /* header */ size = sizeof (con->in_hdr); @@ -2478,8 +2479,8 @@ static int read_partial_message(struct ceph_connection *con) m->footer.sig = 0; } - pr_debug("read_partial_message got msg %p %d (%u) + %d (%u) + %d (%u)\n", - m, front_len, m->footer.front_crc, middle_len, + pr_debug("%s: got msg %p %d (%u) + %d (%u) + %d (%u)\n", + __func__, m, front_len, m->footer.front_crc, middle_len, m->footer.middle_crc, data_len, m->footer.data_crc); /* crc ok? 
*/ @@ -2562,7 +2563,7 @@ static int try_write(struct ceph_connection *con) { int ret = 1; - pr_debug("try_write start %p state %lu\n", con, con->state); + pr_debug("%s: start %p state %lu\n", __func__, con, con->state); if (con->state != CON_STATE_PREOPEN && con->state != CON_STATE_CONNECTING && con->state != CON_STATE_NEGOTIATING && @@ -2580,8 +2581,8 @@ static int try_write(struct ceph_connection *con) BUG_ON(con->in_msg); con->in_tag = CEPH_MSGR_TAG_READY; - pr_debug("try_write initiating connect on %p new state %lu\n", - con, con->state); + pr_debug("%s: initiating connect on %p new state %lu\n", + __func__, con, con->state); ret = ceph_tcp_connect(con); if (ret < 0) { con->error_msg = "connect error"; @@ -2590,7 +2591,7 @@ static int try_write(struct ceph_connection *con) } more: - pr_debug("try_write out_kvec_bytes %d\n", con->out_kvec_bytes); + pr_debug("%s: out_kvec_bytes %d\n", __func__, con->out_kvec_bytes); BUG_ON(!con->sock); /* kvec data queued? */ @@ -2619,8 +2620,8 @@ static int try_write(struct ceph_connection *con) if (ret == 0) goto out; if (ret < 0) { - pr_debug("try_write write_partial_message_data err %d\n", - ret); + pr_debug("%s: write_partial_message_data err %d\n", + __func__, ret); goto out; } } @@ -2644,10 +2645,10 @@ static int try_write(struct ceph_connection *con) /* Nothing to do! 
*/ con_flag_clear(con, CON_FLAG_WRITE_PENDING); - pr_debug("try_write nothing else to write\n"); + pr_debug("%s: nothing else to write\n", __func__); ret = 0; out: - pr_debug("try_write done on %p ret %d\n", con, ret); + pr_debug("%s: done on %p ret %d\n", __func__, con, ret); return ret; } @@ -2659,7 +2660,7 @@ static int try_read(struct ceph_connection *con) int ret = -1; more: - pr_debug("try_read start on %p state %lu\n", con, con->state); + pr_debug("%s: start on %p state %lu\n", __func__, con, con->state); if (con->state != CON_STATE_CONNECTING && con->state != CON_STATE_NEGOTIATING && con->state != CON_STATE_OPEN) @@ -2667,11 +2668,11 @@ static int try_read(struct ceph_connection *con) BUG_ON(!con->sock); - pr_debug("try_read tag %d in_base_pos %d\n", - (int)con->in_tag, con->in_base_pos); + pr_debug("%s: tag %d in_base_pos %d\n", + __func__, (int)con->in_tag, con->in_base_pos); if (con->state == CON_STATE_CONNECTING) { - pr_debug("try_read connecting\n"); + pr_debug("%s: connecting\n", __func__); ret = read_partial_banner(con); if (ret <= 0) goto out; @@ -2696,7 +2697,7 @@ static int try_read(struct ceph_connection *con) } if (con->state == CON_STATE_NEGOTIATING) { - pr_debug("try_read negotiating\n"); + pr_debug("%s: negotiating\n", __func__); ret = read_partial_connect(con); if (ret <= 0) goto out; @@ -2727,7 +2728,7 @@ static int try_read(struct ceph_connection *con) ret = ceph_tcp_recvmsg(con->sock, &con->in_tag, 1); if (ret <= 0) goto out; - pr_debug("try_read got tag %d\n", (int)con->in_tag); + pr_debug("%s: got tag %d\n", __func__, (int)con->in_tag); switch (con->in_tag) { case CEPH_MSGR_TAG_MSG: prepare_read_message(con); @@ -2789,7 +2790,7 @@ static int try_read(struct ceph_connection *con) } out: - pr_debug("try_read done on %p ret %d\n", con, ret); + pr_debug("%s: done on %p ret %d\n", __func__, con, ret); return ret; bad_tag: @@ -3073,7 +3074,7 @@ static void clear_standby(struct ceph_connection *con) { /* come back from STANDBY? 
*/ if (con->state == CON_STATE_STANDBY) { - pr_debug("clear_standby %p and ++connect_seq\n", con); + pr_debug("%s: %p and ++connect_seq\n", __func__, con); con->state = CON_STATE_PREOPEN; con->connect_seq++; WARN_ON(con_flag_test(con, CON_FLAG_WRITE_PENDING)); diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c index 96aecc142f1c..b0e8563a2ca5 100644 --- a/net/ceph/mon_client.c +++ b/net/ceph/mon_client.c @@ -241,7 +241,7 @@ static void __schedule_delayed(struct ceph_mon_client *monc) else delay = CEPH_MONC_PING_INTERVAL; - pr_debug("__schedule_delayed after %lu\n", delay); + pr_debug("%s: after %lu\n", __func__, delay); mod_delayed_work(system_wq, &monc->delayed_work, round_jiffies_relative(delay)); } @@ -645,11 +645,11 @@ static struct ceph_msg *get_generic_reply(struct ceph_connection *con, mutex_lock(&monc->mutex); req = lookup_generic_request(&monc->generic_request_tree, tid); if (!req) { - pr_debug("get_generic_reply %lld dne\n", tid); + pr_debug("%s: %lld dne\n", __func__, tid); *skip = 1; m = NULL; } else { - pr_debug("get_generic_reply %lld got %p\n", tid, req->reply); + pr_debug("%s: %lld got %p\n", __func__, tid, req->reply); *skip = 0; m = ceph_msg_get(req->reply); /* @@ -978,7 +978,7 @@ static void delayed_work(struct work_struct *work) struct ceph_mon_client *monc = container_of(work, struct ceph_mon_client, delayed_work.work); - pr_debug("monc delayed_work\n"); + pr_debug("%s: monc\n", __func__); mutex_lock(&monc->mutex); if (monc->hunting) { pr_debug("%s continuing hunt\n", __func__); diff --git a/net/ceph/msgpool.c b/net/ceph/msgpool.c index 4341c941d269..a198b869a14f 100644 --- a/net/ceph/msgpool.c +++ b/net/ceph/msgpool.c @@ -17,9 +17,9 @@ static void *msgpool_alloc(gfp_t gfp_mask, void *arg) msg = ceph_msg_new2(pool->type, pool->front_len, pool->max_data_items, gfp_mask, true); if (!msg) { - pr_debug("msgpool_alloc %s failed\n", pool->name); + pr_debug("%s: %s failed\n", __func__, pool->name); } else { - pr_debug("msgpool_alloc %s %p\n", 
pool->name, msg); + pr_debug("%s: %s %p\n", __func__, pool->name, msg); msg->pool = pool; } return msg; diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c index b7bb5f2f5d61..20b5d0a853e1 100644 --- a/net/ceph/osd_client.c +++ b/net/ceph/osd_client.c @@ -116,8 +116,8 @@ static int calc_layout(struct ceph_file_layout *layout, u64 off, u64 *plen, orig_len - *plen, off, *plen); } - pr_debug("calc_layout objnum=%llx %llu~%llu\n", - *objnum, *objoff, *objlen); + pr_debug("%s: objnum=%llx %llu~%llu\n", + __func__, *objnum, *objoff, *objlen); return 0; } @@ -5515,7 +5515,7 @@ static struct ceph_msg *get_reply(struct ceph_connection *con, } m = ceph_msg_get(req->r_reply); - pr_debug("get_reply tid %lld %p\n", tid, m); + pr_debug("%s: tid %lld %p\n", __func__, tid, m); out_unlock_session: mutex_unlock(&osd->lock); diff --git a/net/ceph/osdmap.c b/net/ceph/osdmap.c index e47e16aeb008..dcdca28fb979 100644 --- a/net/ceph/osdmap.c +++ b/net/ceph/osdmap.c @@ -67,7 +67,7 @@ static int crush_decode_list_bucket(void **p, void *end, struct crush_bucket_list *b) { int j; - pr_debug("crush_decode_list_bucket %p to %p\n", *p, end); + pr_debug("%s: %p to %p\n", __func__, *p, end); b->item_weights = kcalloc(b->h.size, sizeof(u32), GFP_NOFS); if (b->item_weights == NULL) return -ENOMEM; @@ -88,7 +88,7 @@ static int crush_decode_tree_bucket(void **p, void *end, struct crush_bucket_tree *b) { int j; - pr_debug("crush_decode_tree_bucket %p to %p\n", *p, end); + pr_debug("%s: %p to %p\n", __func__, *p, end); ceph_decode_8_safe(p, end, b->num_nodes, bad); b->node_weights = kcalloc(b->num_nodes, sizeof(u32), GFP_NOFS); if (b->node_weights == NULL) @@ -105,7 +105,7 @@ static int crush_decode_straw_bucket(void **p, void *end, struct crush_bucket_straw *b) { int j; - pr_debug("crush_decode_straw_bucket %p to %p\n", *p, end); + pr_debug("%s: %p to %p\n", __func__, *p, end); b->item_weights = kcalloc(b->h.size, sizeof(u32), GFP_NOFS); if (b->item_weights == NULL) return -ENOMEM; @@ -126,7 
+126,7 @@ static int crush_decode_straw2_bucket(void **p, void *end, struct crush_bucket_straw2 *b) { int j; - pr_debug("crush_decode_straw2_bucket %p to %p\n", *p, end); + pr_debug("%s: %p to %p\n", __func__, *p, end); b->item_weights = kcalloc(b->h.size, sizeof(u32), GFP_NOFS); if (b->item_weights == NULL) return -ENOMEM; @@ -421,7 +421,7 @@ static struct crush_map *crush_decode(void *pbyval, void *end) void *start = pbyval; u32 magic; - pr_debug("crush_decode %p to %p len %d\n", *p, end, (int)(end - *p)); + pr_debug("%s: %p to %p len %d\n", __func__, *p, end, (int)(end - *p)); c = kzalloc(sizeof(*c), GFP_NOFS); if (c == NULL) @@ -466,8 +466,8 @@ static struct crush_map *crush_decode(void *pbyval, void *end) c->buckets[i] = NULL; continue; } - pr_debug("crush_decode bucket %d off %x %p to %p\n", - i, (int)(*p - start), *p, end); + pr_debug("%s: bucket %d off %x %p to %p\n", + __func__, i, (int)(*p - start), *p, end); switch (alg) { case CRUSH_BUCKET_UNIFORM: @@ -501,8 +501,8 @@ static struct crush_map *crush_decode(void *pbyval, void *end) b->weight = ceph_decode_32(p); b->size = ceph_decode_32(p); - pr_debug("crush_decode bucket size %d off %x %p to %p\n", - b->size, (int)(*p - start), *p, end); + pr_debug("%s: bucket size %d off %x %p to %p\n", + __func__, b->size, (int)(*p - start), *p, end); b->items = kcalloc(b->size, sizeof(__s32), GFP_NOFS); if (b->items == NULL) @@ -554,14 +554,14 @@ static struct crush_map *crush_decode(void *pbyval, void *end) ceph_decode_32_safe(p, end, yes, bad); if (!yes) { - pr_debug("crush_decode NO rule %d off %x %p to %p\n", - i, (int)(*p - start), *p, end); + pr_debug("%s: NO rule %d off %x %p to %p\n", + __func__, i, (int)(*p - start), *p, end); c->rules[i] = NULL; continue; } - pr_debug("crush_decode rule %d off %x %p to %p\n", - i, (int)(*p - start), *p, end); + pr_debug("%s: rule %d off %x %p to %p\n", + __func__, i, (int)(*p - start), *p, end); /* len */ ceph_decode_32_safe(p, end, yes, bad); @@ -643,13 +643,13 @@ static 
struct crush_map *crush_decode(void *pbyval, void *end) done: crush_finalize(c); - pr_debug("crush_decode success\n"); + pr_debug("%s: success\n", __func__); return c; badmem: err = -ENOMEM; fail: - pr_debug("crush_decode fail %d\n", err); + pr_debug("%s: fail %d\n", __func__, err); crush_destroy(c); return ERR_PTR(err);

From patchwork Mon Aug 17 01:34:08 2020
From: Joe Perches
To: Ilya Dryomov, Dongsheng Yang, Jens Axboe
Cc: ceph-devel@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V2 5/6] rbd: Use generic debugging facility
Date: Sun, 16 Aug 2020 18:34:08 -0700
Message-Id: <6c04898b40c6ac274bc1bf1f5791df2f586c5812.1597626802.git.joe@perches.com>

The dout macro duplicates the generic features of pr_debug with __FILE__ and __func__ output capability when using dynamic_debug. Convert dout to pr_debug and remove the "pretty" print feature of dout.
Miscellanea: o Realign arguments Signed-off-by: Joe Perches --- drivers/block/rbd.c | 231 +++++++++++++++++++++++--------------------- 1 file changed, 120 insertions(+), 111 deletions(-) diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c index d9c0e7d154f9..19696962c4f4 100644 --- a/drivers/block/rbd.c +++ b/drivers/block/rbd.c @@ -758,7 +758,7 @@ static struct rbd_client *rbd_client_create(struct ceph_options *ceph_opts) struct rbd_client *rbdc; int ret = -ENOMEM; - dout("%s:\n", __func__); + pr_debug("%s:\n", __func__); rbdc = kmalloc(sizeof(struct rbd_client), GFP_KERNEL); if (!rbdc) goto out_opt; @@ -779,7 +779,7 @@ static struct rbd_client *rbd_client_create(struct ceph_options *ceph_opts) list_add_tail(&rbdc->node, &rbd_client_list); spin_unlock(&rbd_client_list_lock); - dout("%s: rbdc %p\n", __func__, rbdc); + pr_debug("%s: rbdc %p\n", __func__, rbdc); return rbdc; out_client: @@ -789,7 +789,7 @@ static struct rbd_client *rbd_client_create(struct ceph_options *ceph_opts) out_opt: if (ceph_opts) ceph_destroy_options(ceph_opts); - dout("%s: error %d\n", __func__, ret); + pr_debug("%s: error %d\n", __func__, ret); return ERR_PTR(ret); } @@ -926,7 +926,7 @@ static void rbd_client_release(struct kref *kref) { struct rbd_client *rbdc = container_of(kref, struct rbd_client, kref); - dout("%s: rbdc %p\n", __func__, rbdc); + pr_debug("%s: rbdc %p\n", __func__, rbdc); spin_lock(&rbd_client_list_lock); list_del(&rbdc->node); spin_unlock(&rbd_client_list_lock); @@ -1310,7 +1310,7 @@ static void zero_bvecs(struct ceph_bvec_iter *bvec_pos, u32 off, u32 bytes) static void rbd_obj_zero_range(struct rbd_obj_request *obj_req, u32 off, u32 bytes) { - dout("%s %p data buf %u~%u\n", __func__, obj_req, off, bytes); + pr_debug("%s %p data buf %u~%u\n", __func__, obj_req, off, bytes); switch (obj_req->img_request->data_type) { case OBJ_REQUEST_BIO: @@ -1329,8 +1329,8 @@ static void rbd_obj_request_destroy(struct kref *kref); static void rbd_obj_request_put(struct 
rbd_obj_request *obj_request) { rbd_assert(obj_request != NULL); - dout("%s: obj %p (was %d)\n", __func__, obj_request, - kref_read(&obj_request->kref)); + pr_debug("%s: obj %p (was %d)\n", + __func__, obj_request, kref_read(&obj_request->kref)); kref_put(&obj_request->kref, rbd_obj_request_destroy); } @@ -1341,13 +1341,13 @@ static inline void rbd_img_obj_request_add(struct rbd_img_request *img_request, /* Image request now owns object's original reference */ obj_request->img_request = img_request; - dout("%s: img %p obj %p\n", __func__, img_request, obj_request); + pr_debug("%s: img %p obj %p\n", __func__, img_request, obj_request); } static inline void rbd_img_obj_request_del(struct rbd_img_request *img_request, struct rbd_obj_request *obj_request) { - dout("%s: img %p obj %p\n", __func__, img_request, obj_request); + pr_debug("%s: img %p obj %p\n", __func__, img_request, obj_request); list_del(&obj_request->ex.oe_item); rbd_assert(obj_request->img_request == img_request); rbd_obj_request_put(obj_request); @@ -1357,9 +1357,9 @@ static void rbd_osd_submit(struct ceph_osd_request *osd_req) { struct rbd_obj_request *obj_req = osd_req->r_priv; - dout("%s osd_req %p for obj_req %p objno %llu %llu~%llu\n", - __func__, osd_req, obj_req, obj_req->ex.oe_objno, - obj_req->ex.oe_off, obj_req->ex.oe_len); + pr_debug("%s osd_req %p for obj_req %p objno %llu %llu~%llu\n", + __func__, osd_req, obj_req, obj_req->ex.oe_objno, + obj_req->ex.oe_off, obj_req->ex.oe_len); ceph_osdc_start_request(osd_req->r_osdc, osd_req, false); } @@ -1432,8 +1432,8 @@ static void rbd_osd_req_callback(struct ceph_osd_request *osd_req) struct rbd_obj_request *obj_req = osd_req->r_priv; int result; - dout("%s osd_req %p result %d for obj_req %p\n", __func__, osd_req, - osd_req->r_result, obj_req); + pr_debug("%s osd_req %p result %d for obj_req %p\n", + __func__, osd_req, osd_req->r_result, obj_req); /* * Writes aren't allowed to return a data payload. 
In some @@ -1522,7 +1522,7 @@ static struct rbd_obj_request *rbd_obj_request_create(void) mutex_init(&obj_request->state_mutex); kref_init(&obj_request->kref); - dout("%s %p\n", __func__, obj_request); + pr_debug("%s %p\n", __func__, obj_request); return obj_request; } @@ -1534,7 +1534,7 @@ static void rbd_obj_request_destroy(struct kref *kref) obj_request = container_of(kref, struct rbd_obj_request, kref); - dout("%s: obj %p\n", __func__, obj_request); + pr_debug("%s: obj %p\n", __func__, obj_request); while (!list_empty(&obj_request->osd_reqs)) { osd_req = list_first_entry(&obj_request->osd_reqs, @@ -1661,7 +1661,7 @@ static void rbd_img_request_destroy(struct rbd_img_request *img_request) struct rbd_obj_request *obj_request; struct rbd_obj_request *next_obj_request; - dout("%s: img %p\n", __func__, img_request); + pr_debug("%s: img %p\n", __func__, img_request); WARN_ON(!list_empty(&img_request->lock_item)); for_each_obj_request_safe(img_request, obj_request, next_obj_request) @@ -2037,8 +2037,8 @@ static void rbd_object_map_callback(struct ceph_osd_request *osd_req) struct rbd_obj_request *obj_req = osd_req->r_priv; int result; - dout("%s osd_req %p result %d for obj_req %p\n", __func__, osd_req, - osd_req->r_result, obj_req); + pr_debug("%s osd_req %p result %d for obj_req %p\n", + __func__, osd_req, osd_req->r_result, obj_req); result = rbd_object_map_update_finish(obj_req, osd_req); rbd_obj_handle_request(obj_req, result); @@ -2347,9 +2347,10 @@ static int rbd_obj_init_discard(struct rbd_obj_request *obj_req) if (off >= next_off) return 1; - dout("%s %p %llu~%llu -> %llu~%llu\n", __func__, - obj_req, obj_req->ex.oe_off, obj_req->ex.oe_len, - off, next_off - off); + pr_debug("%s %p %llu~%llu -> %llu~%llu\n", + __func__, obj_req, + obj_req->ex.oe_off, obj_req->ex.oe_len, + off, next_off - off); obj_req->ex.oe_off = off; obj_req->ex.oe_len = next_off - off; } @@ -2661,7 +2662,7 @@ static void set_bio_pos(struct ceph_object_extent *ex, u32 bytes, void *arg) 
container_of(ex, struct rbd_obj_request, ex); struct ceph_bio_iter *it = arg; - dout("%s objno %llu bytes %u\n", __func__, ex->oe_objno, bytes); + pr_debug("%s objno %llu bytes %u\n", __func__, ex->oe_objno, bytes); obj_req->bio_pos = *it; ceph_bio_iter_advance(it, bytes); } @@ -2672,7 +2673,7 @@ static void count_bio_bvecs(struct ceph_object_extent *ex, u32 bytes, void *arg) container_of(ex, struct rbd_obj_request, ex); struct ceph_bio_iter *it = arg; - dout("%s objno %llu bytes %u\n", __func__, ex->oe_objno, bytes); + pr_debug("%s objno %llu bytes %u\n", __func__, ex->oe_objno, bytes); ceph_bio_iter_advance_step(it, bytes, ({ obj_req->bvec_count++; })); @@ -2685,7 +2686,7 @@ static void copy_bio_bvecs(struct ceph_object_extent *ex, u32 bytes, void *arg) container_of(ex, struct rbd_obj_request, ex); struct ceph_bio_iter *it = arg; - dout("%s objno %llu bytes %u\n", __func__, ex->oe_objno, bytes); + pr_debug("%s objno %llu bytes %u\n", __func__, ex->oe_objno, bytes); ceph_bio_iter_advance_step(it, bytes, ({ obj_req->bvec_pos.bvecs[obj_req->bvec_idx++] = bv; obj_req->bvec_pos.iter.bi_size += bv.bv_len; @@ -2808,8 +2809,8 @@ static bool rbd_obj_may_exist(struct rbd_obj_request *obj_req) return true; } - dout("%s %p objno %llu assuming dne\n", __func__, obj_req, - obj_req->ex.oe_objno); + pr_debug("%s %p objno %llu assuming dne\n", + __func__, obj_req, obj_req->ex.oe_objno); return false; } @@ -2854,8 +2855,8 @@ static int rbd_obj_read_from_parent(struct rbd_obj_request *obj_req) rbd_img_capture_header(child_img_req); up_read(&parent->header_rwsem); - dout("%s child_img_req %p for obj_req %p\n", __func__, child_img_req, - obj_req); + pr_debug("%s child_img_req %p for obj_req %p\n", + __func__, child_img_req, obj_req); if (!rbd_img_is_write(img_req)) { switch (img_req->data_type) { @@ -2977,7 +2978,7 @@ static bool rbd_obj_write_is_noop(struct rbd_obj_request *obj_req) if (!(obj_req->flags & RBD_OBJ_FLAG_MAY_EXIST) && (obj_req->flags & 
RBD_OBJ_FLAG_NOOP_FOR_NONEXISTENT)) { - dout("%s %p noop for nonexistent\n", __func__, obj_req); + pr_debug("%s %p noop for nonexistent\n", __func__, obj_req); return true; } @@ -3063,7 +3064,7 @@ static int rbd_obj_copyup_empty_snapc(struct rbd_obj_request *obj_req, struct ceph_osd_request *osd_req; int ret; - dout("%s obj_req %p bytes %u\n", __func__, obj_req, bytes); + pr_debug("%s obj_req %p bytes %u\n", __func__, obj_req, bytes); rbd_assert(bytes > 0 && bytes != MODS_ONLY); osd_req = __rbd_obj_add_osd_request(obj_req, &rbd_empty_snapc, 1); @@ -3092,7 +3093,7 @@ static int rbd_obj_copyup_current_snapc(struct rbd_obj_request *obj_req, int which = 0; int ret; - dout("%s obj_req %p bytes %u\n", __func__, obj_req, bytes); + pr_debug("%s obj_req %p bytes %u\n", __func__, obj_req, bytes); if (bytes != MODS_ONLY) num_ops++; /* copyup */ @@ -3278,7 +3279,7 @@ static bool rbd_obj_advance_copyup(struct rbd_obj_request *obj_req, int *result) if (is_zero_bvecs(obj_req->copyup_bvecs, rbd_obj_img_extents_bytes(obj_req))) { - dout("%s %p detected zeros\n", __func__, obj_req); + pr_debug("%s %p detected zeros\n", __func__, obj_req); obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ZEROS; } @@ -3530,7 +3531,7 @@ static int rbd_img_exclusive_lock(struct rbd_img_request *img_req) * Note the use of mod_delayed_work() in rbd_acquire_lock() * and cancel_delayed_work() in wake_lock_waiters(). 
*/ - dout("%s rbd_dev %p queueing lock_dwork\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p queueing lock_dwork\n", __func__, rbd_dev); queue_delayed_work(rbd_dev->task_wq, &rbd_dev->lock_dwork, 0); return 0; } @@ -3679,9 +3680,10 @@ static struct rbd_client_id rbd_get_cid(struct rbd_device *rbd_dev) static void rbd_set_owner_cid(struct rbd_device *rbd_dev, const struct rbd_client_id *cid) { - dout("%s rbd_dev %p %llu-%llu -> %llu-%llu\n", __func__, rbd_dev, - rbd_dev->owner_cid.gid, rbd_dev->owner_cid.handle, - cid->gid, cid->handle); + pr_debug("%s rbd_dev %p %llu-%llu -> %llu-%llu\n", + __func__, rbd_dev, + rbd_dev->owner_cid.gid, rbd_dev->owner_cid.handle, + cid->gid, cid->handle); rbd_dev->owner_cid = *cid; /* struct */ } @@ -3759,7 +3761,7 @@ static int __rbd_notify_op_lock(struct rbd_device *rbd_dev, int buf_size = sizeof(buf); void *p = buf; - dout("%s rbd_dev %p notify_op %d\n", __func__, rbd_dev, notify_op); + pr_debug("%s rbd_dev %p notify_op %d\n", __func__, rbd_dev, notify_op); /* encode *LockPayload NotifyMessage (op + ClientId) */ ceph_start_encoding(&p, 2, 1, buf_size - CEPH_ENCODING_START_BLK_LEN); @@ -3801,7 +3803,7 @@ static int rbd_request_lock(struct rbd_device *rbd_dev) bool lock_owner_responded = false; int ret; - dout("%s rbd_dev %p\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p\n", __func__, rbd_dev); ret = __rbd_notify_op_lock(rbd_dev, RBD_NOTIFY_OP_REQUEST_LOCK, &reply_pages, &reply_len); @@ -3870,7 +3872,7 @@ static void wake_lock_waiters(struct rbd_device *rbd_dev, int result) { struct rbd_img_request *img_req; - dout("%s rbd_dev %p result %d\n", __func__, rbd_dev, result); + pr_debug("%s rbd_dev %p result %d\n", __func__, rbd_dev, result); lockdep_assert_held_write(&rbd_dev->lock_rwsem); cancel_delayed_work(&rbd_dev->lock_dwork); @@ -3900,7 +3902,7 @@ static int get_lock_owner_info(struct rbd_device *rbd_dev, char *lock_tag; int ret; - dout("%s rbd_dev %p\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p\n", __func__, rbd_dev); 
ret = ceph_cls_lock_info(osdc, &rbd_dev->header_oid, &rbd_dev->header_oloc, RBD_LOCK_NAME, @@ -3909,7 +3911,8 @@ static int get_lock_owner_info(struct rbd_device *rbd_dev, return ret; if (*num_lockers == 0) { - dout("%s rbd_dev %p no lockers detected\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p no lockers detected\n", + __func__, rbd_dev); goto out; } @@ -3965,15 +3968,15 @@ static int find_watcher(struct rbd_device *rbd_dev, .handle = cookie, }; - dout("%s rbd_dev %p found cid %llu-%llu\n", __func__, - rbd_dev, cid.gid, cid.handle); + pr_debug("%s rbd_dev %p found cid %llu-%llu\n", + __func__, rbd_dev, cid.gid, cid.handle); rbd_set_owner_cid(rbd_dev, &cid); ret = 1; goto out; } } - dout("%s rbd_dev %p no watchers\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p no watchers\n", __func__, rbd_dev); ret = 0; out: kfree(watchers); @@ -4058,8 +4061,8 @@ static int rbd_try_acquire_lock(struct rbd_device *rbd_dev) int ret; down_read(&rbd_dev->lock_rwsem); - dout("%s rbd_dev %p read lock_state %d\n", __func__, rbd_dev, - rbd_dev->lock_state); + pr_debug("%s rbd_dev %p read lock_state %d\n", + __func__, rbd_dev, rbd_dev->lock_state); if (__rbd_is_lock_owner(rbd_dev)) { up_read(&rbd_dev->lock_rwsem); return 0; @@ -4067,8 +4070,8 @@ static int rbd_try_acquire_lock(struct rbd_device *rbd_dev) up_read(&rbd_dev->lock_rwsem); down_write(&rbd_dev->lock_rwsem); - dout("%s rbd_dev %p write lock_state %d\n", __func__, rbd_dev, - rbd_dev->lock_state); + pr_debug("%s rbd_dev %p write lock_state %d\n", + __func__, rbd_dev, rbd_dev->lock_state); if (__rbd_is_lock_owner(rbd_dev)) { up_write(&rbd_dev->lock_rwsem); return 0; @@ -4113,11 +4116,12 @@ static void rbd_acquire_lock(struct work_struct *work) struct rbd_device, lock_dwork); int ret; - dout("%s rbd_dev %p\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p\n", __func__, rbd_dev); again: ret = rbd_try_acquire_lock(rbd_dev); if (ret <= 0) { - dout("%s rbd_dev %p ret %d - done\n", __func__, rbd_dev, ret); + pr_debug("%s rbd_dev 
%p ret %d - done\n", + __func__, rbd_dev, ret); return; } @@ -4138,8 +4142,8 @@ static void rbd_acquire_lock(struct work_struct *work) * lock owner acked, but resend if we don't see them * release the lock */ - dout("%s rbd_dev %p requeuing lock_dwork\n", __func__, - rbd_dev); + pr_debug("%s rbd_dev %p requeuing lock_dwork\n", + __func__, rbd_dev); mod_delayed_work(rbd_dev->task_wq, &rbd_dev->lock_dwork, msecs_to_jiffies(2 * RBD_NOTIFY_TIMEOUT * MSEC_PER_SEC)); } @@ -4149,7 +4153,7 @@ static bool rbd_quiesce_lock(struct rbd_device *rbd_dev) { bool need_wait; - dout("%s rbd_dev %p\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p\n", __func__, rbd_dev); lockdep_assert_held_write(&rbd_dev->lock_rwsem); if (rbd_dev->lock_state != RBD_LOCK_STATE_LOCKED) @@ -4222,7 +4226,7 @@ static void maybe_kick_acquire(struct rbd_device *rbd_dev) { bool have_requests; - dout("%s rbd_dev %p\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p\n", __func__, rbd_dev); if (__rbd_is_lock_owner(rbd_dev)) return; @@ -4230,7 +4234,8 @@ static void maybe_kick_acquire(struct rbd_device *rbd_dev) have_requests = !list_empty(&rbd_dev->acquiring_list); spin_unlock(&rbd_dev->lock_lists_lock); if (have_requests || delayed_work_pending(&rbd_dev->lock_dwork)) { - dout("%s rbd_dev %p kicking lock_dwork\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p kicking lock_dwork\n", + __func__, rbd_dev); mod_delayed_work(rbd_dev->task_wq, &rbd_dev->lock_dwork, 0); } } @@ -4245,8 +4250,8 @@ static void rbd_handle_acquired_lock(struct rbd_device *rbd_dev, u8 struct_v, cid.handle = ceph_decode_64(p); } - dout("%s rbd_dev %p cid %llu-%llu\n", __func__, rbd_dev, cid.gid, - cid.handle); + pr_debug("%s rbd_dev %p cid %llu-%llu\n", + __func__, rbd_dev, cid.gid, cid.handle); if (!rbd_cid_equal(&cid, &rbd_empty_cid)) { down_write(&rbd_dev->lock_rwsem); if (rbd_cid_equal(&cid, &rbd_dev->owner_cid)) { @@ -4278,14 +4283,14 @@ static void rbd_handle_released_lock(struct rbd_device *rbd_dev, u8 struct_v, cid.handle = 
ceph_decode_64(p); } - dout("%s rbd_dev %p cid %llu-%llu\n", __func__, rbd_dev, cid.gid, - cid.handle); + pr_debug("%s rbd_dev %p cid %llu-%llu\n", + __func__, rbd_dev, cid.gid, cid.handle); if (!rbd_cid_equal(&cid, &rbd_empty_cid)) { down_write(&rbd_dev->lock_rwsem); if (!rbd_cid_equal(&cid, &rbd_dev->owner_cid)) { - dout("%s rbd_dev %p unexpected owner, cid %llu-%llu != owner_cid %llu-%llu\n", - __func__, rbd_dev, cid.gid, cid.handle, - rbd_dev->owner_cid.gid, rbd_dev->owner_cid.handle); + pr_debug("%s rbd_dev %p unexpected owner, cid %llu-%llu != owner_cid %llu-%llu\n", + __func__, rbd_dev, cid.gid, cid.handle, + rbd_dev->owner_cid.gid, rbd_dev->owner_cid.handle); up_write(&rbd_dev->lock_rwsem); return; } @@ -4316,8 +4321,8 @@ static int rbd_handle_request_lock(struct rbd_device *rbd_dev, u8 struct_v, cid.handle = ceph_decode_64(p); } - dout("%s rbd_dev %p cid %llu-%llu\n", __func__, rbd_dev, cid.gid, - cid.handle); + pr_debug("%s rbd_dev %p cid %llu-%llu\n", + __func__, rbd_dev, cid.gid, cid.handle); if (rbd_cid_equal(&cid, &my_cid)) return result; @@ -4335,8 +4340,8 @@ static int rbd_handle_request_lock(struct rbd_device *rbd_dev, u8 struct_v, if (rbd_dev->lock_state == RBD_LOCK_STATE_LOCKED) { if (!rbd_dev->opts->exclusive) { - dout("%s rbd_dev %p queueing unlock_work\n", - __func__, rbd_dev); + pr_debug("%s rbd_dev %p queueing unlock_work\n", + __func__, rbd_dev); queue_work(rbd_dev->task_wq, &rbd_dev->unlock_work); } else { @@ -4380,14 +4385,14 @@ static void __rbd_acknowledge_notify(struct rbd_device *rbd_dev, static void rbd_acknowledge_notify(struct rbd_device *rbd_dev, u64 notify_id, u64 cookie) { - dout("%s rbd_dev %p\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p\n", __func__, rbd_dev); __rbd_acknowledge_notify(rbd_dev, notify_id, cookie, NULL); } static void rbd_acknowledge_notify_result(struct rbd_device *rbd_dev, u64 notify_id, u64 cookie, s32 result) { - dout("%s rbd_dev %p result %d\n", __func__, rbd_dev, result); + pr_debug("%s rbd_dev %p 
result %d\n", __func__, rbd_dev, result); __rbd_acknowledge_notify(rbd_dev, notify_id, cookie, &result); } @@ -4402,8 +4407,8 @@ static void rbd_watch_cb(void *arg, u64 notify_id, u64 cookie, u32 notify_op; int ret; - dout("%s rbd_dev %p cookie %llu notify_id %llu data_len %zu\n", - __func__, rbd_dev, cookie, notify_id, data_len); + pr_debug("%s rbd_dev %p cookie %llu notify_id %llu data_len %zu\n", + __func__, rbd_dev, cookie, notify_id, data_len); if (data_len) { ret = ceph_start_decoding(&p, end, 1, "NotifyMessage", &struct_v, &len); @@ -4420,7 +4425,7 @@ static void rbd_watch_cb(void *arg, u64 notify_id, u64 cookie, len = 0; } - dout("%s rbd_dev %p notify_op %u\n", __func__, rbd_dev, notify_op); + pr_debug("%s rbd_dev %p notify_op %u\n", __func__, rbd_dev, notify_op); switch (notify_op) { case RBD_NOTIFY_OP_ACQUIRED_LOCK: rbd_handle_acquired_lock(rbd_dev, struct_v, &p); @@ -4486,7 +4491,7 @@ static int __rbd_register_watch(struct rbd_device *rbd_dev) struct ceph_osd_linger_request *handle; rbd_assert(!rbd_dev->watch_handle); - dout("%s rbd_dev %p\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p\n", __func__, rbd_dev); handle = ceph_osdc_watch(osdc, &rbd_dev->header_oid, &rbd_dev->header_oloc, rbd_watch_cb, @@ -4507,7 +4512,7 @@ static void __rbd_unregister_watch(struct rbd_device *rbd_dev) int ret; rbd_assert(rbd_dev->watch_handle); - dout("%s rbd_dev %p\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p\n", __func__, rbd_dev); ret = ceph_osdc_unwatch(osdc, rbd_dev->watch_handle); if (ret) @@ -4536,7 +4541,7 @@ static int rbd_register_watch(struct rbd_device *rbd_dev) static void cancel_tasks_sync(struct rbd_device *rbd_dev) { - dout("%s rbd_dev %p\n", __func__, rbd_dev); + pr_debug("%s rbd_dev %p\n", __func__, rbd_dev); cancel_work_sync(&rbd_dev->acquired_lock_work); cancel_work_sync(&rbd_dev->released_lock_work); @@ -4602,7 +4607,7 @@ static void rbd_reregister_watch(struct work_struct *work) struct rbd_device, watch_dwork); int ret; - dout("%s rbd_dev %p\n", 
__func__, rbd_dev); + pr_debug("%s rbd_dev %p\n", __func__, rbd_dev); mutex_lock(&rbd_dev->watch_mutex); if (rbd_dev->watch_state != RBD_WATCH_STATE_ERROR) { @@ -4713,7 +4718,7 @@ static void rbd_queue_workfn(struct work_struct *work) /* Ignore/skip any zero-length requests */ if (!length) { - dout("%s: zero-length request\n", __func__); + pr_debug("%s: zero-length request\n", __func__); result = 0; goto err_img_request; } @@ -4732,8 +4737,9 @@ static void rbd_queue_workfn(struct work_struct *work) goto err_img_request; } - dout("%s rbd_dev %p img_req %p %s %llu~%llu\n", __func__, rbd_dev, - img_request, obj_op_name(op_type), offset, length); + pr_debug("%s rbd_dev %p img_req %p %s %llu~%llu\n", + __func__, rbd_dev, img_request, + obj_op_name(op_type), offset, length); if (op_type == OBJ_OP_DISCARD || op_type == OBJ_OP_ZEROOUT) result = rbd_img_fill_nodata(img_request, offset, length); @@ -4919,7 +4925,8 @@ static void rbd_dev_update_size(struct rbd_device *rbd_dev) if (test_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags) && !test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags)) { size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE; - dout("setting size to %llu sectors", (unsigned long long)size); + pr_debug("setting size to %llu sectors\n", + (unsigned long long)size); set_capacity(rbd_dev->disk, size); revalidate_disk(rbd_dev->disk); } @@ -5454,7 +5461,8 @@ static struct rbd_device *rbd_dev_create(struct rbd_client *rbdc, /* we have a ref from do_rbd_add() */ __module_get(THIS_MODULE); - dout("%s rbd_dev %p dev_id %d\n", __func__, rbd_dev, rbd_dev->dev_id); + pr_debug("%s rbd_dev %p dev_id %d\n", + __func__, rbd_dev, rbd_dev->dev_id); return rbd_dev; fail_dev_id: @@ -5489,7 +5497,7 @@ static int _rbd_dev_v2_snap_size(struct rbd_device *rbd_dev, u64 snap_id, &rbd_dev->header_oloc, "get_size", &snapid, sizeof(snapid), &size_buf, sizeof(size_buf)); - dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret); + pr_debug("%s: rbd_obj_method_sync returned %d\n", __func__, 
ret); if (ret < 0) return ret; if (ret < sizeof (size_buf)) @@ -5497,13 +5505,13 @@ static int _rbd_dev_v2_snap_size(struct rbd_device *rbd_dev, u64 snap_id, if (order) { *order = size_buf.order; - dout(" order %u", (unsigned int)*order); + pr_debug("order %u\n", (unsigned int)*order); } *snap_size = le64_to_cpu(size_buf.size); - dout(" snap_id 0x%016llx snap_size = %llu\n", - (unsigned long long)snap_id, - (unsigned long long)*snap_size); + pr_debug("snap_id 0x%016llx snap_size = %llu\n", + (unsigned long long)snap_id, + (unsigned long long)*snap_size); return 0; } @@ -5531,7 +5539,7 @@ static int rbd_dev_v2_object_prefix(struct rbd_device *rbd_dev) ret = rbd_obj_method_sync(rbd_dev, &rbd_dev->header_oid, &rbd_dev->header_oloc, "get_object_prefix", NULL, 0, reply_buf, size); - dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret); + pr_debug("%s: rbd_obj_method_sync returned %d\n", __func__, ret); if (ret < 0) goto out; @@ -5544,7 +5552,7 @@ static int rbd_dev_v2_object_prefix(struct rbd_device *rbd_dev) ret = PTR_ERR(rbd_dev->header.object_prefix); rbd_dev->header.object_prefix = NULL; } else { - dout(" object_prefix = %s\n", rbd_dev->header.object_prefix); + pr_debug("object_prefix = %s\n", rbd_dev->header.object_prefix); } out: kfree(reply_buf); @@ -5573,7 +5581,7 @@ static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id, &rbd_dev->header_oloc, "get_features", &features_in, sizeof(features_in), &features_buf, sizeof(features_buf)); - dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret); + pr_debug("%s: rbd_obj_method_sync returned %d\n", __func__, ret); if (ret < 0) return ret; if (ret < sizeof (features_buf)) @@ -5588,10 +5596,10 @@ static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id, *snap_features = le64_to_cpu(features_buf.features); - dout(" snap_id 0x%016llx features = 0x%016llx incompat = 0x%016llx\n", - (unsigned long long)snap_id, - (unsigned long long)*snap_features, - (unsigned long 
long)le64_to_cpu(features_buf.incompat)); + pr_debug("snap_id 0x%016llx features = 0x%016llx incompat = 0x%016llx\n", + (unsigned long long)snap_id, + (unsigned long long)*snap_features, + (unsigned long long)le64_to_cpu(features_buf.incompat)); return 0; } @@ -5795,9 +5803,9 @@ static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev) if (ret) goto out_err; - dout("%s pool_id %llu pool_ns %s image_id %s snap_id %llu has_overlap %d overlap %llu\n", - __func__, pii.pool_id, pii.pool_ns, pii.image_id, pii.snap_id, - pii.has_overlap, pii.overlap); + pr_debug("%s pool_id %llu pool_ns %s image_id %s snap_id %llu has_overlap %d overlap %llu\n", + __func__, pii.pool_id, pii.pool_ns, pii.image_id, pii.snap_id, + pii.has_overlap, pii.overlap); if (pii.pool_id == CEPH_NOPOOL || !pii.has_overlap) { /* @@ -5890,7 +5898,7 @@ static int rbd_dev_v2_striping_info(struct rbd_device *rbd_dev) ret = rbd_obj_method_sync(rbd_dev, &rbd_dev->header_oid, &rbd_dev->header_oloc, "get_stripe_unit_count", NULL, 0, &striping_info_buf, size); - dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret); + pr_debug("%s: rbd_obj_method_sync returned %d\n", __func__, ret); if (ret < 0) return ret; if (ret < size) @@ -5963,7 +5971,8 @@ static char *rbd_dev_image_name(struct rbd_device *rbd_dev) if (IS_ERR(image_name)) image_name = NULL; else - dout("%s: name is %s len is %zd\n", __func__, image_name, len); + pr_debug("%s: name is %s len is %zd\n", + __func__, image_name, len); out: kfree(reply_buf); kfree(image_id); @@ -6135,7 +6144,7 @@ static int rbd_dev_v2_snap_context(struct rbd_device *rbd_dev) ret = rbd_obj_method_sync(rbd_dev, &rbd_dev->header_oid, &rbd_dev->header_oloc, "get_snapcontext", NULL, 0, reply_buf, size); - dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret); + pr_debug("%s: rbd_obj_method_sync returned %d\n", __func__, ret); if (ret < 0) goto out; @@ -6172,8 +6181,8 @@ static int rbd_dev_v2_snap_context(struct rbd_device *rbd_dev) 
ceph_put_snap_context(rbd_dev->header.snapc); rbd_dev->header.snapc = snapc; - dout(" snap context seq = %llu, snap_count = %u\n", - (unsigned long long)seq, (unsigned int)snap_count); + pr_debug(" snap context seq = %llu, snap_count = %u\n", + (unsigned long long)seq, (unsigned int)snap_count); out: kfree(reply_buf); @@ -6200,7 +6209,7 @@ static const char *rbd_dev_v2_snap_name(struct rbd_device *rbd_dev, ret = rbd_obj_method_sync(rbd_dev, &rbd_dev->header_oid, &rbd_dev->header_oloc, "get_snapshot_name", &snapid, sizeof(snapid), reply_buf, size); - dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret); + pr_debug("%s: rbd_obj_method_sync returned %d\n", __func__, ret); if (ret < 0) { snap_name = ERR_PTR(ret); goto out; @@ -6212,8 +6221,8 @@ static const char *rbd_dev_v2_snap_name(struct rbd_device *rbd_dev, if (IS_ERR(snap_name)) goto out; - dout(" snap_id 0x%016llx snap_name = %s\n", - (unsigned long long)snap_id, snap_name); + pr_debug(" snap_id 0x%016llx snap_name = %s\n", + (unsigned long long)snap_id, snap_name); out: kfree(reply_buf); @@ -6320,7 +6329,7 @@ static int rbd_parse_param(struct fs_parameter *param, return ret; token = __fs_parse(&log, rbd_parameters, param, &result); - dout("%s fs_parse '%s' token %d\n", __func__, param->key, token); + pr_debug("%s fs_parse '%s' token %d\n", __func__, param->key, token); if (token < 0) { if (token == -ENOPARAM) return inval_plog(&log, "Unknown parameter '%s'", @@ -6409,7 +6418,7 @@ static int rbd_parse_options(char *options, struct rbd_parse_opts_ctx *pctx) char *key; int ret = 0; - dout("%s '%s'\n", __func__, options); + pr_debug("%s '%s'\n", __func__, options); while ((key = strsep(&options, ",")) != NULL) { if (*key) { struct fs_parameter param = { @@ -6692,7 +6701,7 @@ static int rbd_dev_image_id(struct rbd_device *rbd_dev) if (ret) return ret; - dout("rbd id object name is %s\n", oid.name); + pr_debug("rbd id object name is %s\n", oid.name); /* Response will be an encoded string, which includes a 
 	 * length */
 	size = sizeof (__le32) + RBD_IMAGE_ID_LEN_MAX;
@@ -6707,7 +6716,7 @@ static int rbd_dev_image_id(struct rbd_device *rbd_dev)
 	ret = rbd_obj_method_sync(rbd_dev, &oid, &rbd_dev->header_oloc,
				"get_id", NULL, 0, response, size);
-	dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret);
+	pr_debug("%s: rbd_obj_method_sync returned %d\n", __func__, ret);
 	if (ret == -ENOENT) {
 		image_id = kstrdup("", GFP_KERNEL);
 		ret = image_id ? 0 : -ENOMEM;
@@ -6725,7 +6734,7 @@ static int rbd_dev_image_id(struct rbd_device *rbd_dev)
 	if (!ret) {
 		rbd_dev->spec->image_id = image_id;
-		dout("image_id is %s\n", image_id);
+		pr_debug("image_id is %s\n", image_id);
 	}
 out:
 	kfree(response);
@@ -7031,8 +7040,8 @@ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
 	if (ret)
 		goto err_out_probe;
 
-	dout("discovered format %u image, header name is %s\n",
-	     rbd_dev->image_format, rbd_dev->header_oid.name);
+	pr_debug("discovered format %u image, header name is %s\n",
+		 rbd_dev->image_format, rbd_dev->header_oid.name);
 
 	return 0;
 
 err_out_probe:

From patchwork Mon Aug 17 01:34:09 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joe Perches
X-Patchwork-Id: 11716265
From: Joe Perches
To: Ilya Dryomov, Jeff Layton
Cc: ceph-devel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V2 6/6] ceph_debug: Remove now unused dout macro definitions
Date: Sun, 16 Aug 2020 18:34:09 -0700
X-Mailer: git-send-email 2.26.0
X-Mailing-List: ceph-devel@vger.kernel.org

All the uses have been converted to pr_debug, so remove these.
Signed-off-by: Joe Perches
---
 include/linux/ceph/ceph_debug.h | 30 ------------------------------
 1 file changed, 30 deletions(-)

diff --git a/include/linux/ceph/ceph_debug.h b/include/linux/ceph/ceph_debug.h
index d5a5da838caf..81c0d7195f1e 100644
--- a/include/linux/ceph/ceph_debug.h
+++ b/include/linux/ceph/ceph_debug.h
@@ -6,34 +6,4 @@
 
 #include
 
-#ifdef CONFIG_CEPH_LIB_PRETTYDEBUG
-
-/*
- * wrap pr_debug to include a filename:lineno prefix on each line.
- * this incurs some overhead (kernel size and execution time) due to
- * the extra function call at each call site.
- */
-
-# if defined(DEBUG) || defined(CONFIG_DYNAMIC_DEBUG)
-#  define dout(fmt, ...)					\
-	pr_debug("%.*s %12.12s:%-4d : " fmt,			\
-		 8 - (int)sizeof(KBUILD_MODNAME), "    ",	\
-		 kbasename(__FILE__), __LINE__, ##__VA_ARGS__)
-# else
-/* faux printk call just to see any compiler warnings. */
-#  define dout(fmt, ...)	do {				\
-		if (0)						\
-			printk(KERN_DEBUG fmt, ##__VA_ARGS__);	\
-	} while (0)
-# endif
-
-#else
-
-/*
- * or, just wrap pr_debug
- */
-# define dout(fmt, ...)	pr_debug(" " fmt, ##__VA_ARGS__)
-
-#endif
-
 #endif