From patchwork Thu Feb 27 21:12:01 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: James Simmons
X-Patchwork-Id: 11410083
From: James Simmons
To: Andreas Dilger, Oleg Drokin, NeilBrown
Date: Thu, 27 Feb 2020 16:12:01 -0500
Message-Id: <1582838290-17243-254-git-send-email-jsimmons@infradead.org>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1582838290-17243-1-git-send-email-jsimmons@infradead.org>
References: <1582838290-17243-1-git-send-email-jsimmons@infradead.org>
Subject: [lustre-devel] [PATCH 253/622] lustre: ptlrpc: Fix style issues for sec_bulk.c
Cc: Arshad Hussain, Lustre Development List

From: Arshad Hussain

This patch fixes the style issues reported by checkpatch for the file
fs/lustre/ptlrpc/sec_bulk.c.

WC-bug-id: https://jira.whamcloud.com/browse/LU-6142
Lustre-commit: a294ea9a0e04 ("LU-6142 ptlrpc: Fix style issues for sec_bulk.c")
Signed-off-by: Arshad Hussain
Reviewed-on: https://review.whamcloud.com/34548
Reviewed-by: Andreas Dilger
Reviewed-by: Ben Evans
Signed-off-by: James Simmons
---
 fs/lustre/ptlrpc/sec_bulk.c | 71 ++++++++++++++++++++-------------------------
 1 file changed, 32 insertions(+), 39 deletions(-)

diff --git a/fs/lustre/ptlrpc/sec_bulk.c b/fs/lustre/ptlrpc/sec_bulk.c
index e170da1..d36230b 100644
--- a/fs/lustre/ptlrpc/sec_bulk.c
+++ b/fs/lustre/ptlrpc/sec_bulk.c
@@ -50,9 +50,9 @@
 
 #include "ptlrpc_internal.h"
 
-/****************************************
- * bulk encryption page pools           *
- ****************************************/
+/*
+ * bulk encryption page pools
+ */
 
 #define POINTERS_PER_PAGE	(PAGE_SIZE / sizeof(void *))
 #define PAGES_PER_POOL		(POINTERS_PER_PAGE)
@@ -63,19 +63,16 @@
 #define CACHE_QUIESCENT_PERIOD	(20)
 
 static struct ptlrpc_enc_page_pool {
-	/*
-	 * constants
-	 */
-	unsigned long epp_max_pages;   /* maximum pages can hold, const */
-	unsigned int epp_max_pools;   /* number of pools, const */
+	unsigned long epp_max_pages;	/* maximum pages can hold, const */
+	unsigned int epp_max_pools;	/* number of pools, const */
 
 	/*
 	 * wait queue in case of not enough free pages.
 	 */
-	wait_queue_head_t epp_waitq;   /* waiting threads */
-	unsigned int epp_waitqlen;    /* wait queue length */
-	unsigned long epp_pages_short; /* # of pages wanted of in-q users */
-	unsigned int epp_growing:1;   /* during adding pages */
+	wait_queue_head_t epp_waitq;	/* waiting threads */
+	unsigned int epp_waitqlen;	/* wait queue length */
+	unsigned long epp_pages_short;	/* # of pages wanted of in-q users */
+	unsigned int epp_growing:1;	/* during adding pages */
 
 	/*
 	 * indicating how idle the pools are, from 0 to MAX_IDLE_IDX
@@ -84,36 +81,32 @@
 	 * is idled for a while but the idle_idx might still be low if no
 	 * activities happened in the pools.
 	 */
-	unsigned long epp_idle_idx;
+	unsigned long epp_idle_idx;
 
 	/* last shrink time due to mem tight */
-	time64_t epp_last_shrink;
-	time64_t epp_last_access;
-
-	/*
-	 * in-pool pages bookkeeping
-	 */
-	spinlock_t epp_lock;	       /* protect following fields */
-	unsigned long epp_total_pages; /* total pages in pools */
-	unsigned long epp_free_pages;  /* current pages available */
-
-	/*
-	 * statistics
-	 */
-	unsigned long epp_st_max_pages; /* # of pages ever reached */
-	unsigned int epp_st_grows;	/* # of grows */
-	unsigned int epp_st_grow_fails; /* # of add pages failures */
-	unsigned int epp_st_shrinks;	/* # of shrinks */
-	unsigned long epp_st_access;	/* # of access */
-	unsigned long epp_st_missings;	/* # of cache missing */
-	unsigned long epp_st_lowfree;	/* lowest free pages reached */
-	unsigned int epp_st_max_wqlen;	/* highest waitqueue length */
-	ktime_t epp_st_max_wait;	/* in nanoseconds */
-	unsigned long epp_st_outofmem;	/* # of out of mem requests */
+	time64_t epp_last_shrink;
+	time64_t epp_last_access;
+
+	/* in-pool pages bookkeeping */
+	spinlock_t epp_lock;		/* protect following fields */
+	unsigned long epp_total_pages;	/* total pages in pools */
+	unsigned long epp_free_pages;	/* current pages available */
+
+	/* statistics */
+	unsigned long epp_st_max_pages;	/* # of pages ever reached */
+	unsigned int epp_st_grows;	/* # of grows */
+	unsigned int epp_st_grow_fails;	/* # of add pages failures */
+	unsigned int epp_st_shrinks;	/* # of shrinks */
+	unsigned long epp_st_access;	/* # of access */
+	unsigned long epp_st_missings;	/* # of cache missing */
+	unsigned long epp_st_lowfree;	/* lowest free pages reached */
+	unsigned int epp_st_max_wqlen;	/* highest waitqueue length */
+	ktime_t epp_st_max_wait;	/* in nanoseconds */
+	unsigned long epp_st_outofmem;	/* # of out of mem requests */
 
 	/*
-	 * pointers to pools
+	 * pointers to pools, may be vmalloc'd
 	 */
-	struct page ***epp_pools;
+	struct page ***epp_pools;
 } page_pools;
 
 /*
@@ -185,7 +178,7 @@ static void enc_pools_release_free_pages(long npages)
 
 	/* max pool index after the release */
 	p_idx_max1 = page_pools.epp_total_pages == 0 ? -1 :
-		     ((page_pools.epp_total_pages - 1) / PAGES_PER_POOL);
+		((page_pools.epp_total_pages - 1) / PAGES_PER_POOL);
 
 	p_idx = page_pools.epp_free_pages / PAGES_PER_POOL;
 	g_idx = page_pools.epp_free_pages % PAGES_PER_POOL;
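
For context on the struct being tidied up: epp_pools is a two-level table, an
array of pools where each pool is one page filled with struct page pointers,
so a flat page number maps to a (pool index, in-pool slot) pair through
PAGES_PER_POOL. The last hunk above keeps exactly that arithmetic (p_idx,
g_idx, p_idx_max1). What follows is only a minimal, self-contained sketch of
the index math, not part of the patch; the sample values and the assumed
PAGE_SIZE of 4096 are illustrative only.

/*
 * Hypothetical standalone sketch (not from the patch): how a flat page
 * number maps onto the two-level epp_pools layout used by sec_bulk.c.
 */
#include <stdio.h>

#ifndef PAGE_SIZE
#define PAGE_SIZE		4096UL	/* assumed for illustration */
#endif
#define POINTERS_PER_PAGE	(PAGE_SIZE / sizeof(void *))
#define PAGES_PER_POOL		(POINTERS_PER_PAGE)

int main(void)
{
	unsigned long free_pages = 1000;	/* stands in for epp_free_pages */
	unsigned long total_pages = 1536;	/* stands in for epp_total_pages */

	/* pool holding the first free page, and its slot inside that pool */
	unsigned long p_idx = free_pages / PAGES_PER_POOL;
	unsigned long g_idx = free_pages % PAGES_PER_POOL;

	/* highest pool index still populated; -1 once all pages are gone */
	long p_idx_max = total_pages == 0 ? -1 :
		(long)((total_pages - 1) / PAGES_PER_POOL);

	printf("page %lu -> pool %lu, slot %lu (highest pool in use: %ld)\n",
	       free_pages, p_idx, g_idx, p_idx_max);
	return 0;
}

With 4 KiB pages and 8-byte pointers, PAGES_PER_POOL is 512, so the sketch
prints pool 1, slot 488, and reports pool 2 as the highest pool still in use.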