
[1/1] block: fix trivial typos in comments

Message ID 31acb3239b7ab8989db0c9951e8740050aef0205.1616727528.git.tom.saeger@oracle.com (mailing list archive)
State New, archived
Series [1/1] block: fix trivial typos in comments

Commit Message

Tom Saeger March 26, 2021, 3:04 a.m. UTC
s/Additonal/Additional/
s/assocaited/associated/
s/assocaited/associated/
s/assocating/associating/
s/becasue/because/
s/configred/configured/
s/deactive/deactivate/
s/followings/following/
s/funtion/function/
s/heirarchy/hierarchy/
s/intiailized/initialized/
s/prefered/preferred/
s/readded/read/
s/Secion/Section/
s/soley/solely/

Cc: trivial@kernel.org
Signed-off-by: Tom Saeger <tom.saeger@oracle.com>
---
 block/bfq-iosched.c       |  4 ++--
 block/blk-cgroup-rwstat.c |  2 +-
 block/blk-cgroup.c        |  6 +++---
 block/blk-core.c          |  2 +-
 block/blk-iocost.c        | 12 ++++++------
 block/blk-iolatency.c     |  4 ++--
 block/blk-merge.c         |  6 +++---
 block/blk-mq.c            |  4 ++--
 block/blk-settings.c      |  2 +-
 block/blk-stat.h          |  2 +-
 block/blk.h               |  2 +-
 block/kyber-iosched.c     |  2 +-
 block/opal_proto.h        |  4 ++--
 block/partitions/ibm.c    |  2 +-
 block/partitions/sun.c    |  2 +-
 block/scsi_ioctl.c        |  2 +-
 16 files changed, 29 insertions(+), 29 deletions(-)


base-commit: 430a67f9d6169a7b3e328bceb2ef9542e4153c7c

Comments

Jens Axboe March 26, 2021, 3:41 p.m. UTC | #1
On 3/25/21 9:04 PM, Tom Saeger wrote:
> 
> s/Additonal/Additional/
> s/assocaited/associated/
> s/assocaited/associated/
> s/assocating/associating/
> s/becasue/because/
> s/configred/configured/
> s/deactive/deactivate/
> s/followings/following/
> s/funtion/function/
> s/heirarchy/hierarchy/
> s/intiailized/initialized/
> s/prefered/preferred/
> s/readded/read/
> s/Secion/Section/
> s/soley/solely/

While I'm generally happy to accept any patch that makes sense, the
recent influx of spelling fixes has me less than excited. They just
add complications to backports and stable patches, for example, and
I'd prefer not to take them for that reason alone.
Tom Saeger March 26, 2021, 3:45 p.m. UTC | #2
On Fri, Mar 26, 2021 at 09:41:49AM -0600, Jens Axboe wrote:
> On 3/25/21 9:04 PM, Tom Saeger wrote:
> > 
> > s/Additonal/Additional/
> > s/assocaited/associated/
> > s/assocaited/associated/
> > s/assocating/associating/
> > s/becasue/because/
> > s/configred/configured/
> > s/deactive/deactivate/
> > s/followings/following/
> > s/funtion/function/
> > s/heirarchy/hierarchy/
> > s/intiailized/initialized/
> > s/prefered/preferred/
> > s/readded/read/
> > s/Secion/Section/
> > s/soley/solely/
> 
> While I'm generally happy to accept any patch that makes sense, the
> recent influx of spelling fixes has me less than excited. They just
> add complications to backports and stable patches, for example, and
> I'd prefer not to take them for that reason alone.

Nod.

In that case - perhaps adding these entries to scripts/spelling.txt
would at least catch some going forward?

I can send that.
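
For reference, entries in scripts/spelling.txt are plain
misspelling||correction pairs, so the additions would look something
like this (a sketch; a few may already be in the file, and
context-dependent ones such as s/readded/read/ probably don't belong
in a generic list):

    additonal||additional
    assocaited||associated
    assocating||associating
    becasue||because
    configred||configured
    funtion||function
    heirarchy||hierarchy
    intiailized||initialized
    prefered||preferred
    soley||solely

A quick grep shows whether a given entry is already present:

    $ grep -F 'becasue||' scripts/spelling.txt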

> 
> -- 
> Jens Axboe
>
Jens Axboe March 26, 2021, 3:46 p.m. UTC | #3
On 3/26/21 9:45 AM, Tom Saeger wrote:
> On Fri, Mar 26, 2021 at 09:41:49AM -0600, Jens Axboe wrote:
>> On 3/25/21 9:04 PM, Tom Saeger wrote:
>>>
>>> s/Additonal/Additional/
>>> s/assocaited/associated/
>>> s/assocaited/associated/
>>> s/assocating/associating/
>>> s/becasue/because/
>>> s/configred/configured/
>>> s/deactive/deactivate/
>>> s/followings/following/
>>> s/funtion/function/
>>> s/heirarchy/hierarchy/
>>> s/intiailized/initialized/
>>> s/prefered/preferred/
>>> s/readded/read/
>>> s/Secion/Section/
>>> s/soley/solely/
>>
>> While I'm generally happy to accept any patch that makes sense, the
>> recent influx of spelling fixes has me less than excited. They just
>> add complications to backports and stable patches, for example, and
>> I'd prefer not to take them for that reason alone.
> 
> Nod.
> 
> In that case - perhaps adding these entries to scripts/spelling.txt
> would at least catch some going forward?
> 
> I can send that.

That seems like a good idea.
Tom Saeger March 29, 2021, 6:13 p.m. UTC | #4
On Fri, Mar 26, 2021 at 09:46:36AM -0600, Jens Axboe wrote:
> On 3/26/21 9:45 AM, Tom Saeger wrote:
> > On Fri, Mar 26, 2021 at 09:41:49AM -0600, Jens Axboe wrote:
> >> On 3/25/21 9:04 PM, Tom Saeger wrote:
> >>>
> >>> s/Additonal/Additional/
> >>> s/assocaited/associated/
> >>> s/assocaited/associated/
> >>> s/assocating/associating/
> >>> s/becasue/because/
> >>> s/configred/configured/
> >>> s/deactive/deactivate/
> >>> s/followings/following/
> >>> s/funtion/function/
> >>> s/heirarchy/hierarchy/
> >>> s/intiailized/initialized/
> >>> s/prefered/preferred/
> >>> s/readded/read/
> >>> s/Secion/Section/
> >>> s/soley/solely/
> >>
> >> While I'm generally happy to accept any patch that makes sense, the
> >> recent influx of spelling fixes has me less than excited. They just
> >> add complications to backports and stable patches, for example, and
> >> I'd prefer not to take them for that reason alone.
> > 
> > Nod.
> > 
> > In that case - perhaps adding these entries to scripts/spelling.txt
> > would at least catch some going forward?
> > 
> > I can send that.
> 
> That seems like a good idea.
> 
> -- 
> Jens Axboe
> 

What would be the process that avoids the backport complications?

Perhaps a few of these could be fixed when other changes are made to these files?
Granted, these are trivial fixes; I'm just trying to understand the process.

Example workflow based on a recent block merge upstream:

$ git log -n1 abed516ecd02ceb30fbd091e9b26205ea3192c65 --oneline
    abed516ecd02 Merge tag 'block-5.12-2021-03-27' of git://git.kernel.dk/linux-block

# get files changed by recent merge (or use commit range of interest)
$ HID=abed516ecd02 ; git diff --name-only "$(git merge-base "${HID}^2" "${HID}^1")".."${HID}^2"
    block/bio.c
    block/blk-merge.c
    block/partitions/core.c
    fs/block_dev.c

# for each file in the recent merge, run checkpatch in typo-fix mode
$ HID=abed516ecd02 ; git diff --name-only "$(git merge-base "${HID}^2" "${HID}^1")".."${HID}^2" \
| xargs ./scripts/checkpatch.pl --strict --fix --fix-inplace --types TYPO_SPELLING -f | grep -A4 -F "CHECK:"
    CHECK: 'splitted' may be misspelled - perhaps 'split'?
    #344: FILE: block/blk-merge.c:344:
    +               /* there isn't chance to merge the splitted bio */
                                                       ^^^^^^^^

    --
    CHECK: 'beeing' may be misspelled - perhaps 'being'?
    #934: FILE: fs/block_dev.c:934:
    + * or NULL if the inode is already beeing freed.
                                        ^^^^^^

# what changed?
$ git status -su
     M block/blk-merge.c
     M fs/block_dev.c


Something similar could be done for other checkpatch modes or other linters.
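
As a rough, untested sketch (the helper name here is made up), the
steps above could be wrapped in a small shell function parameterized
by merge commit and checkpatch type:

    # Sketch: run checkpatch --fix over the files touched by a merge commit.
    # Usage: checkpatch_merge <merge-commit> [checkpatch-types]
    checkpatch_merge() {
        local hid="${1:?merge commit required}"
        local types="${2:-TYPO_SPELLING}"
        git diff --name-only \
            "$(git merge-base "${hid}^2" "${hid}^1")".."${hid}^2" \
            | xargs ./scripts/checkpatch.pl --strict --fix --fix-inplace \
                --types "${types}" -f
    }

Then, for example:

    $ checkpatch_merge abed516ecd02 TYPO_SPELLING
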
Just an idea - thoughts?

Regards,

--Tom

Patch

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 9b7678ad5830..936e48bdecaf 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1798,7 +1798,7 @@  static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
 	 * As for throughput, we ask bfq_better_to_idle() whether we
 	 * still need to plug I/O dispatching. If bfq_better_to_idle()
 	 * says no, then plugging is not needed any longer, either to
-	 * boost throughput or to perserve service guarantees. Then
+	 * boost throughput or to preserve service guarantees. Then
 	 * the best option is to stop plugging I/O, as not doing so
 	 * would certainly lower throughput. We may end up in this
 	 * case if: (1) upon a dispatch attempt, we detected that it
@@ -5050,7 +5050,7 @@  void bfq_put_queue(struct bfq_queue *bfqq)
 		 * by the fact that bfqq has just been merged.
 		 * 2) burst_size is greater than 0, to handle
 		 * unbalanced decrements. Unbalanced decrements may
-		 * happen in te following case: bfqq is inserted into
+		 * happen in the following case: bfqq is inserted into
 		 * the current burst list--without incrementing
 		 * bust_size--because of a split, but the current
 		 * burst list is not the burst list bfqq belonged to
diff --git a/block/blk-cgroup-rwstat.c b/block/blk-cgroup-rwstat.c
index 3304e841df7c..0039e4756fc3 100644
--- a/block/blk-cgroup-rwstat.c
+++ b/block/blk-cgroup-rwstat.c
@@ -37,7 +37,7 @@  EXPORT_SYMBOL_GPL(blkg_rwstat_exit);
  * @pd: policy private data of interest
  * @rwstat: rwstat to print
  *
- * Print @rwstat to @sf for the device assocaited with @pd.
+ * Print @rwstat to @sf for the device associated with @pd.
  */
 u64 __blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
 			 const struct blkg_rwstat_sample *rwstat)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index a317c03d40f6..e5dc2e13487f 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -143,7 +143,7 @@  static void blkg_async_bio_workfn(struct work_struct *work)
  * @q: request_queue the new blkg is associated with
  * @gfp_mask: allocation mask to use
  *
- * Allocate a new blkg assocating @blkcg and @q.
+ * Allocate a new blkg associating @blkcg and @q.
  */
 static struct blkcg_gq *blkg_alloc(struct blkcg *blkcg, struct request_queue *q,
 				   gfp_t gfp_mask)
@@ -526,7 +526,7 @@  EXPORT_SYMBOL_GPL(blkcg_print_blkgs);
  * @pd: policy private data of interest
  * @v: value to print
  *
- * Print @v to @sf for the device assocaited with @pd.
+ * Print @v to @sf for the device associated with @pd.
  */
 u64 __blkg_prfill_u64(struct seq_file *sf, struct blkg_policy_data *pd, u64 v)
 {
@@ -715,7 +715,7 @@  EXPORT_SYMBOL_GPL(blkg_conf_prep);
 
 /**
  * blkg_conf_finish - finish up per-blkg config update
- * @ctx: blkg_conf_ctx intiailized by blkg_conf_prep()
+ * @ctx: blkg_conf_ctx initialized by blkg_conf_prep()
  *
  * Finish up after per-blkg config update.  This function must be paired
  * with blkg_conf_prep().
diff --git a/block/blk-core.c b/block/blk-core.c
index fc60ff208497..e866e58214e2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1035,7 +1035,7 @@  blk_qc_t submit_bio_noacct(struct bio *bio)
 	/*
 	 * We only want one ->submit_bio to be active at a time, else stack
 	 * usage with stacked devices could be a problem.  Use current->bio_list
-	 * to collect a list of requests submited by a ->submit_bio method while
+	 * to collect a list of requests submitted by a ->submit_bio method while
 	 * it is active, and then process them after it returned.
 	 */
 	if (current->bio_list) {
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 98d656bdb42b..282903250530 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -111,7 +111,7 @@ 
  * busy signal.
  *
  * As devices can have deep queues and be unfair in how the queued commands
- * are executed, soley depending on rq wait may not result in satisfactory
+ * are executed, solely depending on rq wait may not result in satisfactory
  * control quality.  For a better control quality, completion latency QoS
  * parameters can be configured so that the device is considered saturated
  * if N'th percentile completion latency rises above the set point.
@@ -851,7 +851,7 @@  static int ioc_autop_idx(struct ioc *ioc)
 }
 
 /*
- * Take the followings as input
+ * Take the following as input
  *
  *  @bps	maximum sequential throughput
  *  @seqiops	maximum sequential 4k iops
@@ -1090,7 +1090,7 @@  static void __propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse,
 		/* update the level sums */
 		parent->child_active_sum += (s32)(active - child->active);
 		parent->child_inuse_sum += (s32)(inuse - child->inuse);
-		/* apply the udpates */
+		/* apply the updates */
 		child->active = active;
 		child->inuse = inuse;
 
@@ -1144,7 +1144,7 @@  static void current_hweight(struct ioc_gq *iocg, u32 *hw_activep, u32 *hw_inusep
 	u32 hwa, hwi;
 	int ioc_gen;
 
-	/* hot path - if uptodate, use cached */
+	/* hot path - if up-to-date, use cached */
 	ioc_gen = atomic_read(&ioc->hweight_gen);
 	if (ioc_gen == iocg->hweight_gen)
 		goto out;
@@ -1241,7 +1241,7 @@  static bool iocg_activate(struct ioc_gq *iocg, struct ioc_now *now)
 
 	/*
 	 * If seem to be already active, just update the stamp to tell the
-	 * timer that we're still active.  We don't mind occassional races.
+	 * timer that we're still active.  We don't mind occasional races.
 	 */
 	if (!list_empty(&iocg->active_list)) {
 		ioc_now(ioc, now);
@@ -2122,7 +2122,7 @@  static void ioc_forgive_debts(struct ioc *ioc, u64 usage_us_sum, int nr_debtors,
 }
 
 /*
- * Check the active iocgs' state to avoid oversleeping and deactive
+ * Check the active iocgs' state to avoid oversleeping and deactivate
  * idle iocgs.
  *
  * Since waiters determine the sleep durations based on the vrate
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 81be0096411d..1b2a537ed2fa 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -18,7 +18,7 @@ 
  * every configured node, and each configured node has it's own independent
  * queue depth.  This means that we only care about our latency targets at the
  * peer level.  Some group at the bottom of the hierarchy isn't going to affect
- * a group at the end of some other path if we're only configred at leaf level.
+ * a group at the end of some other path if we're only configured at leaf level.
  *
  * Consider the following
  *
@@ -34,7 +34,7 @@ 
  * throttle "unloved", but nobody else.
  *
  * In this example "fast", "slow", and "normal" will be the only groups actually
- * accounting their io latencies.  We have to walk up the heirarchy to the root
+ * accounting their io latencies.  We have to walk up the hierarchy to the root
  * on every submit and complete so we can do the appropriate stat recording and
  * adjust the queue depth of ourselves if needed.
  *
diff --git a/block/blk-merge.c b/block/blk-merge.c
index ffb4aa0ea68b..1b44f42b19d2 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -283,7 +283,7 @@  static struct bio *blk_bio_segment_split(struct request_queue *q,
 	/*
 	 * Bio splitting may cause subtle trouble such as hang when doing sync
 	 * iopoll in direct IO routine. Given performance gain of iopoll for
-	 * big IO can be trival, disable iopoll when split needed.
+	 * big IO can be trivial, disable iopoll when split needed.
 	 */
 	bio->bi_opf &= ~REQ_HIPRI;
 
@@ -341,7 +341,7 @@  void __blk_queue_split(struct bio **bio, unsigned int *nr_segs)
 	}
 
 	if (split) {
-		/* there isn't chance to merge the splitted bio */
+		/* there isn't chance to merge the split bio */
 		split->bi_opf |= REQ_NOMERGE;
 
 		bio_chain(split, *bio);
@@ -674,7 +674,7 @@  void blk_rq_set_mixed_merge(struct request *rq)
 	/*
 	 * @rq will no longer represent mixable attributes for all the
 	 * contained bios.  It will just track those of the first one.
-	 * Distributes the attributs to each bio.
+	 * Distributes the attributes to each bio.
 	 */
 	for (bio = rq->bio; bio; bio = bio->bi_next) {
 		WARN_ON_ONCE((bio->bi_opf & REQ_FAILFAST_MASK) &&
diff --git a/block/blk-mq.c b/block/blk-mq.c
index d4d7c1caa439..5d5689ae4398 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -915,7 +915,7 @@  static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
 
 	/*
 	 * Just do a quick check if it is expired before locking the request in
-	 * so we're not unnecessarilly synchronizing across CPUs.
+	 * so we're not unnecessarily synchronizing across CPUs.
 	 */
 	if (!blk_mq_req_expired(rq, next))
 		return true;
@@ -1634,7 +1634,7 @@  static bool blk_mq_has_sqsched(struct request_queue *q)
 }
 
 /*
- * Return prefered queue to dispatch from (if any) for non-mq aware IO
+ * Return preferred queue to dispatch from (if any) for non-mq aware IO
  * scheduler.
  */
 static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
diff --git a/block/blk-settings.c b/block/blk-settings.c
index b4aa2f37fab6..63168b529360 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -749,7 +749,7 @@  void blk_queue_virt_boundary(struct request_queue *q, unsigned long mask)
 	/*
 	 * Devices that require a virtual boundary do not support scatter/gather
 	 * I/O natively, but instead require a descriptor list entry for each
-	 * page (which might not be idential to the Linux PAGE_SIZE).  Because
+	 * page (which might not be identical to the Linux PAGE_SIZE).  Because
 	 * of that they are not limited by our notion of "segment size".
 	 */
 	if (mask)
diff --git a/block/blk-stat.h b/block/blk-stat.h
index 17b47a86eefb..0541f7228ef0 100644
--- a/block/blk-stat.h
+++ b/block/blk-stat.h
@@ -105,7 +105,7 @@  void blk_stat_add_callback(struct request_queue *q,
  * @cb: The callback.
  *
  * When this returns, the callback is not running on any CPUs and will not be
- * called again unless readded.
+ * called again unless read.
  */
 void blk_stat_remove_callback(struct request_queue *q,
 			      struct blk_stat_callback *cb);
diff --git a/block/blk.h b/block/blk.h
index 3b53e44b967e..84f52f113231 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -252,7 +252,7 @@  static inline void req_set_nomerge(struct request_queue *q, struct request *req)
 }
 
 /*
- * The max size one bio can handle is UINT_MAX becasue bvec_iter.bi_size
+ * The max size one bio can handle is UINT_MAX because bvec_iter.bi_size
  * is defined as 'unsigned int', meantime it has to aligned to with logical
  * block size which is the minimum accepted unit by hardware.
  */
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 33d34d69cade..f7b23bdf4866 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -142,7 +142,7 @@  struct kyber_cpu_latency {
  */
 struct kyber_ctx_queue {
 	/*
-	 * Used to ensure operations on rq_list and kcq_map to be an atmoic one.
+	 * Used to ensure operations on rq_list and kcq_map to be an atomic one.
 	 * Also protect the rqs on rq_list when merge.
 	 */
 	spinlock_t lock;
diff --git a/block/opal_proto.h b/block/opal_proto.h
index b486b3ec7dc4..20190c39fe3a 100644
--- a/block/opal_proto.h
+++ b/block/opal_proto.h
@@ -212,7 +212,7 @@  enum opal_parameter {
 
 /* Packets derived from:
  * TCG_Storage_Architecture_Core_Spec_v2.01_r1.00
- * Secion: 3.2.3 ComPackets, Packets & Subpackets
+ * Section: 3.2.3 ComPackets, Packets & Subpackets
  */
 
 /* Comm Packet (header) for transmissions. */
@@ -405,7 +405,7 @@  struct d0_single_user_mode {
 };
 
 /*
- * Additonal Datastores feature
+ * Additional Datastores feature
  *
  * code == 0x0202
  */
diff --git a/block/partitions/ibm.c b/block/partitions/ibm.c
index 4b044e620d35..f0b51bf892d6 100644
--- a/block/partitions/ibm.c
+++ b/block/partitions/ibm.c
@@ -212,7 +212,7 @@  static int find_lnx1_partitions(struct parsed_partitions *state,
 		size = label->lnx.formatted_blocks * secperblk;
 	} else {
 		/*
-		 * Formated w/o large volume support. If the sanity check
+		 * Formatted w/o large volume support. If the sanity check
 		 * 'size based on geo == size based on i_size' is true, then
 		 * we can safely assume that we know the formatted size of
 		 * the disk, otherwise we need additional information
diff --git a/block/partitions/sun.c b/block/partitions/sun.c
index 47dc53eccf77..c67b4d145270 100644
--- a/block/partitions/sun.c
+++ b/block/partitions/sun.c
@@ -101,7 +101,7 @@  int sun_partition(struct parsed_partitions *state)
 
 	/*
 	 * So that old Linux-Sun partitions continue to work,
-	 * alow the VTOC to be used under the additional condition ...
+	 * allow the VTOC to be used under the additional condition ...
 	 */
 	use_vtoc = use_vtoc || !(label->vtoc.sanity ||
 				 label->vtoc.version || label->vtoc.nparts);
diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
index 6599bac0a78c..384aa4837acb 100644
--- a/block/scsi_ioctl.c
+++ b/block/scsi_ioctl.c
@@ -461,7 +461,7 @@  int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode,
 	if (err)
 		goto error;
 
-	/* default.  possible overriden later */
+	/* default.  possible overridden later */
 	req->retries = 5;
 
 	switch (opcode) {