
[v2] libfc: sanity check cpu number extracted from xid

Message ID 1467300756-7949-1-git-send-email-cleech@redhat.com (mailing list archive)
State Accepted, archived

Commit Message

Chris Leech June 30, 2016, 3:32 p.m. UTC
In the receive path libfc extracts a cpu number from the ox_id in the
Fibre Channel header and uses that to do a per_cpu_ptr conversion.
If, for some reason, a frame is received with an invalid ox_id,
per_cpu_ptr will return an invalid pointer and the libfc receive path
will panic the system trying to use it.

I'm currently looking at such a case, and I don't yet know why a
cpu number > nr_cpu_ids is appearing in an exchange id.  But adding a
sanity check in libfc prevents a system panic, and seems like a good
idea when dealing with frames coming in from the network.

Signed-off-by: Chris Leech <cleech@redhat.com>
---
 drivers/scsi/libfc/fc_exch.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
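
For context, fc_exch.c folds the originating CPU into the low bits of every
exchange id it allocates and recovers it on lookup with fc_cpu_mask.  The mask
is sized as a power of two covering nr_cpu_ids, and the ox_id of a received
frame is untrusted data off the wire, so the masked value is not guaranteed to
name a real CPU.  A rough sketch of the relationship (simplified and
illustrative only; the actual expressions live in fc_setup_exch_mgr() and the
exchange allocation path in fc_exch.c and may differ between kernel versions):

	/* Illustrative only -- not the exact driver code. */
	u16 fc_cpu_order = ilog2(roundup_pow_of_two(nr_cpu_ids));
	u16 fc_cpu_mask  = (1 << fc_cpu_order) - 1;

	/* Allocation tags the xid with the CPU that owns the per-cpu pool;
	 * fc_exch_find() masks that tag back out on the receive side.
	 */
	unsigned int cpu = raw_smp_processor_id();	/* allocating CPU */
	u16 index = 0;					/* slot in that CPU's pool */
	u16 xid = (index << fc_cpu_order) | cpu;	/* transmit/alloc side */
	u16 rx_cpu = xid & fc_cpu_mask;			/* receive/lookup side */

On a machine whose CPU count is not a power of two, masking alone cannot rule
out values at or above nr_cpu_ids, and a corrupted or hostile ox_id can carry
any bit pattern at all, so the decoded value has to be range checked before it
is handed to per_cpu_ptr(), which does no validation of its own.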

Comments

Johannes Thumshirn July 1, 2016, 8:09 a.m. UTC | #1
On Thu, Jun 30, 2016 at 08:32:36AM -0700, Chris Leech wrote:
> In the receive path libfc extracts a cpu number from the ox_id in the
> Fibre Channel header and uses that to do a per_cpu_ptr conversion.
> If, for some reason, a frame is received with an invalid ox_id,
> per_cpu_ptr will return an invalid pointer and the libfc receive path
> will panic the system trying to use it.
> 
> I'm currently looking at such a case, and I don't yet know why a
> cpu number > nr_cpu_ids is appearing in an exchange id.  But adding a
> sanity check in libfc prevents a system panic, and seems like a good
> idea when dealing with frames coming in from the network.
> 
> Signed-off-by: Chris Leech <cleech@redhat.com>
> ---
>  drivers/scsi/libfc/fc_exch.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/scsi/libfc/fc_exch.c b/drivers/scsi/libfc/fc_exch.c
> index 30f9ef0..e72673b 100644
> --- a/drivers/scsi/libfc/fc_exch.c
> +++ b/drivers/scsi/libfc/fc_exch.c
> @@ -908,9 +908,17 @@ static struct fc_exch *fc_exch_find(struct fc_exch_mgr *mp, u16 xid)
>  {
>  	struct fc_exch_pool *pool;
>  	struct fc_exch *ep = NULL;
> +	u16 cpu = xid & fc_cpu_mask;
> +
> +	if (cpu >= nr_cpu_ids || !cpu_possible(cpu)) {
> +		printk_ratelimited(KERN_ERR
> +			"libfc: lookup request for XID = %d, "
> +			"indicates invalid CPU %d\n", xid, cpu);
> +		return NULL;
> +	}
>  
>  	if ((xid >= mp->min_xid) && (xid <= mp->max_xid)) {
> -		pool = per_cpu_ptr(mp->pool, xid & fc_cpu_mask);
> +		pool = per_cpu_ptr(mp->pool, cpu);
>  		spin_lock_bh(&pool->lock);
>  		ep = fc_exch_ptr_get(pool, (xid - mp->min_xid) >> fc_cpu_order);
>  		if (ep) {


Acked-by: Johannes Thumshirn <jth@kernel.org>

@Martin, do you queue the libfc patches as well?
Martin K. Petersen July 14, 2016, 1:50 a.m. UTC | #2
>>>>> "Chris" == Chris Leech <cleech@redhat.com> writes:

Chris> In the receive path libfc extracts a cpu number from the ox_id in
Chris> the Fibre Channel header and uses that to do a per_cpu_ptr
Chris> conversion.  If, for some reason, a frame is received with an
Chris> invalid ox_id, per_cpu_ptr will return an invalid pointer and the
Chris> libfc receive path will panic the system trying to use it.

Applied to 4.8/scsi-queue.
Martin K. Petersen July 14, 2016, 1:51 a.m. UTC | #3
>>>>> "Johannes" == Johannes Thumshirn <jthumshirn@suse.de> writes:

Johannes> @Martin, do you queue the libfc patches as well?

Sure.

(Sorry about the delay, been on vacation).

Patch

diff --git a/drivers/scsi/libfc/fc_exch.c b/drivers/scsi/libfc/fc_exch.c
index 30f9ef0..e72673b 100644
--- a/drivers/scsi/libfc/fc_exch.c
+++ b/drivers/scsi/libfc/fc_exch.c
@@ -908,9 +908,17 @@ static struct fc_exch *fc_exch_find(struct fc_exch_mgr *mp, u16 xid)
 {
 	struct fc_exch_pool *pool;
 	struct fc_exch *ep = NULL;
+	u16 cpu = xid & fc_cpu_mask;
+
+	if (cpu >= nr_cpu_ids || !cpu_possible(cpu)) {
+		printk_ratelimited(KERN_ERR
+			"libfc: lookup request for XID = %d, "
+			"indicates invalid CPU %d\n", xid, cpu);
+		return NULL;
+	}
 
 	if ((xid >= mp->min_xid) && (xid <= mp->max_xid)) {
-		pool = per_cpu_ptr(mp->pool, xid & fc_cpu_mask);
+		pool = per_cpu_ptr(mp->pool, cpu);
 		spin_lock_bh(&pool->lock);
 		ep = fc_exch_ptr_get(pool, (xid - mp->min_xid) >> fc_cpu_order);
 		if (ep) {
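
With the check applied, an xid whose low bits decode to an impossible CPU is
rejected the same way an xid outside [min_xid, max_xid] already was:
fc_exch_find() returns NULL and the caller handles that just as it already had
to for an out-of-range xid, instead of per_cpu_ptr() being handed an
out-of-range index.  A condensed, illustrative view of the untrusted input
being guarded against (not the actual call site, which sits higher in the
receive path and picks the lookup key out of the frame header):

	/* Illustrative sketch only.  Every bit of the on-wire exchange id
	 * is network controlled by the time it reaches the lookup.
	 */
	struct fc_frame_header *fh = fc_frame_header_get(fp);
	u16 xid = ntohs(fh->fh_ox_id);
	u16 cpu = xid & fc_cpu_mask;

	if (cpu >= nr_cpu_ids || !cpu_possible(cpu))
		return NULL;			/* reject, don't panic */

	pool = per_cpu_ptr(mp->pool, cpu);	/* index now known to be valid */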