From patchwork Fri Mar 13 02:37:14 2020
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11435859
From: Bart Van Assche
To: "Martin K. Petersen", "James E. J. Bottomley"
Cc: linux-scsi@vger.kernel.org, Christoph Hellwig, Andy Shevchenko,
    Greg Kroah-Hartman, Bart Van Assche, Harvey Harrison, Ingo Molnar,
    Thomas Gleixner, "H. Peter Anvin", Andrew Morton
Subject: [PATCH v2 1/5] linux/unaligned/byteshift.h: Remove superfluous casts
Date: Thu, 12 Mar 2020 19:37:14 -0700
Message-Id: <20200313023718.21830-2-bvanassche@acm.org>
In-Reply-To: <20200313023718.21830-1-bvanassche@acm.org>
References: <20200313023718.21830-1-bvanassche@acm.org>

The C language converts a void pointer into a non-void pointer implicitly.
Remove the explicit void pointer to non-void pointer casts because they are
superfluous.

Cc: Harvey Harrison
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: H. Peter Anvin
Cc: Andrew Morton
Signed-off-by: Bart Van Assche
Reviewed-by: Christoph Hellwig
---
 include/linux/unaligned/be_byteshift.h | 6 +++---
 include/linux/unaligned/le_byteshift.h | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/unaligned/be_byteshift.h b/include/linux/unaligned/be_byteshift.h
index 8bdb8fa01bd4..c43ff5918c8a 100644
--- a/include/linux/unaligned/be_byteshift.h
+++ b/include/linux/unaligned/be_byteshift.h
@@ -40,17 +40,17 @@ static inline void __put_unaligned_be64(u64 val, u8 *p)

 static inline u16 get_unaligned_be16(const void *p)
 {
-        return __get_unaligned_be16((const u8 *)p);
+        return __get_unaligned_be16(p);
 }

 static inline u32 get_unaligned_be32(const void *p)
 {
-        return __get_unaligned_be32((const u8 *)p);
+        return __get_unaligned_be32(p);
 }

 static inline u64 get_unaligned_be64(const void *p)
 {
-        return __get_unaligned_be64((const u8 *)p);
+        return __get_unaligned_be64(p);
 }

 static inline void put_unaligned_be16(u16 val, void *p)
diff --git a/include/linux/unaligned/le_byteshift.h b/include/linux/unaligned/le_byteshift.h
index 1628b75866f0..2248dcb0df76 100644
--- a/include/linux/unaligned/le_byteshift.h
+++ b/include/linux/unaligned/le_byteshift.h
@@ -40,17 +40,17 @@ static inline void __put_unaligned_le64(u64 val, u8 *p)

 static inline u16 get_unaligned_le16(const void *p)
 {
-        return __get_unaligned_le16((const u8 *)p);
+        return __get_unaligned_le16(p);
 }

 static inline u32 get_unaligned_le32(const void *p)
 {
-        return __get_unaligned_le32((const u8 *)p);
+        return __get_unaligned_le32(p);
 }

 static inline u64 get_unaligned_le64(const void *p)
 {
-        return __get_unaligned_le64((const u8 *)p);
+        return __get_unaligned_le64(p);
 }

 static inline void put_unaligned_le16(u16 val, void *p)
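As background, C converts a void * argument to any other object pointer type
implicitly, which is what makes the casts removed above redundant. A minimal
userspace sketch (not part of the patch; the helper names are made up):

/* A void * parameter is passed straight through to a u8 * parameter. */
#include <stdint.h>

static uint16_t load_be16(const uint8_t *p)
{
        return (uint16_t)(p[0] << 8 | p[1]);
}

static uint16_t load_be16_void(const void *p)
{
        return load_be16(p);    /* no (const uint8_t *) cast required */
}
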
From patchwork Fri Mar 13 02:37:15 2020
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11435861
From: Bart Van Assche
To: "Martin K. Petersen", "James E. J. Bottomley"
Cc: linux-scsi@vger.kernel.org, Christoph Hellwig, Andy Shevchenko,
    Greg Kroah-Hartman, Bart Van Assche, Mark Salter, Aurelien Jacquiot
Subject: [PATCH v2 2/5] c6x: Include <linux/unaligned/generic.h> instead of duplicating it
Date: Thu, 12 Mar 2020 19:37:15 -0700
Message-Id: <20200313023718.21830-3-bvanassche@acm.org>
In-Reply-To: <20200313023718.21830-1-bvanassche@acm.org>
References: <20200313023718.21830-1-bvanassche@acm.org>

Use the generic __{get,put}_unaligned_[bl]e() definitions instead of
duplicating them. Since a later patch in this series adds more definitions
to <linux/unaligned/generic.h>, this patch ensures that those definitions
only have to be added once. See also commit a7f626c1948a ("C6X: headers")
and commit 6510d41954dc ("kernel: Move arches to use common unaligned
access").

Acked-by: Mark Salter
Cc: Aurelien Jacquiot
Signed-off-by: Bart Van Assche
---
 arch/c6x/include/asm/unaligned.h | 65 +-------------------------------
 1 file changed, 1 insertion(+), 64 deletions(-)

diff --git a/arch/c6x/include/asm/unaligned.h b/arch/c6x/include/asm/unaligned.h
index b56ba7110f5a..d628cc170564 100644
--- a/arch/c6x/include/asm/unaligned.h
+++ b/arch/c6x/include/asm/unaligned.h
@@ -10,6 +10,7 @@
 #define _ASM_C6X_UNALIGNED_H

 #include
+#include <linux/unaligned/generic.h>

 /*
  * The C64x+ can do unaligned word and dword accesses in hardware
@@ -100,68 +101,4 @@ static inline void put_unaligned64(u64 val, const void *p)

 #endif

-/*
- * Cause a link-time error if we try an unaligned access other than
- * 1,2,4 or 8 bytes long
- */
-extern int __bad_unaligned_access_size(void);
-
-#define __get_unaligned_le(ptr) (typeof(*(ptr)))({ \
-        sizeof(*(ptr)) == 1 ? *(ptr) : \
-        (sizeof(*(ptr)) == 2 ? get_unaligned_le16((ptr)) : \
-        (sizeof(*(ptr)) == 4 ? get_unaligned_le32((ptr)) : \
-        (sizeof(*(ptr)) == 8 ? get_unaligned_le64((ptr)) : \
-        __bad_unaligned_access_size()))); \
-        })
-
-#define __get_unaligned_be(ptr) (__force typeof(*(ptr)))({ \
-        sizeof(*(ptr)) == 1 ? *(ptr) : \
-        (sizeof(*(ptr)) == 2 ? get_unaligned_be16((ptr)) : \
-        (sizeof(*(ptr)) == 4 ? get_unaligned_be32((ptr)) : \
-        (sizeof(*(ptr)) == 8 ? get_unaligned_be64((ptr)) : \
-        __bad_unaligned_access_size()))); \
-        })
-
-#define __put_unaligned_le(val, ptr) ({ \
-        void *__gu_p = (ptr); \
-        switch (sizeof(*(ptr))) { \
-        case 1: \
-                *(u8 *)__gu_p = (__force u8)(val); \
-                break; \
-        case 2: \
-                put_unaligned_le16((__force u16)(val), __gu_p); \
-                break; \
-        case 4: \
-                put_unaligned_le32((__force u32)(val), __gu_p); \
-                break; \
-        case 8: \
-                put_unaligned_le64((__force u64)(val), __gu_p); \
-                break; \
-        default: \
-                __bad_unaligned_access_size(); \
-                break; \
-        } \
-        (void)0; })
-
-#define __put_unaligned_be(val, ptr) ({ \
-        void *__gu_p = (ptr); \
-        switch (sizeof(*(ptr))) { \
-        case 1: \
-                *(u8 *)__gu_p = (__force u8)(val); \
-                break; \
-        case 2: \
-                put_unaligned_be16((__force u16)(val), __gu_p); \
-                break; \
-        case 4: \
-                put_unaligned_be32((__force u32)(val), __gu_p); \
-                break; \
-        case 8: \
-                put_unaligned_be64((__force u64)(val), __gu_p); \
-                break; \
-        default: \
-                __bad_unaligned_access_size(); \
-                break; \
-        } \
-        (void)0; })
-
 #endif /* _ASM_C6X_UNALIGNED_H */
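For context, the removed __get_unaligned_le()-style macros dispatch on the
operand size at compile time; the generic header provides the same behaviour,
so the per-architecture copy is redundant. A rough userspace sketch of that
dispatch pattern (illustrative only; the demo_* names are invented, memcpy()
stands in for the kernel byteshift helpers, and a little-endian host is
assumed):

#include <stdint.h>
#include <string.h>

static uint16_t demo_le16(const void *p) { uint16_t v; memcpy(&v, p, sizeof(v)); return v; }
static uint32_t demo_le32(const void *p) { uint32_t v; memcpy(&v, p, sizeof(v)); return v; }
static uint64_t demo_le64(const void *p) { uint64_t v; memcpy(&v, p, sizeof(v)); return v; }

/* Picks an accessor from sizeof(*(ptr)) at compile time, like the macro
 * removed from the c6x header. Unlike the kernel macro, sizes other than
 * 1/2/4/8 are not rejected here.
 * Example: given a packed struct member uint32_t v at an odd offset,
 * demo_get_unaligned_le(&s->v) reads it without an aligned load.
 */
#define demo_get_unaligned_le(ptr)                                        \
        ((__typeof__(*(ptr)))(sizeof(*(ptr)) == 1 ? *(const uint8_t *)(ptr) : \
                              sizeof(*(ptr)) == 2 ? demo_le16(ptr) :      \
                              sizeof(*(ptr)) == 4 ? demo_le32(ptr) :      \
                                                    demo_le64(ptr)))
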
Bottomley" Cc: linux-scsi@vger.kernel.org, Christoph Hellwig , Andy Shevchenko , Greg Kroah-Hartman , Bart Van Assche , Keith Busch , Sagi Grimberg , Jens Axboe , Felipe Balbi , Harvey Harrison , Ingo Molnar , Thomas Gleixner , "H . Peter Anvin" , Andrew Morton Subject: [PATCH v2 3/5] treewide: Consolidate {get,put}_unaligned_[bl]e24() definitions Date: Thu, 12 Mar 2020 19:37:16 -0700 Message-Id: <20200313023718.21830-4-bvanassche@acm.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200313023718.21830-1-bvanassche@acm.org> References: <20200313023718.21830-1-bvanassche@acm.org> MIME-Version: 1.0 Sender: linux-scsi-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Move the get_unaligned_be24(), get_unaligned_le24() and put_unaligned_le24() definitions from various drivers into include/linux/unaligned/generic.h. Add a put_unaligned_be24() implementation. Cc: Christoph Hellwig Cc: Keith Busch Cc: Sagi Grimberg Cc: Jens Axboe Cc: Felipe Balbi Cc: Harvey Harrison Cc: Martin K. Petersen Cc: Ingo Molnar Cc: Thomas Gleixner Cc: H. Peter Anvin Cc: Andrew Morton Signed-off-by: Bart Van Assche Reviewed-by: Andy Shevchenko Reviewed-by: Christoph Hellwig Acked-by: Felipe Balbi Reviewed-by: Greg Kroah-Hartman # For USB --- drivers/nvme/host/rdma.c | 8 ---- drivers/nvme/target/rdma.c | 6 --- drivers/usb/gadget/function/f_mass_storage.c | 1 + drivers/usb/gadget/function/storage_common.h | 5 --- include/linux/unaligned/generic.h | 46 ++++++++++++++++++++ include/target/target_core_backend.h | 6 --- 6 files changed, 47 insertions(+), 25 deletions(-) diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index 3e85c5cacefd..2845118e6e40 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -142,14 +142,6 @@ static void nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc); static const struct blk_mq_ops nvme_rdma_mq_ops; static const struct blk_mq_ops nvme_rdma_admin_mq_ops; -/* XXX: really should move to a generic header sooner or later.. */ -static inline void put_unaligned_le24(u32 val, u8 *p) -{ - *p++ = val; - *p++ = val >> 8; - *p++ = val >> 16; -} - static inline int nvme_rdma_queue_idx(struct nvme_rdma_queue *queue) { return queue - queue->ctrl->queues; diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c index 37d262a65877..8fcede75e02a 100644 --- a/drivers/nvme/target/rdma.c +++ b/drivers/nvme/target/rdma.c @@ -143,12 +143,6 @@ static int num_pages(int len) return 1 + (((len - 1) & PAGE_MASK) >> PAGE_SHIFT); } -/* XXX: really should move to a generic header sooner or later.. 
*/ -static inline u32 get_unaligned_le24(const u8 *p) -{ - return (u32)p[0] | (u32)p[1] << 8 | (u32)p[2] << 16; -} - static inline bool nvmet_rdma_need_data_in(struct nvmet_rdma_rsp *rsp) { return nvme_is_write(rsp->req.cmd) && diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c index 7c96c4665178..950d2a85f098 100644 --- a/drivers/usb/gadget/function/f_mass_storage.c +++ b/drivers/usb/gadget/function/f_mass_storage.c @@ -216,6 +216,7 @@ #include #include #include +#include #include #include diff --git a/drivers/usb/gadget/function/storage_common.h b/drivers/usb/gadget/function/storage_common.h index e5e3a2553aaa..bdeb1e233fc9 100644 --- a/drivers/usb/gadget/function/storage_common.h +++ b/drivers/usb/gadget/function/storage_common.h @@ -172,11 +172,6 @@ enum data_direction { DATA_DIR_NONE }; -static inline u32 get_unaligned_be24(u8 *buf) -{ - return 0xffffff & (u32) get_unaligned_be32(buf - 1); -} - static inline struct fsg_lun *fsg_lun_from_dev(struct device *dev) { return container_of(dev, struct fsg_lun, dev); diff --git a/include/linux/unaligned/generic.h b/include/linux/unaligned/generic.h index 57d3114656e5..5a0cefda7a13 100644 --- a/include/linux/unaligned/generic.h +++ b/include/linux/unaligned/generic.h @@ -2,6 +2,8 @@ #ifndef _LINUX_UNALIGNED_GENERIC_H #define _LINUX_UNALIGNED_GENERIC_H +#include + /* * Cause a link-time error if we try an unaligned access other than * 1,2,4 or 8 bytes long @@ -66,4 +68,48 @@ extern void __bad_unaligned_access_size(void); } \ (void)0; }) +static inline u32 __get_unaligned_be24(const u8 *p) +{ + return p[0] << 16 | p[1] << 8 | p[2]; +} + +static inline u32 get_unaligned_be24(const void *p) +{ + return __get_unaligned_be24(p); +} + +static inline u32 __get_unaligned_le24(const u8 *p) +{ + return p[0] | p[1] << 8 | p[2] << 16; +} + +static inline u32 get_unaligned_le24(const void *p) +{ + return __get_unaligned_le24(p); +} + +static inline void __put_unaligned_be24(u32 val, u8 *p) +{ + *p++ = val >> 16; + *p++ = val >> 8; + *p++ = val; +} + +static inline void put_unaligned_be24(u32 val, void *p) +{ + __put_unaligned_be24(val, p); +} + +static inline void __put_unaligned_le24(u32 val, u8 *p) +{ + *p++ = val; + *p++ = val >> 8; + *p++ = val >> 16; +} + +static inline void put_unaligned_le24(u32 val, void *p) +{ + __put_unaligned_le24(val, p); +} + #endif /* _LINUX_UNALIGNED_GENERIC_H */ diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h index 51b6f50eabee..1b752d8ea529 100644 --- a/include/target/target_core_backend.h +++ b/include/target/target_core_backend.h @@ -116,10 +116,4 @@ static inline bool target_dev_configured(struct se_device *se_dev) return !!(se_dev->dev_flags & DF_CONFIGURED); } -/* Only use get_unaligned_be24() if reading p - 1 is allowed. 
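The byte layouts implemented by the consolidated helpers are simple: a
big-endian 24-bit value stores the most significant byte first, a
little-endian one the least significant byte first. A self-contained
userspace sketch of those layouts (not the kernel code itself; the demo_*
names are invented):

#include <stdint.h>
#include <stdio.h>

static uint32_t demo_get_be24(const uint8_t *p)
{
        return (uint32_t)p[0] << 16 | (uint32_t)p[1] << 8 | p[2];
}

static void demo_put_le24(uint32_t val, uint8_t *p)
{
        p[0] = val;             /* least significant byte first */
        p[1] = val >> 8;
        p[2] = val >> 16;
}

int main(void)
{
        const uint8_t be[3] = { 0x12, 0x34, 0x56 };
        uint8_t le[3];

        demo_put_le24(0x123456, le);
        printf("be24=0x%06x le bytes=%02x %02x %02x\n",
               demo_get_be24(be), le[0], le[1], le[2]);
        /* prints: be24=0x123456 le bytes=56 34 12 */
        return 0;
}
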
From patchwork Fri Mar 13 02:37:17 2020
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11435867
From: Bart Van Assche
To: "Martin K. Petersen", "James E. J. Bottomley"
Cc: linux-scsi@vger.kernel.org, Christoph Hellwig, Andy Shevchenko,
    Greg Kroah-Hartman, Bart Van Assche, Kai Makisara,
    "James E. J. Bottomley"
Subject: [PATCH v2 4/5] scsi/st: Use get_unaligned_be24() and sign_extend32()
Date: Thu, 12 Mar 2020 19:37:17 -0700
Message-Id: <20200313023718.21830-5-bvanassche@acm.org>
In-Reply-To: <20200313023718.21830-1-bvanassche@acm.org>
References: <20200313023718.21830-1-bvanassche@acm.org>

Use these functions instead of open-coding them.

Cc: Kai Makisara
Cc: James E.J. Bottomley
Cc: Martin K. Petersen
Signed-off-by: Bart Van Assche
---
 drivers/scsi/st.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
index 393f3019ccac..0f315dadf7e8 100644
--- a/drivers/scsi/st.c
+++ b/drivers/scsi/st.c
@@ -45,6 +45,7 @@ static const char *verstr = "20160209";

 #include
 #include
+#include
 #include
 #include

@@ -2680,8 +2681,7 @@ static void deb_space_print(struct scsi_tape *STp, int direction, char *units, u
         if (!debugging)
                 return;

-        sc = cmd[2] & 0x80 ? 0xff000000 : 0;
-        sc |= (cmd[2] << 16) | (cmd[3] << 8) | cmd[4];
+        sc = sign_extend32(get_unaligned_be24(&cmd[2]), 23);
         if (direction)
                 sc = -sc;
         st_printk(ST_DEB_MSG, STp, "Spacing tape %s over %d %s.\n",
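The new expression folds the removed open-coded sign extension into
sign_extend32(): bit 23 of the big-endian 24-bit count becomes the sign bit.
A userspace cross-check of that equivalence (not kernel code;
demo_sign_extend32() is a local stand-in for the kernel helper):

#include <assert.h>
#include <stdint.h>

static int32_t demo_sign_extend32(uint32_t value, int index)
{
        uint8_t shift = 31 - index;

        return (int32_t)(value << shift) >> shift;
}

int main(void)
{
        const uint8_t cmd[6] = { 0x11, 0x01, 0xff, 0xff, 0xfe, 0x00 };
        uint32_t be24 = (uint32_t)cmd[2] << 16 | cmd[3] << 8 | cmd[4];
        int32_t old_sc, new_sc;

        old_sc = cmd[2] & 0x80 ? (int32_t)0xff000000 : 0;       /* removed form */
        old_sc |= (cmd[2] << 16) | (cmd[3] << 8) | cmd[4];
        new_sc = demo_sign_extend32(be24, 23);                  /* new form */

        assert(old_sc == new_sc && new_sc == -2);
        return 0;
}
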
Bottomley" , Colin Ian King Subject: [PATCH v2 5/5] scsi/trace: Use get_unaligned_be24() Date: Thu, 12 Mar 2020 19:37:18 -0700 Message-Id: <20200313023718.21830-6-bvanassche@acm.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200313023718.21830-1-bvanassche@acm.org> References: <20200313023718.21830-1-bvanassche@acm.org> MIME-Version: 1.0 Sender: linux-scsi-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This makes the SCSI tracing code slightly easier to read. Cc: Christoph Hellwig Cc: James E.J. Bottomley Cc: Martin K. Petersen Reported-by: Colin Ian King Fixes: bf8162354233 ("[SCSI] add scsi trace core functions and put trace points") Signed-off-by: Bart Van Assche Reviewed-by: Christoph Hellwig --- drivers/scsi/scsi_trace.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/drivers/scsi/scsi_trace.c b/drivers/scsi/scsi_trace.c index ac35c301c792..41a950075913 100644 --- a/drivers/scsi/scsi_trace.c +++ b/drivers/scsi/scsi_trace.c @@ -18,11 +18,9 @@ static const char * scsi_trace_rw6(struct trace_seq *p, unsigned char *cdb, int len) { const char *ret = trace_seq_buffer_ptr(p); - u32 lba = 0, txlen; + u32 lba, txlen; - lba |= ((cdb[1] & 0x1F) << 16); - lba |= (cdb[2] << 8); - lba |= cdb[3]; + lba = get_unaligned_be24(&cdb[1]) & 0x1fffff; /* * From SBC-2: a TRANSFER LENGTH field set to zero specifies that 256 * logical blocks shall be read (READ(6)) or written (WRITE(6)).