From patchwork Mon Oct 28 20:06:52 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11216351
From: Bart Van Assche
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, Christoph Hellwig, "Martin K. Petersen",
    linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
    Harvey Harrison, "H. Peter Anvin", Andrew Morton
Subject: [PATCH 1/9] linux/unaligned/byteshift.h: Remove superfluous casts
Date: Mon, 28 Oct 2019 13:06:52 -0700
Message-Id: <20191028200700.213753-2-bvanassche@acm.org>
In-Reply-To: <20191028200700.213753-1-bvanassche@acm.org>
References: <20191028200700.213753-1-bvanassche@acm.org>

The C language supports implicit conversion of a void pointer into a
non-void pointer. Remove the explicit void pointer to non-void pointer
casts because these are superfluous.

Cc: Harvey Harrison
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: H.
Peter Anvin
Cc: Andrew Morton
Signed-off-by: Bart Van Assche
---
 include/linux/unaligned/be_byteshift.h | 6 +++---
 include/linux/unaligned/le_byteshift.h | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/unaligned/be_byteshift.h b/include/linux/unaligned/be_byteshift.h
index 8bdb8fa01bd4..c43ff5918c8a 100644
--- a/include/linux/unaligned/be_byteshift.h
+++ b/include/linux/unaligned/be_byteshift.h
@@ -40,17 +40,17 @@ static inline void __put_unaligned_be64(u64 val, u8 *p)
 
 static inline u16 get_unaligned_be16(const void *p)
 {
-	return __get_unaligned_be16((const u8 *)p);
+	return __get_unaligned_be16(p);
 }
 
 static inline u32 get_unaligned_be32(const void *p)
 {
-	return __get_unaligned_be32((const u8 *)p);
+	return __get_unaligned_be32(p);
 }
 
 static inline u64 get_unaligned_be64(const void *p)
 {
-	return __get_unaligned_be64((const u8 *)p);
+	return __get_unaligned_be64(p);
 }
 
 static inline void put_unaligned_be16(u16 val, void *p)
diff --git a/include/linux/unaligned/le_byteshift.h b/include/linux/unaligned/le_byteshift.h
index 1628b75866f0..2248dcb0df76 100644
--- a/include/linux/unaligned/le_byteshift.h
+++ b/include/linux/unaligned/le_byteshift.h
@@ -40,17 +40,17 @@ static inline void __put_unaligned_le64(u64 val, u8 *p)
 
 static inline u16 get_unaligned_le16(const void *p)
 {
-	return __get_unaligned_le16((const u8 *)p);
+	return __get_unaligned_le16(p);
 }
 
 static inline u32 get_unaligned_le32(const void *p)
 {
-	return __get_unaligned_le32((const u8 *)p);
+	return __get_unaligned_le32(p);
 }
 
 static inline u64 get_unaligned_le64(const void *p)
 {
-	return __get_unaligned_le64((const u8 *)p);
+	return __get_unaligned_le64(p);
 }
 
 static inline void put_unaligned_le16(u16 val, void *p)

From patchwork Mon Oct 28 20:06:53 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11216349
From: Bart Van
Assche
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, Christoph Hellwig, "Martin K. Petersen",
    linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
    Mark Salter, Aurelien Jacquiot
Subject: [PATCH 2/9] c6x: Include <linux/unaligned/generic.h> instead of duplicating it
Date: Mon, 28 Oct 2019 13:06:53 -0700
Message-Id: <20191028200700.213753-3-bvanassche@acm.org>
In-Reply-To: <20191028200700.213753-1-bvanassche@acm.org>

Use the generic __{get,put}_unaligned_[bl]e() definitions instead of
duplicating these. Since a later patch will add more definitions into
<linux/unaligned/generic.h>, this patch ensures that these definitions
have to be added only once. See also commit a7f626c1948a ("C6X: headers").
See also commit 6510d41954dc ("kernel: Move arches to use common unaligned
access").

Cc: Mark Salter
Cc: Aurelien Jacquiot
Signed-off-by: Bart Van Assche
Acked-by: Mark Salter
---
 arch/c6x/include/asm/unaligned.h | 65 +-------------------------------
 1 file changed, 1 insertion(+), 64 deletions(-)

diff --git a/arch/c6x/include/asm/unaligned.h b/arch/c6x/include/asm/unaligned.h
index b56ba7110f5a..d628cc170564 100644
--- a/arch/c6x/include/asm/unaligned.h
+++ b/arch/c6x/include/asm/unaligned.h
@@ -10,6 +10,7 @@
 #define _ASM_C6X_UNALIGNED_H
 
 #include
+#include <linux/unaligned/generic.h>
 
 /*
  * The C64x+ can do unaligned word and dword accesses in hardware
@@ -100,68 +101,4 @@ static inline void put_unaligned64(u64 val, const void *p)
 
 #endif
 
-/*
- * Cause a link-time error if we try an unaligned access other than
- * 1,2,4 or 8 bytes long
- */
-extern int __bad_unaligned_access_size(void);
-
-#define __get_unaligned_le(ptr) (typeof(*(ptr)))({			\
-	sizeof(*(ptr)) == 1 ? *(ptr) :					\
-	(sizeof(*(ptr)) == 2 ? get_unaligned_le16((ptr)) :		\
-	(sizeof(*(ptr)) == 4 ? get_unaligned_le32((ptr)) :		\
-	(sizeof(*(ptr)) == 8 ? get_unaligned_le64((ptr)) :		\
-	__bad_unaligned_access_size())));				\
-	})
-
-#define __get_unaligned_be(ptr) (__force typeof(*(ptr)))({		\
-	sizeof(*(ptr)) == 1 ? *(ptr) :					\
-	(sizeof(*(ptr)) == 2 ? get_unaligned_be16((ptr)) :		\
-	(sizeof(*(ptr)) == 4 ? get_unaligned_be32((ptr)) :		\
-	(sizeof(*(ptr)) == 8 ? get_unaligned_be64((ptr)) :		\
-	__bad_unaligned_access_size())));				\
-	})
-
-#define __put_unaligned_le(val, ptr) ({					\
-	void *__gu_p = (ptr);						\
-	switch (sizeof(*(ptr))) {					\
-	case 1:								\
-		*(u8 *)__gu_p = (__force u8)(val);			\
-		break;							\
-	case 2:								\
-		put_unaligned_le16((__force u16)(val), __gu_p);		\
-		break;							\
-	case 4:								\
-		put_unaligned_le32((__force u32)(val), __gu_p);		\
-		break;							\
-	case 8:								\
-		put_unaligned_le64((__force u64)(val), __gu_p);		\
-		break;							\
-	default:							\
-		__bad_unaligned_access_size();				\
-		break;							\
-	}								\
-	(void)0; })
-
-#define __put_unaligned_be(val, ptr) ({					\
-	void *__gu_p = (ptr);						\
-	switch (sizeof(*(ptr))) {					\
-	case 1:								\
-		*(u8 *)__gu_p = (__force u8)(val);			\
-		break;							\
-	case 2:								\
-		put_unaligned_be16((__force u16)(val), __gu_p);		\
-		break;							\
-	case 4:								\
-		put_unaligned_be32((__force u32)(val), __gu_p);		\
-		break;							\
-	case 8:								\
-		put_unaligned_be64((__force u64)(val), __gu_p);		\
-		break;							\
-	default:							\
-		__bad_unaligned_access_size();				\
-		break;							\
-	}								\
-	(void)0; })
-
 #endif /* _ASM_C6X_UNALIGNED_H */

From patchwork Mon Oct 28 20:06:54 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11216347
From: Bart Van Assche
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, Christoph Hellwig, "Martin K. Petersen",
    linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
    Keith Busch, Sagi Grimberg, Jens Axboe, Felipe Balbi,
    Harvey Harrison, "H. Peter Anvin", Andrew Morton
Subject: [PATCH 3/9] treewide: Consolidate {get,put}_unaligned_[bl]e24() definitions
Date: Mon, 28 Oct 2019 13:06:54 -0700
Message-Id: <20191028200700.213753-4-bvanassche@acm.org>
In-Reply-To: <20191028200700.213753-1-bvanassche@acm.org>

Move the get_unaligned_be24(), get_unaligned_le24() and
put_unaligned_le24() definitions from various drivers into
include/linux/unaligned/generic.h. Add put_unaligned_be24() and
get_unaligned_signed_[bl]e24() definitions. Change the functions that
depend on get_unaligned_be32() into macros because
<linux/unaligned/generic.h> may be included before get_unaligned_be32()
has been redefined as a macro.

Cc: Christoph Hellwig
Cc: Keith Busch
Cc: Sagi Grimberg
Cc: Jens Axboe
Cc: Felipe Balbi
Cc: Harvey Harrison
Cc: Martin K. Petersen
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: H. Peter Anvin
Cc: Andrew Morton
Signed-off-by: Bart Van Assche
---
 drivers/nvme/host/rdma.c                     |  8 ----
 drivers/nvme/target/rdma.c                   |  6 ---
 drivers/usb/gadget/function/f_mass_storage.c |  1 +
 drivers/usb/gadget/function/storage_common.h |  5 ---
 include/linux/unaligned/generic.h            | 44 ++++++++++++++++++++
 include/target/target_core_backend.h         |  6 ---
 6 files changed, 45 insertions(+), 25 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index dfa07bb9dfeb..66d9c8cc0c5c 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -142,14 +142,6 @@ static void nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc);
 static const struct blk_mq_ops nvme_rdma_mq_ops;
 static const struct blk_mq_ops nvme_rdma_admin_mq_ops;
 
-/* XXX: really should move to a generic header sooner or later..
 */
-static inline void put_unaligned_le24(u32 val, u8 *p)
-{
-	*p++ = val;
-	*p++ = val >> 8;
-	*p++ = val >> 16;
-}
-
 static inline int nvme_rdma_queue_idx(struct nvme_rdma_queue *queue)
 {
 	return queue - queue->ctrl->queues;
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 36d906a7f70d..dc193526d4da 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -143,12 +143,6 @@ static int num_pages(int len)
 	return 1 + (((len - 1) & PAGE_MASK) >> PAGE_SHIFT);
 }
 
-/* XXX: really should move to a generic header sooner or later.. */
-static inline u32 get_unaligned_le24(const u8 *p)
-{
-	return (u32)p[0] | (u32)p[1] << 8 | (u32)p[2] << 16;
-}
-
 static inline bool nvmet_rdma_need_data_in(struct nvmet_rdma_rsp *rsp)
 {
 	return nvme_is_write(rsp->req.cmd) &&
diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c
index 7c96c4665178..950d2a85f098 100644
--- a/drivers/usb/gadget/function/f_mass_storage.c
+++ b/drivers/usb/gadget/function/f_mass_storage.c
@@ -216,6 +216,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
diff --git a/drivers/usb/gadget/function/storage_common.h b/drivers/usb/gadget/function/storage_common.h
index e5e3a2553aaa..bdeb1e233fc9 100644
--- a/drivers/usb/gadget/function/storage_common.h
+++ b/drivers/usb/gadget/function/storage_common.h
@@ -172,11 +172,6 @@ enum data_direction {
 	DATA_DIR_NONE
 };
 
-static inline u32 get_unaligned_be24(u8 *buf)
-{
-	return 0xffffff & (u32) get_unaligned_be32(buf - 1);
-}
-
 static inline struct fsg_lun *fsg_lun_from_dev(struct device *dev)
 {
 	return container_of(dev, struct fsg_lun, dev);
diff --git a/include/linux/unaligned/generic.h b/include/linux/unaligned/generic.h
index 57d3114656e5..f7fa3f248c85 100644
--- a/include/linux/unaligned/generic.h
+++ b/include/linux/unaligned/generic.h
@@ -2,6 +2,8 @@
 #ifndef _LINUX_UNALIGNED_GENERIC_H
 #define _LINUX_UNALIGNED_GENERIC_H
 
+#include
+
 /*
  * Cause a link-time error if we try an unaligned access other than
  * 1,2,4 or 8 bytes long
@@ -66,4 +68,46 @@ extern void __bad_unaligned_access_size(void);
 	}							\
 	(void)0; })
 
+/* Only use get_unaligned_be24() if reading p - 1 is allowed. */
+#define get_unaligned_be24(p) (get_unaligned_be32((p) - 1) & 0xffffffu)
+
+#define get_unaligned_le24(p) (get_unaligned_le32((p)) & 0xffffffu)
+
+/* Sign-extend a 24-bit into a 32-bit integer. */
+static inline s32 sign_extend_24_to_32(u32 i)
+{
+	i &= 0xffffffu;
+	return i - ((i >> 23) << 24);
+}
+
+#define get_unaligned_signed_be24(p) \
+	sign_extend_24_to_32(get_unaligned_be24((p)))
+
+#define get_unaligned_signed_le24(p) \
+	sign_extend_24_to_32(get_unaligned_le24((p)))
+
+static inline void __put_unaligned_be24(u32 val, u8 *p)
+{
+	*p++ = val >> 16;
+	*p++ = val >> 8;
+	*p++ = val;
+}
+
+static inline void put_unaligned_be24(u32 val, void *p)
+{
+	__put_unaligned_be24(val, p);
+}
+
+static inline void __put_unaligned_le24(u32 val, u8 *p)
+{
+	*p++ = val;
+	*p++ = val >> 8;
+	*p++ = val >> 16;
+}
+
+static inline void put_unaligned_le24(u32 val, void *p)
+{
+	__put_unaligned_le24(val, p);
+}
+
 #endif /* _LINUX_UNALIGNED_GENERIC_H */
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index 51b6f50eabee..1b752d8ea529 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -116,10 +116,4 @@ static inline bool target_dev_configured(struct se_device *se_dev)
 	return !!(se_dev->dev_flags & DF_CONFIGURED);
 }
 
-/* Only use get_unaligned_be24() if reading p - 1 is allowed.
 */
-static inline uint32_t get_unaligned_be24(const uint8_t *const p)
-{
-	return get_unaligned_be32(p - 1) & 0xffffffU;
-}
-
 #endif /* TARGET_CORE_BACKEND_H */

From patchwork Mon Oct 28 20:06:55 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11216335
From: Bart Van Assche
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, Christoph Hellwig, "Martin K. Petersen",
    linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
    Jonathan Cameron, Hartmut Knaack, Lars-Peter Clausen,
    Peter Meerwald-Stadler
Subject: [PATCH 4/9] drivers/iio: Sign extend without triggering implementation-defined behavior
Date: Mon, 28 Oct 2019 13:06:55 -0700
Message-Id: <20191028200700.213753-5-bvanassche@acm.org>
In-Reply-To: <20191028200700.213753-1-bvanassche@acm.org>

From the C standard: "The result of E1 >> E2 is E1 right-shifted E2 bit
positions. If E1 has an unsigned type or if E1 has a signed type and a
nonnegative value, the value of the result is the integral part of the
quotient of E1 / 2^E2. If E1 has a signed type and a negative value, the
resulting value is implementation-defined." Hence use
sign_extend_24_to_32() instead of "<< 8 >> 8".
Cc: Jonathan Cameron
Cc: Hartmut Knaack
Cc: Lars-Peter Clausen
Cc: Peter Meerwald-Stadler
Signed-off-by: Bart Van Assche
---
 drivers/iio/common/st_sensors/st_sensors_core.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c
index 4a3064fb6cd9..94a9cec69cd7 100644
--- a/drivers/iio/common/st_sensors/st_sensors_core.c
+++ b/drivers/iio/common/st_sensors/st_sensors_core.c
@@ -21,11 +21,6 @@
 
 #include "st_sensors_core.h"
 
-static inline u32 st_sensors_get_unaligned_le24(const u8 *p)
-{
-	return (s32)((p[0] | p[1] << 8 | p[2] << 16) << 8) >> 8;
-}
-
 int st_sensors_write_data_with_mask(struct iio_dev *indio_dev,
 				    u8 reg_addr, u8 mask, u8 data)
 {
@@ -556,7 +551,7 @@ static int st_sensors_read_axis_data(struct iio_dev *indio_dev,
 	else if (byte_for_channel == 2)
 		*data = (s16)get_unaligned_le16(outdata);
 	else if (byte_for_channel == 3)
-		*data = (s32)st_sensors_get_unaligned_le24(outdata);
+		*data = get_unaligned_signed_le24(outdata);
 
 st_sensors_free_memory:
 	kfree(outdata);

From patchwork Mon Oct 28 20:06:56 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11216345
From: Bart Van Assche
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, Christoph Hellwig, "Martin K. Petersen",
    linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
    Kai Makisara, "James E. J.
Bottomley"
Subject: [PATCH 5/9] scsi/st: Use get_unaligned_signed_be24()
Date: Mon, 28 Oct 2019 13:06:56 -0700
Message-Id: <20191028200700.213753-6-bvanassche@acm.org>
In-Reply-To: <20191028200700.213753-1-bvanassche@acm.org>

Use this function instead of open-coding it.

Cc: Kai Makisara
Cc: James E.J. Bottomley
Cc: Martin K. Petersen
Signed-off-by: Bart Van Assche
---
 drivers/scsi/st.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
index e3266a64a477..53dc7706c935 100644
--- a/drivers/scsi/st.c
+++ b/drivers/scsi/st.c
@@ -44,6 +44,7 @@ static const char *verstr = "20160209";
 
 #include
 #include
+#include
 #include
 #include
 
@@ -2679,8 +2680,7 @@ static void deb_space_print(struct scsi_tape *STp, int direction, char *units, u
 	if (!debugging)
 		return;
 
-	sc = cmd[2] & 0x80 ? 0xff000000 : 0;
-	sc |= (cmd[2] << 16) | (cmd[3] << 8) | cmd[4];
+	sc = get_unaligned_signed_be24(&cmd[2]);
 	if (direction)
 		sc = -sc;
 	st_printk(ST_DEB_MSG, STp, "Spacing tape %s over %d %s.\n",

From patchwork Mon Oct 28 20:06:57 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11216337
From: Bart Van Assche
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, Christoph Hellwig, "Martin K. Petersen",
    linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
    "James E. J. Bottomley", Colin Ian King
Subject: [PATCH 6/9] scsi/trace: Use get_unaligned_be*()
Date: Mon, 28 Oct 2019 13:06:57 -0700
Message-Id: <20191028200700.213753-7-bvanassche@acm.org>
In-Reply-To: <20191028200700.213753-1-bvanassche@acm.org>

This patch fixes an unintended sign extension on left shifts. From Colin
King: "Shifting a u8 left will cause the value to be promoted to an
integer. If the top bit of the u8 is set then the following conversion to
a u64 will sign extend the value causing the upper 32 bits to be set in
the result." Fix this by using get_unaligned_be*() instead. Additionally,
fix the handling of a TRANSFER LENGTH of zero for READ(6) and WRITE(6).

Cc: Christoph Hellwig
Cc: James E.J. Bottomley
Cc: Martin K.
Petersen
Reported-by: Colin Ian King
Fixes: bf8162354233 ("[SCSI] add scsi trace core functions and put trace points")
Signed-off-by: Bart Van Assche
---
 drivers/scsi/scsi_trace.c | 128 ++++++++++++--------------------------
 1 file changed, 41 insertions(+), 87 deletions(-)

diff --git a/drivers/scsi/scsi_trace.c b/drivers/scsi/scsi_trace.c
index 0f17e7dac1b0..24c9c504e42c 100644
--- a/drivers/scsi/scsi_trace.c
+++ b/drivers/scsi/scsi_trace.c
@@ -9,7 +9,7 @@
 #include
 
 #define SERVICE_ACTION16(cdb) (cdb[1] & 0x1f)
-#define SERVICE_ACTION32(cdb) ((cdb[8] << 8) | cdb[9])
+#define SERVICE_ACTION32(cdb) (get_unaligned_be16(&cdb[8]))
 
 static const char *
 scsi_trace_misc(struct trace_seq *, unsigned char *, int);
@@ -18,15 +18,16 @@ static const char *
 scsi_trace_rw6(struct trace_seq *p, unsigned char *cdb, int len)
 {
 	const char *ret = trace_seq_buffer_ptr(p);
-	sector_t lba = 0, txlen = 0;
+	u32 lba, txlen;
 
-	lba |= ((cdb[1] & 0x1F) << 16);
-	lba |= (cdb[2] << 8);
-	lba |= cdb[3];
-	txlen = cdb[4];
+	lba = get_unaligned_be24(&cdb[1]) & 0x1fffff;
+	/*
+	 * From SBC-2: a TRANSFER LENGTH field set to zero specifies that 256
+	 * logical blocks shall be read (READ(6)) or written (WRITE(6)).
+	 */
+	txlen = cdb[4] ? : 256;
 
-	trace_seq_printf(p, "lba=%llu txlen=%llu",
-			 (unsigned long long)lba, (unsigned long long)txlen);
+	trace_seq_printf(p, "lba=%u txlen=%u", lba, txlen);
 	trace_seq_putc(p, 0);
 
 	return ret;
@@ -36,17 +37,12 @@ static const char *
 scsi_trace_rw10(struct trace_seq *p, unsigned char *cdb, int len)
 {
 	const char *ret = trace_seq_buffer_ptr(p);
-	sector_t lba = 0, txlen = 0;
+	u32 lba, txlen;
 
-	lba |= (cdb[2] << 24);
-	lba |= (cdb[3] << 16);
-	lba |= (cdb[4] << 8);
-	lba |= cdb[5];
-	txlen |= (cdb[7] << 8);
-	txlen |= cdb[8];
+	lba = get_unaligned_be32(&cdb[2]);
+	txlen = get_unaligned_be16(&cdb[7]);
 
-	trace_seq_printf(p, "lba=%llu txlen=%llu protect=%u",
-			 (unsigned long long)lba, (unsigned long long)txlen,
+	trace_seq_printf(p, "lba=%u txlen=%u protect=%u", lba, txlen,
 			 cdb[1] >> 5);
 
 	if (cdb[0] == WRITE_SAME)
@@ -61,19 +57,12 @@ static const char *
 scsi_trace_rw12(struct trace_seq *p, unsigned char *cdb, int len)
 {
 	const char *ret = trace_seq_buffer_ptr(p);
-	sector_t lba = 0, txlen = 0;
-
-	lba |= (cdb[2] << 24);
-	lba |= (cdb[3] << 16);
-	lba |= (cdb[4] << 8);
-	lba |= cdb[5];
-	txlen |= (cdb[6] << 24);
-	txlen |= (cdb[7] << 16);
-	txlen |= (cdb[8] << 8);
-	txlen |= cdb[9];
-
-	trace_seq_printf(p, "lba=%llu txlen=%llu protect=%u",
-			 (unsigned long long)lba, (unsigned long long)txlen,
+	u32 lba, txlen;
+
+	lba = get_unaligned_be32(&cdb[2]);
+	txlen = get_unaligned_be32(&cdb[6]);
+
+	trace_seq_printf(p, "lba=%u txlen=%u protect=%u", lba, txlen,
 			 cdb[1] >> 5);
 
 	trace_seq_putc(p, 0);
@@ -84,23 +73,13 @@ static const char *
 scsi_trace_rw16(struct trace_seq *p, unsigned char *cdb, int len)
 {
 	const char *ret = trace_seq_buffer_ptr(p);
-	sector_t lba = 0, txlen = 0;
-
-	lba |= ((u64)cdb[2] << 56);
-	lba |= ((u64)cdb[3] << 48);
-	lba |= ((u64)cdb[4] << 40);
-	lba |= ((u64)cdb[5] << 32);
-	lba |= (cdb[6] << 24);
-	lba |= (cdb[7] << 16);
-	lba |= (cdb[8] << 8);
-	lba |= cdb[9];
-	txlen |= (cdb[10] << 24);
-	txlen |= (cdb[11] << 16);
-	txlen |= (cdb[12] << 8);
-	txlen |= cdb[13];
-
-	trace_seq_printf(p, "lba=%llu txlen=%llu protect=%u",
-			 (unsigned long long)lba, (unsigned long long)txlen,
+	u64 lba;
+	u32 txlen;
+
+	lba = get_unaligned_be64(&cdb[2]);
+	txlen = get_unaligned_be32(&cdb[10]);
+
+	trace_seq_printf(p, "lba=%llu txlen=%u protect=%u", lba, txlen,
 			 cdb[1] >> 5);
 
 	if (cdb[0] == WRITE_SAME_16)
@@ -115,8 +94,8 @@ static const char *
 scsi_trace_rw32(struct trace_seq *p, unsigned char *cdb, int len)
 {
 	const char *ret = trace_seq_buffer_ptr(p), *cmd;
-	sector_t lba = 0, txlen = 0;
-	u32 ei_lbrt = 0;
+	u64 lba;
+	u32 ei_lbrt, txlen;
 
 	switch (SERVICE_ACTION32(cdb)) {
 	case READ_32:
@@ -136,26 +115,12 @@ scsi_trace_rw32(struct trace_seq *p, unsigned char *cdb, int len)
 		goto out;
 	}
 
-	lba |= ((u64)cdb[12] << 56);
-	lba |= ((u64)cdb[13] << 48);
-	lba |= ((u64)cdb[14] << 40);
-	lba |= ((u64)cdb[15] << 32);
-	lba |= (cdb[16] << 24);
-	lba |= (cdb[17] << 16);
-	lba |= (cdb[18] << 8);
-	lba |= cdb[19];
-	ei_lbrt |= (cdb[20] << 24);
-	ei_lbrt |= (cdb[21] << 16);
-	ei_lbrt |= (cdb[22] << 8);
-	ei_lbrt |= cdb[23];
-	txlen |= (cdb[28] << 24);
-	txlen |= (cdb[29] << 16);
-	txlen |= (cdb[30] << 8);
-	txlen |= cdb[31];
-
-	trace_seq_printf(p, "%s_32 lba=%llu txlen=%llu protect=%u ei_lbrt=%u",
-			 cmd, (unsigned long long)lba,
-			 (unsigned long long)txlen, cdb[10] >> 5, ei_lbrt);
+	lba = get_unaligned_be64(&cdb[12]);
+	ei_lbrt = get_unaligned_be32(&cdb[20]);
+	txlen = get_unaligned_be32(&cdb[28]);
+
+	trace_seq_printf(p, "%s_32 lba=%llu txlen=%u protect=%u ei_lbrt=%u",
+			 cmd, lba, txlen, cdb[10] >> 5, ei_lbrt);
 
 	if (SERVICE_ACTION32(cdb) == WRITE_SAME_32)
 		trace_seq_printf(p, " unmap=%u", cdb[10] >> 3 & 1);
@@ -170,7 +135,7 @@ static const char *
 scsi_trace_unmap(struct trace_seq *p, unsigned char *cdb, int len)
 {
 	const char *ret = trace_seq_buffer_ptr(p);
-	unsigned int regions = cdb[7] << 8 | cdb[8];
+	unsigned int regions = get_unaligned_be16(&cdb[7]);
 
 	trace_seq_printf(p, "regions=%u", (regions - 8) / 16);
 	trace_seq_putc(p, 0);
@@ -182,8 +147,8 @@
static const char *
 scsi_trace_service_action_in(struct trace_seq *p, unsigned char *cdb, int len)
 {
 	const char *ret = trace_seq_buffer_ptr(p), *cmd;
-	sector_t lba = 0;
-	u32 alloc_len = 0;
+	u64 lba;
+	u32 alloc_len;
 
 	switch (SERVICE_ACTION16(cdb)) {
 	case SAI_READ_CAPACITY_16:
@@ -197,21 +162,10 @@ scsi_trace_service_action_in(struct trace_seq *p, unsigned char *cdb, int len)
 		goto out;
 	}
 
-	lba |= ((u64)cdb[2] << 56);
-	lba |= ((u64)cdb[3] << 48);
-	lba |= ((u64)cdb[4] << 40);
-	lba |= ((u64)cdb[5] << 32);
-	lba |= (cdb[6] << 24);
-	lba |= (cdb[7] << 16);
-	lba |= (cdb[8] << 8);
-	lba |= cdb[9];
-	alloc_len |= (cdb[10] << 24);
-	alloc_len |= (cdb[11] << 16);
-	alloc_len |= (cdb[12] << 8);
-	alloc_len |= cdb[13];
-
-	trace_seq_printf(p, "%s lba=%llu alloc_len=%u", cmd,
-			 (unsigned long long)lba, alloc_len);
+	lba = get_unaligned_be64(&cdb[2]);
+	alloc_len = get_unaligned_be32(&cdb[10]);
+
+	trace_seq_printf(p, "%s lba=%llu alloc_len=%u", cmd, lba, alloc_len);
 
 out:
 	trace_seq_putc(p, 0);

From patchwork Mon Oct 28 20:06:58 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11216343
From: Bart Van Assche
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, Christoph Hellwig, "Martin K. Petersen",
 linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
 Bart Van Assche, Russell King
Subject: [PATCH 7/9] arm/ecard: Use get_unaligned_le{16,24}()
Date: Mon, 28 Oct 2019 13:06:58 -0700
Message-Id: <20191028200700.213753-8-bvanassche@acm.org>
In-Reply-To: <20191028200700.213753-1-bvanassche@acm.org>

Use these functions instead of open-coding them.
Cc: Russell King
Signed-off-by: Bart Van Assche
---
 arch/arm/mach-rpc/ecard.c | 18 ++++--------------
 1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/arch/arm/mach-rpc/ecard.c b/arch/arm/mach-rpc/ecard.c
index 75cfad2cb143..4db4ef085fcb 100644
--- a/arch/arm/mach-rpc/ecard.c
+++ b/arch/arm/mach-rpc/ecard.c
@@ -89,16 +89,6 @@ ecard_loader_reset(unsigned long base, loader_t loader);
 asmlinkage extern int
 ecard_loader_read(int off, unsigned long base, loader_t loader);
 
-static inline unsigned short ecard_getu16(unsigned char *v)
-{
-	return v[0] | v[1] << 8;
-}
-
-static inline signed long ecard_gets24(unsigned char *v)
-{
-	return v[0] | v[1] << 8 | v[2] << 16 | ((v[2] & 0x80) ? 0xff000000 : 0);
-}
-
 static inline ecard_t *slot_to_ecard(unsigned int slot)
 {
 	return slot < MAX_ECARDS ? slot_to_expcard[slot] : NULL;
@@ -915,13 +905,13 @@ static int __init ecard_probe(int slot, unsigned irq, card_type_t type)
 	ec->cid.cd = cid.r_cd;
 	ec->cid.is = cid.r_is;
 	ec->cid.w = cid.r_w;
-	ec->cid.manufacturer = ecard_getu16(cid.r_manu);
-	ec->cid.product = ecard_getu16(cid.r_prod);
+	ec->cid.manufacturer = get_unaligned_le16(cid.r_manu);
+	ec->cid.product = get_unaligned_le16(cid.r_prod);
 	ec->cid.country = cid.r_country;
 	ec->cid.irqmask = cid.r_irqmask;
-	ec->cid.irqoff = ecard_gets24(cid.r_irqoff);
+	ec->cid.irqoff = get_unaligned_le24_sign_extend(cid.r_irqoff);
 	ec->cid.fiqmask = cid.r_fiqmask;
-	ec->cid.fiqoff = ecard_gets24(cid.r_fiqoff);
+	ec->cid.fiqoff = get_unaligned_le24_sign_extend(cid.r_fiqoff);
 
 	ec->fiqaddr =
 	ec->irqaddr = addr;

From patchwork Mon Oct 28 20:06:59 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11216341
From: Bart Van Assche
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, Christoph Hellwig, "Martin K. Petersen",
 linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
 Bart Van Assche, Dennis Dalessandro, Mike Marciniszyn, Jason Gunthorpe,
 Doug Ledford
Subject: [PATCH 8/9] IB/qib: Sign extend without triggering
 implementation-defined behavior
Date: Mon, 28 Oct 2019 13:06:59 -0700
Message-Id: <20191028200700.213753-9-bvanassche@acm.org>
In-Reply-To: <20191028200700.213753-1-bvanassche@acm.org>

From the C standard: "The result of E1 >> E2 is E1 right-shifted E2 bit
positions. If E1 has an unsigned type or if E1 has a signed type and a
nonnegative value, the value of the result is the integral part of the
quotient of E1 / 2^E2. If E1 has a signed type and a negative value, the
resulting value is implementation-defined." Hence use
sign_extend_24_to_32() instead of "<< 8 >> 8".
Cc: Dennis Dalessandro
Cc: Mike Marciniszyn
Cc: Jason Gunthorpe
Cc: Doug Ledford
Signed-off-by: Bart Van Assche
---
 drivers/infiniband/hw/qib/qib_rc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/qib/qib_rc.c b/drivers/infiniband/hw/qib/qib_rc.c
index aaf7438258fa..2f1beaab6935 100644
--- a/drivers/infiniband/hw/qib/qib_rc.c
+++ b/drivers/infiniband/hw/qib/qib_rc.c
@@ -566,7 +566,7 @@ int qib_make_rc_req(struct rvt_qp *qp, unsigned long *flags)
 			break;
 		}
 		qp->s_sending_hpsn = bth2;
-		delta = (((int) bth2 - (int) wqe->psn) << 8) >> 8;
+		delta = sign_extend_24_to_32(bth2 - wqe->psn);
 		if (delta && delta % QIB_PSN_CREDIT == 0)
 			bth2 |= IB_BTH_REQ_ACK;
 		if (qp->s_flags & RVT_S_SEND_ONE) {

From patchwork Mon Oct 28 20:07:00 2019
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 11216339
From: Bart Van Assche
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, Christoph Hellwig, "Martin K. Petersen",
 linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
 Bart Van Assche, Timur Tabi, Nicolin Chen, Xiubo Li, Fabio Estevam,
 Liam Girdwood, Mark Brown, Jaroslav Kysela, Takashi Iwai
Subject: [PATCH 9/9] ASoC/fsl_spdif: Use put_unaligned_be24() instead of
 open-coding it
Date: Mon, 28 Oct 2019 13:07:00 -0700
Message-Id: <20191028200700.213753-10-bvanassche@acm.org>
In-Reply-To: <20191028200700.213753-1-bvanassche@acm.org>

This patch makes the code easier to read.
Cc: Timur Tabi
Cc: Nicolin Chen
Cc: Xiubo Li
Cc: Fabio Estevam
Cc: Liam Girdwood
Cc: Mark Brown
Cc: Jaroslav Kysela
Cc: Takashi Iwai
Signed-off-by: Bart Van Assche
---
 sound/soc/fsl/fsl_spdif.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/sound/soc/fsl/fsl_spdif.c b/sound/soc/fsl/fsl_spdif.c
index 7858a5499ac5..8e80ae16f566 100644
--- a/sound/soc/fsl/fsl_spdif.c
+++ b/sound/soc/fsl/fsl_spdif.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -173,9 +174,7 @@ static void spdif_irq_uqrx_full(struct fsl_spdif_priv *spdif_priv, char name)
 	}
 
 	regmap_read(regmap, reg, &val);
-	ctrl->subcode[*pos++] = val >> 16;
-	ctrl->subcode[*pos++] = val >> 8;
-	ctrl->subcode[*pos++] = val;
+	put_unaligned_be24(val, &ctrl->subcode[*pos]);
 }
 
 /* U/Q Channel sync found */