From patchwork Fri Apr 22 11:54:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sumit Saxena X-Patchwork-Id: 12823315 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 071B4C433EF for ; Fri, 22 Apr 2022 11:55:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1447203AbiDVL54 (ORCPT ); Fri, 22 Apr 2022 07:57:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56406 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1447212AbiDVL5w (ORCPT ); Fri, 22 Apr 2022 07:57:52 -0400 Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com [IPv6:2607:f8b0:4864:20::62d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8AF46BCE for ; Fri, 22 Apr 2022 04:54:58 -0700 (PDT) Received: by mail-pl1-x62d.google.com with SMTP id u15so6947905ple.4 for ; Fri, 22 Apr 2022 04:54:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version; bh=iILGar1xM8wIrg6eWpsqSeiam3fSAUuG6+RaC42LZ/o=; b=gct3qcRuqoFFtRia4MZVuKXzi/lQkw51LcxZirTIxNrHSbpJbaacgdWNzvMpgqP4oi akG4eznSZ6b9GRgaxsncQSd7xyGxfv1eGUTSnZVopDlGJu+YtfO4j53g4gvdSQQQtElg OKuqUZyWpcsTrMuBe4ld9NUm2qXbuqVOuYles= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version; bh=iILGar1xM8wIrg6eWpsqSeiam3fSAUuG6+RaC42LZ/o=; b=Q+4avY0nDGoNEP/cKG/didUGjGRQy73wtDvd2Qa3rotmml4O2au/fvAOmBc8HQQQ5H dJ39GjkEOu/zqAGOdUCH1GJU6oOTARj3zjFE+8cU3KlNB3dn5jsaYJifECkBKJ9mqFCT YP4wieQYnx/p/83DLdpiwUoWi3ZjVTRzYBJek98kVFALbyq5t0O3xt1wO5Xu+0W7yQER c4XRolcKc1gFqdGB+z4wl4TQVaTB0hqdfuR8bJ6IWdoliKjc8KGFPO2dcZ1XIjj9P6HA lk8lkP038sojSMJvImVAAjkiTLKG7ARER3d+yx03xitkLK+G7+5XYEq0HJy9JFnnGXun KAsQ== X-Gm-Message-State: AOAM533hsyJXIgyGCVYKLiQsHXPhqw/p4J2KukFMN8xD4pMUVfnu028G 7+Klc3GrIbfk+GuS0C53UEZbLukr3C3WxybG9G+KfIAsZBu3VU0bv6mc2NsC4asf6zutf2Adz2G 9xWCmWlKD6usXoVyu68J/vkl9eozTKbRvjd9sYVvJ8aR+V7FC+MZ7LLMME0Gleu2rS8L8KZctCk VxJ83GLZw= X-Google-Smtp-Source: ABdhPJz7sxNeatBx59hXcoEOsJ3dfaCf8/j0/n7dykPoMrosxm0RoLa+kpurxDOcVo6l/lgmKeK2Ng== X-Received: by 2002:a17:902:7798:b0:158:ee95:f45b with SMTP id o24-20020a170902779800b00158ee95f45bmr3980912pll.97.1650628497767; Fri, 22 Apr 2022 04:54:57 -0700 (PDT) Received: from dhcp-10-123-20-15.dhcp.broadcom.net ([192.19.234.250]) by smtp.gmail.com with ESMTPSA id g6-20020a17090a714600b001d7f3bb11d7sm2367981pjs.53.2022.04.22.04.54.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Apr 2022 04:54:56 -0700 (PDT) From: Sumit Saxena To: linux-scsi@vger.kernel.org Cc: martin.petersen@oracle.com, bvanassche@acm.org, hch@lst.de, hare@suse.de, himanshu.madhani@oracle.com, sathya.prakash@broadcom.com, kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com, sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena Subject: [PATCH v5 1/8] mpi3mr: add BSG device support Date: Fri, 22 Apr 2022 07:54:16 -0400 Message-Id: <20220422115423.279805-2-sumit.saxena@broadcom.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20220422115423.279805-1-sumit.saxena@broadcom.com> References: <20220422115423.279805-1-sumit.saxena@broadcom.com> MIME-Version: 1.0 
Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Create BSG device per controller for controller management purpose. BSG Device nodes will be named as /dev/bsg/mpi3mrctl0, /dev/bsg/mpi3mrctl1... Reviewed-by: Hannes Reinecke Reviewed-by: Himanshu Madhani Signed-off-by: Sumit Saxena Reported-by: kernel test robot --- drivers/scsi/mpi3mr/Kconfig | 1 + drivers/scsi/mpi3mr/Makefile | 1 + drivers/scsi/mpi3mr/mpi3mr.h | 20 ++++++ drivers/scsi/mpi3mr/mpi3mr_app.c | 105 +++++++++++++++++++++++++++++++ drivers/scsi/mpi3mr/mpi3mr_os.c | 2 + 5 files changed, 129 insertions(+) create mode 100644 drivers/scsi/mpi3mr/mpi3mr_app.c diff --git a/drivers/scsi/mpi3mr/Kconfig b/drivers/scsi/mpi3mr/Kconfig index f7882375e74f..8997531940c2 100644 --- a/drivers/scsi/mpi3mr/Kconfig +++ b/drivers/scsi/mpi3mr/Kconfig @@ -3,5 +3,6 @@ config SCSI_MPI3MR tristate "Broadcom MPI3 Storage Controller Device Driver" depends on PCI && SCSI + select BLK_DEV_BSGLIB help MPI3 based Storage & RAID Controllers Driver. diff --git a/drivers/scsi/mpi3mr/Makefile b/drivers/scsi/mpi3mr/Makefile index 7c2063e04c81..f5cdbe48c150 100644 --- a/drivers/scsi/mpi3mr/Makefile +++ b/drivers/scsi/mpi3mr/Makefile @@ -2,3 +2,4 @@ obj-m += mpi3mr.o mpi3mr-y += mpi3mr_os.o \ mpi3mr_fw.o \ + mpi3mr_app.o \ diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index 6672d907d75d..f0515f929110 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -148,6 +148,7 @@ extern int prot_mask; #define MPI3MR_DEFAULT_MDTS (128 * 1024) #define MPI3MR_DEFAULT_PGSZEXP (12) + /* Command retry count definitions */ #define MPI3MR_DEV_RMHS_RETRY_COUNT 3 @@ -175,6 +176,18 @@ extern int prot_mask; /* MSI Index from Reply Queue Index */ #define REPLY_QUEUE_IDX_TO_MSIX_IDX(qidx, offset) (qidx + offset) +/* + * Maximum data transfer size definitions for management + * application commands + */ +#define MPI3MR_MAX_APP_XFER_SIZE (1 * 1024 * 1024) +#define MPI3MR_MAX_APP_XFER_SEGMENTS 512 +/* + * 2048 sectors are for data buffers and additional 512 sectors for + * other buffers + */ +#define MPI3MR_MAX_APP_XFER_SECTORS (2048 + 512) + /* IOC State definitions */ enum mpi3mr_iocstate { MRIOC_STATE_READY = 1, @@ -714,6 +727,8 @@ struct scmd_priv { * @default_qcount: Total Default queues * @active_poll_qcount: Currently active poll queue count * @requested_poll_qcount: User requested poll queue count + * @bsg_dev: BSG device structure + * @bsg_queue: Request queue for BSG device */ struct mpi3mr_ioc { struct list_head list; @@ -854,6 +869,9 @@ struct mpi3mr_ioc { u16 default_qcount; u16 active_poll_qcount; u16 requested_poll_qcount; + + struct device *bsg_dev; + struct request_queue *bsg_queue; }; /** @@ -962,5 +980,7 @@ void mpi3mr_check_rh_fault_ioc(struct mpi3mr_ioc *mrioc, u32 reason_code); int mpi3mr_process_op_reply_q(struct mpi3mr_ioc *mrioc, struct op_reply_qinfo *op_reply_q); int mpi3mr_blk_mq_poll(struct Scsi_Host *shost, unsigned int queue_num); +void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc); +void mpi3mr_bsg_exit(struct mpi3mr_ioc *mrioc); #endif /*MPI3MR_H_INCLUDED*/ diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c new file mode 100644 index 000000000000..9b6698525990 --- /dev/null +++ b/drivers/scsi/mpi3mr/mpi3mr_app.c @@ -0,0 +1,105 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Driver for Broadcom MPI3 Storage Controllers + * + * Copyright (C) 2017-2022 Broadcom Inc. 
+ * (mailto: mpi3mr-linuxdrv.pdl@broadcom.com) + * + */ + +#include "mpi3mr.h" +#include + +/** + * mpi3mr_bsg_request - bsg request entry point + * @job: BSG job reference + * + * This is driver's entry point for bsg requests + * + * Return: 0 on success and proper error codes on failure + */ +int mpi3mr_bsg_request(struct bsg_job *job) +{ + return 0; +} + +/** + * mpi3mr_bsg_exit - de-registration from bsg layer + * + * This will be called during driver unload and all + * bsg resources allocated during load will be freed. + * + * Return:Nothing + */ +void mpi3mr_bsg_exit(struct mpi3mr_ioc *mrioc) +{ + if (!mrioc->bsg_queue) + return; + + bsg_remove_queue(mrioc->bsg_queue); + mrioc->bsg_queue = NULL; + + device_del(mrioc->bsg_dev); + put_device(mrioc->bsg_dev); + kfree(mrioc->bsg_dev); +} + +/** + * mpi3mr_bsg_node_release -release bsg device node + * @dev: bsg device node + * + * decrements bsg dev reference count + * + * Return:Nothing + */ +void mpi3mr_bsg_node_release(struct device *dev) +{ + put_device(dev); +} + +/** + * mpi3mr_bsg_init - registration with bsg layer + * + * This will be called during driver load and it will + * register driver with bsg layer + * + * Return:Nothing + */ +void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc) +{ + mrioc->bsg_dev = kzalloc(sizeof(struct device), GFP_KERNEL); + if (!mrioc->bsg_dev) { + ioc_err(mrioc, "bsg device mem allocation failed\n"); + return; + } + + device_initialize(mrioc->bsg_dev); + dev_set_name(mrioc->bsg_dev, "mpi3mrctl%u", mrioc->id); + + if (device_add(mrioc->bsg_dev)) { + ioc_err(mrioc, "%s: bsg device add failed\n", + dev_name(mrioc->bsg_dev)); + goto err_device_add; + } + + mrioc->bsg_dev->release = mpi3mr_bsg_node_release; + + mrioc->bsg_queue = bsg_setup_queue(mrioc->bsg_dev, dev_name(mrioc->bsg_dev), + mpi3mr_bsg_request, NULL, 0); + if (!mrioc->bsg_queue) { + ioc_err(mrioc, "%s: bsg registration failed\n", + dev_name(mrioc->bsg_dev)); + goto err_setup_queue; + } + + blk_queue_max_segments(mrioc->bsg_queue, MPI3MR_MAX_APP_XFER_SEGMENTS); + blk_queue_max_hw_sectors(mrioc->bsg_queue, MPI3MR_MAX_APP_XFER_SECTORS); + + return; + +err_setup_queue: + device_del(mrioc->bsg_dev); + put_device(mrioc->bsg_dev); +err_device_add: + kfree(mrioc->bsg_dev); +} diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c index f7cd70a15ea6..faf14a5f9123 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_os.c +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c @@ -4345,6 +4345,7 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id) } scsi_scan_host(shost); + mpi3mr_bsg_init(mrioc); return retval; addhost_failed: @@ -4389,6 +4390,7 @@ static void mpi3mr_remove(struct pci_dev *pdev) while (mrioc->reset_in_progress || mrioc->is_driver_loading) ssleep(1); + mpi3mr_bsg_exit(mrioc); mrioc->stop_drv_processing = 1; mpi3mr_cleanup_fwevt_list(mrioc); spin_lock_irqsave(&mrioc->fwevt_lock, flags); From patchwork Fri Apr 22 11:54:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sumit Saxena X-Patchwork-Id: 12823317 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 03EA8C433EF for ; Fri, 22 Apr 2022 11:55:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1447212AbiDVL6D (ORCPT ); Fri, 22 Apr 2022 07:58:03 -0400 Received: from lindbergh.monkeyblade.net 
([23.128.96.19]:56460 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1447211AbiDVL56 (ORCPT ); Fri, 22 Apr 2022 07:57:58 -0400 Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com [IPv6:2607:f8b0:4864:20::62a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 41C531C908 for ; Fri, 22 Apr 2022 04:55:04 -0700 (PDT) Received: by mail-pl1-x62a.google.com with SMTP id j8so10314014pll.11 for ; Fri, 22 Apr 2022 04:55:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version; bh=cH0tTqiDcyh9ph2+DK4SjFI/qMUjhiQP2iCA8uASj2w=; b=H0KQyU0guUwHjEwQH+qKD3Q7I5nvcz4d1oZ1nuuZSWMG3a/MvG0B4nHuaBLdD9dt4G qI7QxrMNwIYdCiuVTq1gPxL7bOmmIo6unAvSO6jRzZI7+DKob+KKTUfWiSbagYA3MkaE +0hdLEovhBIwXfJBiC3HlLrErQeWO6qLcS3TU= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version; bh=cH0tTqiDcyh9ph2+DK4SjFI/qMUjhiQP2iCA8uASj2w=; b=YI7Oo99ata2zyusq7gAM7yxx1vFbFjm90nASFq3uPEs+GFTmmuybavRc6gckbfdho0 Qe9CM/MAAAFS2kNqhVlCavbLJJCKmFRMG/H8jNJuZjLPUrz1ULz5OL3ymcWly3JbvghP 3x59RHpzGhX3HYLDIaYR4X+pB3gbj8mP2SPBc4yhcFB60f9SL4cTwA9zfvQa2L+n5y8H RsHr/+P0ndRpMqvihNigDYkU71PjLovHyScPGS+pMi26lDrCKKi1+rgq9U58Tuccjgmx MP3nhCXpBkxfy1HDartcP9f/O1pG50P+ER0yTqDmi8xOBp+WD/uaBwQhA56eMlxD8Gvx sEcg== X-Gm-Message-State: AOAM530Y2JJjl4c82MvtOYfIplK5zOICZbm2XosrsVBoaMcg5rtPIcNu ULsFt6R7SXub5DbdoC0Xkwtxo3o6e5z4+S4yFHautgItq40zOWA8+aN0VP2LWdzcsc3acI9mYaa sDsqCBt1jIY89I41YUoMWEZ/OI8Z6ecWhyh+7zIC3juUHkdN6gqFm/GBcsemOTwREz89pe2vsh6 Ns61KLrHs= X-Google-Smtp-Source: ABdhPJyZq5Ou3EJOcS3BmOpO/8e7fge6P5ZENZ3du2cln/pLBlqCtHPZxIS9U2NVhvbvGbz10ISVtQ== X-Received: by 2002:a17:90b:1a87:b0:1d5:2320:9b6b with SMTP id ng7-20020a17090b1a8700b001d523209b6bmr11834113pjb.90.1650628502950; Fri, 22 Apr 2022 04:55:02 -0700 (PDT) Received: from dhcp-10-123-20-15.dhcp.broadcom.net ([192.19.234.250]) by smtp.gmail.com with ESMTPSA id g6-20020a17090a714600b001d7f3bb11d7sm2367981pjs.53.2022.04.22.04.54.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Apr 2022 04:55:01 -0700 (PDT) From: Sumit Saxena To: linux-scsi@vger.kernel.org Cc: martin.petersen@oracle.com, bvanassche@acm.org, hch@lst.de, hare@suse.de, himanshu.madhani@oracle.com, sathya.prakash@broadcom.com, kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com, sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena Subject: [PATCH v5 2/8] mpi3mr: add support for driver commands Date: Fri, 22 Apr 2022 07:54:17 -0400 Message-Id: <20220422115423.279805-3-sumit.saxena@broadcom.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20220422115423.279805-1-sumit.saxena@broadcom.com> References: <20220422115423.279805-1-sumit.saxena@broadcom.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org There are certain BSG commands which is to be completed by driver without involving firmware. These requests are termed as driver commands. This patch adds support for the same. 
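
[Editorial illustration, not part of the patch] A management application would reach these driver commands through the per-controller BSG node added in patch 1/8 (/dev/bsg/mpi3mrctlN). The sketch below shows roughly how such a request could be issued from user space with the SG_IO ioctl and struct sg_io_v4; the device node name, the use of the data-out payload to carry back struct mpi3mr_bsg_in_adpinfo, and the timeout handling are assumptions inferred from the handlers in this patch and from include/uapi/scsi/scsi_bsg_mpi3mr.h, not a definitive tool implementation.

/* Hypothetical user-space sketch: query adapter info from controller 0 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>               /* SG_IO */
#include <linux/bsg.h>             /* struct sg_io_v4, BSG_PROTOCOL_* */
#include <scsi/scsi_bsg_mpi3mr.h>  /* uapi header added by this series */

int main(void)
{
	struct mpi3mr_bsg_packet req;
	struct mpi3mr_bsg_in_adpinfo adpinfo;
	struct sg_io_v4 io;
	int fd, ret;

	fd = open("/dev/bsg/mpi3mrctl0", O_RDWR);   /* node name per patch 1/8 */
	if (fd < 0)
		return 1;

	/* Build a driver command: adapter info for controller 0 */
	memset(&req, 0, sizeof(req));
	req.cmd_type = MPI3MR_DRV_CMD;
	req.cmd.drvrcmd.mrioc_id = 0;
	req.cmd.drvrcmd.opcode = MPI3MR_DRVBSG_OPCODE_ADPINFO;

	memset(&adpinfo, 0, sizeof(adpinfo));
	memset(&io, 0, sizeof(io));
	io.guard = 'Q';                              /* sg_io_v4 marker */
	io.protocol = BSG_PROTOCOL_SCSI;
	io.subprotocol = BSG_SUB_PROTOCOL_SCSI_TRANSPORT;
	io.request_len = sizeof(req);
	io.request = (__u64)(uintptr_t)&req;
	/* Assumption: the driver returns the data through the request payload,
	 * as mpi3mr_bsg_populate_adpinfo() copies into job->request_payload. */
	io.dout_xfer_len = sizeof(adpinfo);
	io.dout_xferp = (__u64)(uintptr_t)&adpinfo;
	io.timeout = MPI3MR_APP_DEFAULT_TIMEOUT * 1000;  /* ms */

	ret = ioctl(fd, SG_IO, &io);
	if (!ret)
		printf("adapter state %u, PCI device id 0x%x\n",
		       adpinfo.adp_state, adpinfo.pci_dev_id);

	close(fd);
	return ret;
}

The authoritative consumers of this interface are Broadcom's management utilities; the snippet above only illustrates the packet layout (cmd_type/drvrcmd/opcode) that mpi3mr_bsg_process_drv_cmds() dispatches on, and the exact data-direction and status reporting details may differ in real tools.
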
Reviewed-by: Himanshu Madhani Signed-off-by: Sumit Saxena Reported-by: kernel test robot --- drivers/scsi/mpi3mr/mpi3mr.h | 16 +- drivers/scsi/mpi3mr/mpi3mr_app.c | 390 +++++++++++++++++++++++++ drivers/scsi/mpi3mr/mpi3mr_debug.h | 12 +- drivers/scsi/mpi3mr/mpi3mr_fw.c | 21 +- drivers/scsi/mpi3mr/mpi3mr_os.c | 3 + include/uapi/scsi/scsi_bsg_mpi3mr.h | 433 ++++++++++++++++++++++++++++ 6 files changed, 863 insertions(+), 12 deletions(-) create mode 100644 include/uapi/scsi/scsi_bsg_mpi3mr.h diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index f0515f929110..877b0925dbc5 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -89,7 +89,7 @@ extern int prot_mask; /* Reserved Host Tag definitions */ #define MPI3MR_HOSTTAG_INVALID 0xFFFF #define MPI3MR_HOSTTAG_INITCMDS 1 -#define MPI3MR_HOSTTAG_IOCTLCMDS 2 +#define MPI3MR_HOSTTAG_BSG_CMDS 2 #define MPI3MR_HOSTTAG_BLK_TMS 5 #define MPI3MR_NUM_DEVRMCMD 16 @@ -202,10 +202,10 @@ enum mpi3mr_iocstate { enum mpi3mr_reset_reason { MPI3MR_RESET_FROM_BRINGUP = 1, MPI3MR_RESET_FROM_FAULT_WATCH = 2, - MPI3MR_RESET_FROM_IOCTL = 3, + MPI3MR_RESET_FROM_APP = 3, MPI3MR_RESET_FROM_EH_HOS = 4, MPI3MR_RESET_FROM_TM_TIMEOUT = 5, - MPI3MR_RESET_FROM_IOCTL_TIMEOUT = 6, + MPI3MR_RESET_FROM_APP_TIMEOUT = 6, MPI3MR_RESET_FROM_MUR_FAILURE = 7, MPI3MR_RESET_FROM_CTLR_CLEANUP = 8, MPI3MR_RESET_FROM_CIACTIV_FAULT = 9, @@ -698,6 +698,7 @@ struct scmd_priv { * @chain_bitmap_sz: Chain buffer allocator bitmap size * @chain_bitmap: Chain buffer allocator bitmap * @chain_buf_lock: Chain buffer list lock + * @bsg_cmds: Command tracker for BSG command * @host_tm_cmds: Command tracker for task management commands * @dev_rmhs_cmds: Command tracker for device removal commands * @evtack_cmds: Command tracker for event ack commands @@ -729,6 +730,10 @@ struct scmd_priv { * @requested_poll_qcount: User requested poll queue count * @bsg_dev: BSG device structure * @bsg_queue: Request queue for BSG device + * @stop_bsgs: Stop BSG request flag + * @logdata_buf: Circular buffer to store log data entries + * @logdata_buf_idx: Index of entry in buffer to store + * @logdata_entry_sz: log data entry size */ struct mpi3mr_ioc { struct list_head list; @@ -835,6 +840,7 @@ struct mpi3mr_ioc { void *chain_bitmap; spinlock_t chain_buf_lock; + struct mpi3mr_drv_cmd bsg_cmds; struct mpi3mr_drv_cmd host_tm_cmds; struct mpi3mr_drv_cmd dev_rmhs_cmds[MPI3MR_NUM_DEVRMCMD]; struct mpi3mr_drv_cmd evtack_cmds[MPI3MR_NUM_EVTACKCMD]; @@ -872,6 +878,10 @@ struct mpi3mr_ioc { struct device *bsg_dev; struct request_queue *bsg_queue; + u8 stop_bsgs; + u8 *logdata_buf; + u16 logdata_buf_idx; + u16 logdata_entry_sz; }; /** diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c index 9b6698525990..901a927cf4e0 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_app.c +++ b/drivers/scsi/mpi3mr/mpi3mr_app.c @@ -9,6 +9,379 @@ #include "mpi3mr.h" #include +#include + +/** + * mpi3mr_bsg_verify_adapter - verify adapter number is valid + * @ioc_number: Adapter number + * + * This function returns the adapter instance pointer of given + * adapter number. If adapter number does not match with the + * driver's adapter list, driver returns NULL. 
+ * + * Return: adapter instance reference + */ +static struct mpi3mr_ioc *mpi3mr_bsg_verify_adapter(int ioc_number) +{ + struct mpi3mr_ioc *mrioc = NULL; + + spin_lock(&mrioc_list_lock); + list_for_each_entry(mrioc, &mrioc_list, list) { + if (mrioc->id == ioc_number) { + spin_unlock(&mrioc_list_lock); + return mrioc; + } + } + spin_unlock(&mrioc_list_lock); + return NULL; +} + +/** + * mpi3mr_enable_logdata - Handler for log data enable + * @mrioc: Adapter instance reference + * @job: BSG job reference + * + * This function enables log data caching in the driver if not + * already enabled and return the maximum number of log data + * entries that can be cached in the driver. + * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_enable_logdata(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + struct mpi3mr_logdata_enable logdata_enable; + + if (!mrioc->logdata_buf) { + mrioc->logdata_entry_sz = + (mrioc->reply_sz - (sizeof(struct mpi3_event_notification_reply) - 4)) + + MPI3MR_BSG_LOGDATA_ENTRY_HEADER_SZ; + mrioc->logdata_buf_idx = 0; + mrioc->logdata_buf = kcalloc(MPI3MR_BSG_LOGDATA_MAX_ENTRIES, + mrioc->logdata_entry_sz, GFP_KERNEL); + + if (!mrioc->logdata_buf) + return -ENOMEM; + } + + memset(&logdata_enable, 0, sizeof(logdata_enable)); + logdata_enable.max_entries = + MPI3MR_BSG_LOGDATA_MAX_ENTRIES; + if (job->request_payload.payload_len >= sizeof(logdata_enable)) { + sg_copy_from_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + &logdata_enable, sizeof(logdata_enable)); + return 0; + } + + return -EINVAL; +} +/** + * mpi3mr_get_logdata - Handler for get log data + * @mrioc: Adapter instance reference + * @job: BSG job pointer + * This function copies the log data entries to the user buffer + * when log caching is enabled in the driver. + * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_get_logdata(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + u16 num_entries, sz, entry_sz = mrioc->logdata_entry_sz; + + if ((!mrioc->logdata_buf) || (job->request_payload.payload_len < entry_sz)) + return -EINVAL; + + num_entries = job->request_payload.payload_len / entry_sz; + if (num_entries > MPI3MR_BSG_LOGDATA_MAX_ENTRIES) + num_entries = MPI3MR_BSG_LOGDATA_MAX_ENTRIES; + sz = num_entries * entry_sz; + + if (job->request_payload.payload_len >= sz) { + sg_copy_from_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + mrioc->logdata_buf, sz); + return 0; + } + return -EINVAL; +} + +/** + * mpi3mr_get_all_tgt_info - Get all target information + * @mrioc: Adapter instance reference + * @job: BSG job reference + * + * This function copies the driver managed target devices device + * handle, persistent ID, bus ID and taret ID to the user + * provided buffer for the specific controller. This function + * also provides the number of devices managed by the driver for + * the specific controller. 
+ * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_get_all_tgt_info(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + long rval = -EINVAL; + u16 num_devices = 0, i = 0, size; + unsigned long flags; + struct mpi3mr_tgt_dev *tgtdev; + struct mpi3mr_device_map_info *devmap_info = NULL; + struct mpi3mr_all_tgt_info *alltgt_info = NULL; + uint32_t min_entrylen = 0, kern_entrylen = 0, usr_entrylen = 0; + + if (job->request_payload.payload_len < sizeof(u32)) { + dprint_bsg_err(mrioc, "%s: invalid size argument\n", + __func__); + return rval; + } + + spin_lock_irqsave(&mrioc->tgtdev_lock, flags); + list_for_each_entry(tgtdev, &mrioc->tgtdev_list, list) + num_devices++; + spin_unlock_irqrestore(&mrioc->tgtdev_lock, flags); + + if ((job->request_payload.payload_len == sizeof(u32)) || + list_empty(&mrioc->tgtdev_list)) { + sg_copy_from_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + &num_devices, sizeof(num_devices)); + return 0; + } + + kern_entrylen = (num_devices - 1) * sizeof(*devmap_info); + size = sizeof(*alltgt_info) + kern_entrylen; + alltgt_info = kzalloc(size, GFP_KERNEL); + if (!alltgt_info) + return -ENOMEM; + + devmap_info = alltgt_info->dmi; + memset((u8 *)devmap_info, 0xFF, (kern_entrylen + sizeof(*devmap_info))); + spin_lock_irqsave(&mrioc->tgtdev_lock, flags); + list_for_each_entry(tgtdev, &mrioc->tgtdev_list, list) { + if (i < num_devices) { + devmap_info[i].handle = tgtdev->dev_handle; + devmap_info[i].perst_id = tgtdev->perst_id; + if (tgtdev->host_exposed && tgtdev->starget) { + devmap_info[i].target_id = tgtdev->starget->id; + devmap_info[i].bus_id = + tgtdev->starget->channel; + } + i++; + } + } + num_devices = i; + spin_unlock_irqrestore(&mrioc->tgtdev_lock, flags); + + memcpy(&alltgt_info->num_devices, &num_devices, sizeof(num_devices)); + + usr_entrylen = (job->request_payload.payload_len - sizeof(u32)) / sizeof(*devmap_info); + usr_entrylen *= sizeof(*devmap_info); + min_entrylen = min(usr_entrylen, kern_entrylen); + if (min_entrylen && (!memcpy(&alltgt_info->dmi, devmap_info, min_entrylen))) { + dprint_bsg_err(mrioc, "%s:%d: device map info copy failed\n", + __func__, __LINE__); + rval = -EFAULT; + goto out; + } + + sg_copy_from_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + alltgt_info, job->request_payload.payload_len); + rval = 0; +out: + kfree(alltgt_info); + return rval; +} + +/** + * mpi3mr_get_change_count - Get topology change count + * @mrioc: Adapter instance reference + * @job: BSG job reference + * + * This function copies the toplogy change count provided by the + * driver in events and cached in the driver to the user + * provided buffer for the specific controller. 
+ * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_get_change_count(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + struct mpi3mr_change_count chgcnt; + + memset(&chgcnt, 0, sizeof(chgcnt)); + chgcnt.change_count = mrioc->change_count; + if (job->request_payload.payload_len >= sizeof(chgcnt)) { + sg_copy_from_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + &chgcnt, sizeof(chgcnt)); + return 0; + } + return -EINVAL; +} + +/** + * mpi3mr_bsg_adp_reset - Issue controller reset + * @mrioc: Adapter instance reference + * @job: BSG job reference + * + * This function identifies the user provided reset type and + * issues approporiate reset to the controller and wait for that + * to complete and reinitialize the controller and then returns + * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_bsg_adp_reset(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + long rval = -EINVAL; + u8 save_snapdump; + struct mpi3mr_bsg_adp_reset adpreset; + + if (job->request_payload.payload_len != + sizeof(adpreset)) { + dprint_bsg_err(mrioc, "%s: invalid size argument\n", + __func__); + goto out; + } + + sg_copy_to_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + &adpreset, sizeof(adpreset)); + + switch (adpreset.reset_type) { + case MPI3MR_BSG_ADPRESET_SOFT: + save_snapdump = 0; + break; + case MPI3MR_BSG_ADPRESET_DIAG_FAULT: + save_snapdump = 1; + break; + default: + dprint_bsg_err(mrioc, "%s: unknown reset_type(%d)\n", + __func__, adpreset.reset_type); + goto out; + } + + rval = mpi3mr_soft_reset_handler(mrioc, MPI3MR_RESET_FROM_APP, + save_snapdump); + + if (rval) + dprint_bsg_err(mrioc, + "%s: reset handler returned error(%ld) for reset type %d\n", + __func__, rval, adpreset.reset_type); +out: + return rval; +} + +/** + * mpi3mr_bsg_populate_adpinfo - Get adapter info command handler + * @mrioc: Adapter instance reference + * @job: BSG job reference + * + * This function provides adapter information for the given + * controller + * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_bsg_populate_adpinfo(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + enum mpi3mr_iocstate ioc_state; + struct mpi3mr_bsg_in_adpinfo adpinfo; + + memset(&adpinfo, 0, sizeof(adpinfo)); + adpinfo.adp_type = MPI3MR_BSG_ADPTYPE_AVGFAMILY; + adpinfo.pci_dev_id = mrioc->pdev->device; + adpinfo.pci_dev_hw_rev = mrioc->pdev->revision; + adpinfo.pci_subsys_dev_id = mrioc->pdev->subsystem_device; + adpinfo.pci_subsys_ven_id = mrioc->pdev->subsystem_vendor; + adpinfo.pci_bus = mrioc->pdev->bus->number; + adpinfo.pci_dev = PCI_SLOT(mrioc->pdev->devfn); + adpinfo.pci_func = PCI_FUNC(mrioc->pdev->devfn); + adpinfo.pci_seg_id = pci_domain_nr(mrioc->pdev->bus); + adpinfo.app_intfc_ver = MPI3MR_IOCTL_VERSION; + + ioc_state = mpi3mr_get_iocstate(mrioc); + if (ioc_state == MRIOC_STATE_UNRECOVERABLE) + adpinfo.adp_state = MPI3MR_BSG_ADPSTATE_UNRECOVERABLE; + else if ((mrioc->reset_in_progress) || (mrioc->stop_bsgs)) + adpinfo.adp_state = MPI3MR_BSG_ADPSTATE_IN_RESET; + else if (ioc_state == MRIOC_STATE_FAULT) + adpinfo.adp_state = MPI3MR_BSG_ADPSTATE_FAULT; + else + adpinfo.adp_state = MPI3MR_BSG_ADPSTATE_OPERATIONAL; + + memcpy((u8 *)&adpinfo.driver_info, (u8 *)&mrioc->driver_info, + sizeof(adpinfo.driver_info)); + + if (job->request_payload.payload_len >= sizeof(adpinfo)) { + sg_copy_from_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + &adpinfo, sizeof(adpinfo)); + 
return 0; + } + return -EINVAL; +} + +/** + * mpi3mr_bsg_process_drv_cmds - Driver Command handler + * @job: BSG job reference + * + * This function is the top level handler for driver commands, + * this does basic validation of the buffer and identifies the + * opcode and switches to correct sub handler. + * + * Return: 0 on success and proper error codes on failure + */ +static long mpi3mr_bsg_process_drv_cmds(struct bsg_job *job) +{ + long rval = -EINVAL; + struct mpi3mr_ioc *mrioc = NULL; + struct mpi3mr_bsg_packet *bsg_req = NULL; + struct mpi3mr_bsg_drv_cmd *drvrcmd = NULL; + + bsg_req = job->request; + drvrcmd = &bsg_req->cmd.drvrcmd; + + mrioc = mpi3mr_bsg_verify_adapter(drvrcmd->mrioc_id); + if (!mrioc) + return -ENODEV; + + if (drvrcmd->opcode == MPI3MR_DRVBSG_OPCODE_ADPINFO) { + rval = mpi3mr_bsg_populate_adpinfo(mrioc, job); + return rval; + } + + if (mutex_lock_interruptible(&mrioc->bsg_cmds.mutex)) + return -ERESTARTSYS; + + switch (drvrcmd->opcode) { + case MPI3MR_DRVBSG_OPCODE_ADPRESET: + rval = mpi3mr_bsg_adp_reset(mrioc, job); + break; + case MPI3MR_DRVBSG_OPCODE_ALLTGTDEVINFO: + rval = mpi3mr_get_all_tgt_info(mrioc, job); + break; + case MPI3MR_DRVBSG_OPCODE_GETCHGCNT: + rval = mpi3mr_get_change_count(mrioc, job); + break; + case MPI3MR_DRVBSG_OPCODE_LOGDATAENABLE: + rval = mpi3mr_enable_logdata(mrioc, job); + break; + case MPI3MR_DRVBSG_OPCODE_GETLOGDATA: + rval = mpi3mr_get_logdata(mrioc, job); + break; + case MPI3MR_DRVBSG_OPCODE_UNKNOWN: + default: + pr_err("%s: unsupported driver command opcode %d\n", + MPI3MR_DRIVER_NAME, drvrcmd->opcode); + break; + } + mutex_unlock(&mrioc->bsg_cmds.mutex); + return rval; +} /** * mpi3mr_bsg_request - bsg request entry point @@ -20,6 +393,23 @@ */ int mpi3mr_bsg_request(struct bsg_job *job) { + long rval = -EINVAL; + unsigned int reply_payload_rcv_len = 0; + + struct mpi3mr_bsg_packet *bsg_req = job->request; + + switch (bsg_req->cmd_type) { + case MPI3MR_DRV_CMD: + rval = mpi3mr_bsg_process_drv_cmds(job); + break; + default: + pr_err("%s: unsupported BSG command(0x%08x)\n", + MPI3MR_DRIVER_NAME, bsg_req->cmd_type); + break; + } + + bsg_job_done(job, rval, reply_payload_rcv_len); + return 0; } diff --git a/drivers/scsi/mpi3mr/mpi3mr_debug.h b/drivers/scsi/mpi3mr/mpi3mr_debug.h index c7982443f45a..65bfac72948c 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_debug.h +++ b/drivers/scsi/mpi3mr/mpi3mr_debug.h @@ -23,8 +23,8 @@ #define MPI3_DEBUG_RESET 0x00000020 #define MPI3_DEBUG_SCSI_ERROR 0x00000040 #define MPI3_DEBUG_REPLY 0x00000080 -#define MPI3_DEBUG_IOCTL_ERROR 0x00008000 -#define MPI3_DEBUG_IOCTL_INFO 0x00010000 +#define MPI3_DEBUG_BSG_ERROR 0x00008000 +#define MPI3_DEBUG_BSG_INFO 0x00010000 #define MPI3_DEBUG_SCSI_INFO 0x00020000 #define MPI3_DEBUG 0x01000000 #define MPI3_DEBUG_SG 0x02000000 @@ -110,15 +110,15 @@ } while (0) -#define dprint_ioctl_info(ioc, fmt, ...) \ +#define dprint_bsg_info(ioc, fmt, ...) \ do { \ - if (ioc->logging_level & MPI3_DEBUG_IOCTL_INFO) \ + if (ioc->logging_level & MPI3_DEBUG_BSG_INFO) \ pr_info("%s: " fmt, (ioc)->name, ##__VA_ARGS__); \ } while (0) -#define dprint_ioctl_err(ioc, fmt, ...) \ +#define dprint_bsg_err(ioc, fmt, ...) 
\ do { \ - if (ioc->logging_level & MPI3_DEBUG_IOCTL_ERROR) \ + if (ioc->logging_level & MPI3_DEBUG_BSG_ERROR) \ pr_info("%s: " fmt, (ioc)->name, ##__VA_ARGS__); \ } while (0) diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c index e25c02466043..480730721f50 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_fw.c +++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c @@ -297,6 +297,8 @@ mpi3mr_get_drv_cmd(struct mpi3mr_ioc *mrioc, u16 host_tag, switch (host_tag) { case MPI3MR_HOSTTAG_INITCMDS: return &mrioc->init_cmds; + case MPI3MR_HOSTTAG_BSG_CMDS: + return &mrioc->bsg_cmds; case MPI3MR_HOSTTAG_BLK_TMS: return &mrioc->host_tm_cmds; case MPI3MR_HOSTTAG_INVALID: @@ -865,10 +867,10 @@ static const struct { } mpi3mr_reset_reason_codes[] = { { MPI3MR_RESET_FROM_BRINGUP, "timeout in bringup" }, { MPI3MR_RESET_FROM_FAULT_WATCH, "fault" }, - { MPI3MR_RESET_FROM_IOCTL, "application invocation" }, + { MPI3MR_RESET_FROM_APP, "application invocation" }, { MPI3MR_RESET_FROM_EH_HOS, "error handling" }, { MPI3MR_RESET_FROM_TM_TIMEOUT, "TM timeout" }, - { MPI3MR_RESET_FROM_IOCTL_TIMEOUT, "IOCTL timeout" }, + { MPI3MR_RESET_FROM_APP_TIMEOUT, "application command timeout" }, { MPI3MR_RESET_FROM_MUR_FAILURE, "MUR failure" }, { MPI3MR_RESET_FROM_CTLR_CLEANUP, "timeout in controller cleanup" }, { MPI3MR_RESET_FROM_CIACTIV_FAULT, "component image activation fault" }, @@ -2813,6 +2815,10 @@ static int mpi3mr_alloc_reply_sense_bufs(struct mpi3mr_ioc *mrioc) if (!mrioc->init_cmds.reply) goto out_failed; + mrioc->bsg_cmds.reply = kzalloc(mrioc->reply_sz, GFP_KERNEL); + if (!mrioc->bsg_cmds.reply) + goto out_failed; + for (i = 0; i < MPI3MR_NUM_DEVRMCMD; i++) { mrioc->dev_rmhs_cmds[i].reply = kzalloc(mrioc->reply_sz, GFP_KERNEL); @@ -3948,6 +3954,8 @@ void mpi3mr_memset_buffers(struct mpi3mr_ioc *mrioc) if (mrioc->init_cmds.reply) { memset(mrioc->init_cmds.reply, 0, sizeof(*mrioc->init_cmds.reply)); + memset(mrioc->bsg_cmds.reply, 0, + sizeof(*mrioc->bsg_cmds.reply)); memset(mrioc->host_tm_cmds.reply, 0, sizeof(*mrioc->host_tm_cmds.reply)); for (i = 0; i < MPI3MR_NUM_DEVRMCMD; i++) @@ -4050,6 +4058,9 @@ void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc) kfree(mrioc->init_cmds.reply); mrioc->init_cmds.reply = NULL; + kfree(mrioc->bsg_cmds.reply); + mrioc->bsg_cmds.reply = NULL; + kfree(mrioc->host_tm_cmds.reply); mrioc->host_tm_cmds.reply = NULL; @@ -4235,6 +4246,8 @@ static void mpi3mr_flush_drv_cmds(struct mpi3mr_ioc *mrioc) cmdptr = &mrioc->init_cmds; mpi3mr_drv_cmd_comp_reset(mrioc, cmdptr); + cmdptr = &mrioc->bsg_cmds; + mpi3mr_drv_cmd_comp_reset(mrioc, cmdptr); cmdptr = &mrioc->host_tm_cmds; mpi3mr_drv_cmd_comp_reset(mrioc, cmdptr); @@ -4258,7 +4271,7 @@ static void mpi3mr_flush_drv_cmds(struct mpi3mr_ioc *mrioc) * This is an handler for recovering controller by issuing soft * reset are diag fault reset. This is a blocking function and * when one reset is executed if any other resets they will be - * blocked. All IOCTLs/IO will be blocked during the reset. If + * blocked. All BSG requests will be blocked during the reset. 
If * controller reset is successful then the controller will be * reinitalized, otherwise the controller will be marked as not * recoverable @@ -4305,6 +4318,7 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc, mpi3mr_reset_rc_name(reset_reason)); mrioc->reset_in_progress = 1; + mrioc->stop_bsgs = 1; mrioc->prev_reset_result = -1; if ((!snapdump) && (reset_reason != MPI3MR_RESET_FROM_FAULT_WATCH) && @@ -4377,6 +4391,7 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc, &mrioc->watchdog_work, msecs_to_jiffies(MPI3MR_WATCHDOG_INTERVAL)); spin_unlock_irqrestore(&mrioc->watchdog_lock, flags); + mrioc->stop_bsgs = 0; } else { mpi3mr_issue_reset(mrioc, MPI3_SYSIF_HOST_DIAG_RESET_ACTION_DIAG_FAULT, reset_reason); diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c index faf14a5f9123..a03e39083a42 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_os.c +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c @@ -3589,6 +3589,7 @@ static int mpi3mr_scan_finished(struct Scsi_Host *shost, mpi3mr_start_watchdog(mrioc); mrioc->is_driver_loading = 0; + mrioc->stop_bsgs = 0; return 1; } @@ -4259,6 +4260,7 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id) mutex_init(&mrioc->reset_mutex); mpi3mr_init_drv_cmd(&mrioc->init_cmds, MPI3MR_HOSTTAG_INITCMDS); mpi3mr_init_drv_cmd(&mrioc->host_tm_cmds, MPI3MR_HOSTTAG_BLK_TMS); + mpi3mr_init_drv_cmd(&mrioc->bsg_cmds, MPI3MR_HOSTTAG_BSG_CMDS); for (i = 0; i < MPI3MR_NUM_DEVRMCMD; i++) mpi3mr_init_drv_cmd(&mrioc->dev_rmhs_cmds[i], @@ -4271,6 +4273,7 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id) mrioc->logging_level = logging_level; mrioc->shost = shost; mrioc->pdev = pdev; + mrioc->stop_bsgs = 1; /* init shost parameters */ shost->max_cmd_len = MPI3MR_MAX_CDB_LENGTH; diff --git a/include/uapi/scsi/scsi_bsg_mpi3mr.h b/include/uapi/scsi/scsi_bsg_mpi3mr.h new file mode 100644 index 000000000000..2319fc48ed78 --- /dev/null +++ b/include/uapi/scsi/scsi_bsg_mpi3mr.h @@ -0,0 +1,433 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Driver for Broadcom MPI3 Storage Controllers + * + * Copyright (C) 2017-2022 Broadcom Inc. 
+ * (mailto: mpi3mr-linuxdrv.pdl@broadcom.com) + * + */ + +#ifndef SCSI_BSG_MPI3MR_H_INCLUDED +#define SCSI_BSG_MPI3MR_H_INCLUDED + +/* Definitions for BSG commands */ +#define MPI3MR_IOCTL_VERSION 0x06 + +#define MPI3MR_APP_DEFAULT_TIMEOUT (60) /*seconds*/ + +#define MPI3MR_BSG_ADPTYPE_UNKNOWN 0 +#define MPI3MR_BSG_ADPTYPE_AVGFAMILY 1 + +#define MPI3MR_BSG_ADPSTATE_UNKNOWN 0 +#define MPI3MR_BSG_ADPSTATE_OPERATIONAL 1 +#define MPI3MR_BSG_ADPSTATE_FAULT 2 +#define MPI3MR_BSG_ADPSTATE_IN_RESET 3 +#define MPI3MR_BSG_ADPSTATE_UNRECOVERABLE 4 + +#define MPI3MR_BSG_ADPRESET_UNKNOWN 0 +#define MPI3MR_BSG_ADPRESET_SOFT 1 +#define MPI3MR_BSG_ADPRESET_DIAG_FAULT 2 + +#define MPI3MR_BSG_LOGDATA_MAX_ENTRIES 400 +#define MPI3MR_BSG_LOGDATA_ENTRY_HEADER_SZ 4 + +#define MPI3MR_DRVBSG_OPCODE_UNKNOWN 0 +#define MPI3MR_DRVBSG_OPCODE_ADPINFO 1 +#define MPI3MR_DRVBSG_OPCODE_ADPRESET 2 +#define MPI3MR_DRVBSG_OPCODE_ALLTGTDEVINFO 4 +#define MPI3MR_DRVBSG_OPCODE_GETCHGCNT 5 +#define MPI3MR_DRVBSG_OPCODE_LOGDATAENABLE 6 +#define MPI3MR_DRVBSG_OPCODE_PELENABLE 7 +#define MPI3MR_DRVBSG_OPCODE_GETLOGDATA 8 +#define MPI3MR_DRVBSG_OPCODE_QUERY_HDB 9 +#define MPI3MR_DRVBSG_OPCODE_REPOST_HDB 10 +#define MPI3MR_DRVBSG_OPCODE_UPLOAD_HDB 11 +#define MPI3MR_DRVBSG_OPCODE_REFRESH_HDB_TRIGGERS 12 + + +#define MPI3MR_BSG_BUFTYPE_UNKNOWN 0 +#define MPI3MR_BSG_BUFTYPE_RAIDMGMT_CMD 1 +#define MPI3MR_BSG_BUFTYPE_RAIDMGMT_RESP 2 +#define MPI3MR_BSG_BUFTYPE_DATA_IN 3 +#define MPI3MR_BSG_BUFTYPE_DATA_OUT 4 +#define MPI3MR_BSG_BUFTYPE_MPI_REPLY 5 +#define MPI3MR_BSG_BUFTYPE_ERR_RESPONSE 6 +#define MPI3MR_BSG_BUFTYPE_MPI_REQUEST 0xFE + +#define MPI3MR_BSG_MPI_REPLY_BUFTYPE_UNKNOWN 0 +#define MPI3MR_BSG_MPI_REPLY_BUFTYPE_STATUS 1 +#define MPI3MR_BSG_MPI_REPLY_BUFTYPE_ADDRESS 2 + +#define MPI3MR_HDB_BUFTYPE_UNKNOWN 0 +#define MPI3MR_HDB_BUFTYPE_TRACE 1 +#define MPI3MR_HDB_BUFTYPE_FIRMWARE 2 +#define MPI3MR_HDB_BUFTYPE_RESERVED 3 + +#define MPI3MR_HDB_BUFSTATUS_UNKNOWN 0 +#define MPI3MR_HDB_BUFSTATUS_NOT_ALLOCATED 1 +#define MPI3MR_HDB_BUFSTATUS_POSTED_UNPAUSED 2 +#define MPI3MR_HDB_BUFSTATUS_POSTED_PAUSED 3 +#define MPI3MR_HDB_BUFSTATUS_RELEASED 4 + +#define MPI3MR_HDB_TRIGGER_TYPE_UNKNOWN 0 +#define MPI3MR_HDB_TRIGGER_TYPE_DIAGFAULT 1 +#define MPI3MR_HDB_TRIGGER_TYPE_ELEMENT 2 +#define MPI3MR_HDB_TRIGGER_TYPE_MASTER 3 + + +/* Supported BSG commands */ +enum command { + MPI3MR_DRV_CMD = 1, + MPI3MR_MPT_CMD = 2, +}; + +/** + * struct mpi3mr_bsg_in_adpinfo - Adapter information request + * data returned by the driver. 
+ * + * @adp_type: Adapter type + * @rsvd1: Reserved + * @pci_dev_id: PCI device ID of the adapter + * @pci_dev_hw_rev: PCI revision of the adapter + * @pci_subsys_dev_id: PCI subsystem device ID of the adapter + * @pci_subsys_ven_id: PCI subsystem vendor ID of the adapter + * @pci_dev: PCI device + * @pci_func: PCI function + * @pci_bus: PCI bus + * @rsvd2: Reserved + * @pci_seg_id: PCI segment ID + * @app_intfc_ver: version of the application interface definition + * @rsvd3: Reserved + * @rsvd4: Reserved + * @rsvd5: Reserved + * @driver_info: Driver Information (Version/Name) + */ +struct mpi3mr_bsg_in_adpinfo { + uint32_t adp_type; + uint32_t rsvd1; + uint32_t pci_dev_id; + uint32_t pci_dev_hw_rev; + uint32_t pci_subsys_dev_id; + uint32_t pci_subsys_ven_id; + uint32_t pci_dev:5; + uint32_t pci_func:3; + uint32_t pci_bus:8; + uint16_t rsvd2; + uint32_t pci_seg_id; + uint32_t app_intfc_ver; + uint8_t adp_state; + uint8_t rsvd3; + uint16_t rsvd4; + uint32_t rsvd5[2]; + struct mpi3_driver_info_layout driver_info; +}; + +/** + * struct mpi3mr_bsg_adp_reset - Adapter reset request + * payload data to the driver. + * + * @reset_type: Reset type + * @rsvd1: Reserved + * @rsvd2: Reserved + */ +struct mpi3mr_bsg_adp_reset { + uint8_t reset_type; + uint8_t rsvd1; + uint16_t rsvd2; +}; + +/** + * struct mpi3mr_change_count - Topology change count + * returned by the driver. + * + * @change_count: Topology change count + * @rsvd: Reserved + */ +struct mpi3mr_change_count { + uint16_t change_count; + uint16_t rsvd; +}; + +/** + * struct mpi3mr_device_map_info - Target device mapping + * information + * + * @handle: Firmware device handle + * @perst_id: Persistent ID assigned by the firmware + * @target_id: Target ID assigned by the driver + * @bus_id: Bus ID assigned by the driver + * @rsvd1: Reserved + * @rsvd2: Reserved + */ +struct mpi3mr_device_map_info { + uint16_t handle; + uint16_t perst_id; + uint32_t target_id; + uint8_t bus_id; + uint8_t rsvd1; + uint16_t rsvd2; +}; + +/** + * struct mpi3mr_all_tgt_info - Target device mapping + * information returned by the driver + * + * @num_devices: The number of devices in driver's inventory + * @rsvd1: Reserved + * @rsvd2: Reserved + * @dmi: Variable length array of mapping information of targets + */ +struct mpi3mr_all_tgt_info { + uint16_t num_devices; + uint16_t rsvd1; + uint32_t rsvd2; + struct mpi3mr_device_map_info dmi[1]; +}; + +/** + * struct mpi3mr_logdata_enable - Number of log data + * entries saved by the driver returned as payload data for + * enable logdata BSG request by the driver. + * + * @max_entries: Number of log data entries cached by the driver + * @rsvd: Reserved + */ +struct mpi3mr_logdata_enable { + uint16_t max_entries; + uint16_t rsvd; +}; + +/** + * struct mpi3mr_bsg_out_pel_enable - PEL enable request payload + * data to the driver. + * + * @pel_locale: PEL locale to the firmware + * @pel_class: PEL class to the firmware + * @rsvd: Reserved + */ +struct mpi3mr_bsg_out_pel_enable { + uint16_t pel_locale; + uint8_t pel_class; + uint8_t rsvd; +}; + +/** + * struct mpi3mr_logdata_entry - Log data entry cached by the + * driver. 
+ * + * @valid_entry: Is the entry valid + * @rsvd1: Reserved + * @rsvd2: Reserved + * @data: Variable length Log entry data + */ +struct mpi3mr_logdata_entry { + uint8_t valid_entry; + uint8_t rsvd1; + uint16_t rsvd2; + uint8_t data[1]; /* Variable length Array */ +}; + +/** + * struct mpi3mr_bsg_in_log_data - Log data entries saved by + * the driver returned as payload data for Get logdata request + * by the driver. + * + * @entry: Variable length Log data entry array + */ +struct mpi3mr_bsg_in_log_data { + struct mpi3mr_logdata_entry entry[1]; +}; + +/** + * struct mpi3mr_hdb_entry - host diag buffer entry. + * + * @buf_type: Buffer type + * @status: Buffer status + * @trigger_type: Trigger type + * @rsvd1: Reserved + * @size: Buffer size + * @rsvd2: Reserved + * @trigger_data: Trigger specific data + * @rsvd3: Reserved + * @rsvd4: Reserved + */ +struct mpi3mr_hdb_entry { + uint8_t buf_type; + uint8_t status; + uint8_t trigger_type; + uint8_t rsvd1; + uint16_t size; + uint16_t rsvd2; + uint64_t trigger_data; + uint32_t rsvd3; + uint32_t rsvd4; +}; + + +/** + * struct mpi3mr_bsg_in_hdb_status - This structure contains + * return data for the BSG request to retrieve the number of host + * diagnostic buffers supported by the driver and their current + * status and additional status specific data if any in forms of + * multiple hdb entries. + * + * @num_hdb_types: Number of host diag buffer types supported + * @rsvd1: Reserved + * @rsvd2: Reserved + * @rsvd3: Reserved + * @entry: Variable length Diag buffer status entry array + */ +struct mpi3mr_bsg_in_hdb_status { + uint8_t num_hdb_types; + uint8_t rsvd1; + uint16_t rsvd2; + uint32_t rsvd3; + struct mpi3mr_hdb_entry entry[1]; +}; + +/** + * struct mpi3mr_bsg_out_repost_hdb - Repost host diagnostic + * buffer request payload data to the driver. + * + * @buf_type: Buffer type + * @rsvd1: Reserved + * @rsvd2: Reserved + */ +struct mpi3mr_bsg_out_repost_hdb { + uint8_t buf_type; + uint8_t rsvd1; + uint16_t rsvd2; +}; + +/** + * struct mpi3mr_bsg_out_upload_hdb - Upload host diagnostic + * buffer request payload data to the driver. + * + * @buf_type: Buffer type + * @rsvd1: Reserved + * @rsvd2: Reserved + * @start_offset: Start offset of the buffer from where to copy + * @length: Length of the buffer to copy + */ +struct mpi3mr_bsg_out_upload_hdb { + uint8_t buf_type; + uint8_t rsvd1; + uint16_t rsvd2; + uint32_t start_offset; + uint32_t length; +}; + +/** + * struct mpi3mr_bsg_out_refresh_hdb_triggers - Refresh host + * diagnostic buffer triggers request payload data to the driver. + * + * @page_type: Page type + * @rsvd1: Reserved + * @rsvd2: Reserved + */ +struct mpi3mr_bsg_out_refresh_hdb_triggers { + uint8_t page_type; + uint8_t rsvd1; + uint16_t rsvd2; +}; +/** + * struct mpi3mr_bsg_drv_cmd - Generic bsg data + * structure for all driver specific requests. + * + * @mrioc_id: Controller ID + * @opcode: Driver specific opcode + * @rsvd1: Reserved + * @rsvd2: Reserved + */ +struct mpi3mr_bsg_drv_cmd { + uint8_t mrioc_id; + uint8_t opcode; + uint16_t rsvd1; + uint32_t rsvd2[4]; +}; +/** + * struct mpi3mr_bsg_in_reply_buf - MPI reply buffer returned + * for MPI Passthrough request . + * + * @mpi_reply_type: Type of MPI reply + * @rsvd1: Reserved + * @rsvd2: Reserved + * @reply_buf: Variable Length buffer based on mpirep type + */ +struct mpi3mr_bsg_in_reply_buf { + uint8_t mpi_reply_type; + uint8_t rsvd1; + uint16_t rsvd2; + uint8_t reply_buf[1]; +}; + +/** + * struct mpi3mr_buf_entry - User buffer descriptor for MPI + * Passthrough requests. 
+ * + * @buf_type: Buffer type + * @rsvd1: Reserved + * @rsvd2: Reserved + * @buf_len: Buffer length + */ +struct mpi3mr_buf_entry { + uint8_t buf_type; + uint8_t rsvd1; + uint16_t rsvd2; + uint32_t buf_len; +}; +/** + * struct mpi3mr_bsg_buf_entry_list - list of user buffer + * descriptor for MPI Passthrough requests. + * + * @num_of_entries: Number of buffer descriptors + * @rsvd1: Reserved + * @rsvd2: Reserved + * @rsvd3: Reserved + * @buf_entry: Variable length array of buffer descriptors + */ +struct mpi3mr_buf_entry_list { + uint8_t num_of_entries; + uint8_t rsvd1; + uint16_t rsvd2; + uint32_t rsvd3; + struct mpi3mr_buf_entry buf_entry[1]; +}; +/** + * struct mpi3mr_bsg_mptcmd - Generic bsg data + * structure for all MPI Passthrough requests . + * + * @mrioc_id: Controller ID + * @rsvd1: Reserved + * @timeout: MPI request timeout + * @buf_entry_list: Buffer descriptor list + */ +struct mpi3mr_bsg_mptcmd { + uint8_t mrioc_id; + uint8_t rsvd1; + uint16_t timeout; + uint32_t rsvd2; + struct mpi3mr_buf_entry_list buf_entry_list; +}; + +/** + * struct mpi3mr_bsg_packet - Generic bsg data + * structure for all supported requests . + * + * @cmd_type: represents drvrcmd or mptcmd + * @rsvd1: Reserved + * @rsvd2: Reserved + * @drvrcmd: driver request structure + * @mptcmd: mpt request structure + */ +struct mpi3mr_bsg_packet { + uint8_t cmd_type; + uint8_t rsvd1; + uint16_t rsvd2; + uint32_t rsvd3; + union { + struct mpi3mr_bsg_drv_cmd drvrcmd; + struct mpi3mr_bsg_mptcmd mptcmd; + } cmd; +}; +#endif From patchwork Fri Apr 22 11:54:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sumit Saxena X-Patchwork-Id: 12823316 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C2B78C433EF for ; Fri, 22 Apr 2022 11:55:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1447205AbiDVL6D (ORCPT ); Fri, 22 Apr 2022 07:58:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56482 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1447212AbiDVL6C (ORCPT ); Fri, 22 Apr 2022 07:58:02 -0400 Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com [IPv6:2607:f8b0:4864:20::635]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6973A3153E for ; Fri, 22 Apr 2022 04:55:08 -0700 (PDT) Received: by mail-pl1-x635.google.com with SMTP id t12so10330550pll.7 for ; Fri, 22 Apr 2022 04:55:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version; bh=yE9roNU510JiSmQ7SmD0nllrX+0H5v8cbFkJtwCvmMo=; b=JKhMsLykP0lONPcKZ3Jv1YPaMxdimCnEciCt0hDlCxTduXP4aBEb700hO79uTLYHNA kbbzlLudYMvZ5v1UsRSsw1M0ARjTOkuXFtSn4U+N/rbq7BmKCpxLyb05skT5dhIVLHRb BHXtYtHLQoEOUra3OvORgFaZCYSsMjqYQJ7WA= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version; bh=yE9roNU510JiSmQ7SmD0nllrX+0H5v8cbFkJtwCvmMo=; b=4H6V8UExOcE+Hslq4Pc7AwzVvUZsIZvAk1cp5Z/ikdgtvOlAJDx8Hh5kImsMR/XI5c o4Nq+OtR+R5ya46LUq1BKHJ+Fjnv3qDG0Myisj7ZUM3MzXg/o429FKlwQhuKLkalx+Ys MvOdXA6PfOS8CUX7uotmRrudJV/+ff5B9fwZ7ozkDMQw8bqaHwA3fePywggdU35N2Pcz zlwd3cXR1w4b+q9W85WaCKBUL9sQEmIN2Pr4WziJYZBqOYzG+bTch1y/fffrCaEJfoIB 
yBG3fWDKSYAMmIluOfJk4G0wmANH8vXUpHu6zQMq21CzhFNPCbrZUzCL0Ibf29WqNzI6 6qYQ== X-Gm-Message-State: AOAM532zcYf/DmxBKePUJKYbZRuyJWTUda7VIUDCWKCbGtC1pyE50QkD Nxf/TWNNtLHtGHoZlwr76Be+fP2OtBXqJoAKyU/HHFkUanSBUC7HSgHLN6xS1AJt9vRdnZcW/HZ zwAJ+EyvzP5YseN3XvlKMESctjO3MInLWx0hjIWIVBNEENZ7PpW/+PYrXpryrVCbhXBzKVBt5kk 5sZTEIj3s= X-Google-Smtp-Source: ABdhPJx7xESi3RwPHKFUvRISCArwLhDLMqMiEVxauGXx8mYuG1u5TY/eL9gqsrPqxSjPn9smP06b0Q== X-Received: by 2002:a17:902:9005:b0:158:e46e:688c with SMTP id a5-20020a170902900500b00158e46e688cmr4113377plp.173.1650628507531; Fri, 22 Apr 2022 04:55:07 -0700 (PDT) Received: from dhcp-10-123-20-15.dhcp.broadcom.net ([192.19.234.250]) by smtp.gmail.com with ESMTPSA id g6-20020a17090a714600b001d7f3bb11d7sm2367981pjs.53.2022.04.22.04.55.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Apr 2022 04:55:06 -0700 (PDT) From: Sumit Saxena To: linux-scsi@vger.kernel.org Cc: martin.petersen@oracle.com, bvanassche@acm.org, hch@lst.de, hare@suse.de, himanshu.madhani@oracle.com, sathya.prakash@broadcom.com, kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com, sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena Subject: [PATCH v5 3/8] mpi3mr: move data structures/definitions from MPI headers to uapi header Date: Fri, 22 Apr 2022 07:54:18 -0400 Message-Id: <20220422115423.279805-4-sumit.saxena@broadcom.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20220422115423.279805-1-sumit.saxena@broadcom.com> References: <20220422115423.279805-1-sumit.saxena@broadcom.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This patch moves the data structures/definitions which are used by user space applications from MPI headers to uapi/scsi/scsi_bsg_mpi3mr.h Reviewed-by: Himanshu Madhani Signed-off-by: Sumit Saxena --- drivers/scsi/mpi3mr/mpi/mpi30_init.h | 53 ---------- drivers/scsi/mpi3mr/mpi/mpi30_ioc.h | 28 ------ drivers/scsi/mpi3mr/mpi/mpi30_pci.h | 31 +----- drivers/scsi/mpi3mr/mpi3mr.h | 1 + include/uapi/scsi/scsi_bsg_mpi3mr.h | 139 +++++++++++++++++++++++++++ 5 files changed, 141 insertions(+), 111 deletions(-) diff --git a/drivers/scsi/mpi3mr/mpi/mpi30_init.h b/drivers/scsi/mpi3mr/mpi/mpi30_init.h index e2e8b22e9122..aac11c58cca9 100644 --- a/drivers/scsi/mpi3mr/mpi/mpi30_init.h +++ b/drivers/scsi/mpi3mr/mpi/mpi30_init.h @@ -115,57 +115,4 @@ struct mpi3_scsi_io_reply { #define MPI3_SCSI_RSP_ARI0_MASK (0xff000000) #define MPI3_SCSI_RSP_ARI0_SHIFT (24) #define MPI3_SCSI_TASKTAG_UNKNOWN (0xffff) -struct mpi3_scsi_task_mgmt_request { - __le16 host_tag; - u8 ioc_use_only02; - u8 function; - __le16 ioc_use_only04; - u8 ioc_use_only06; - u8 msg_flags; - __le16 change_count; - __le16 dev_handle; - __le16 task_host_tag; - u8 task_type; - u8 reserved0f; - __le16 task_request_queue_id; - __le16 reserved12; - __le32 reserved14; - u8 lun[8]; -}; - -#define MPI3_SCSITASKMGMT_MSGFLAGS_DO_NOT_SEND_TASK_IU (0x08) -#define MPI3_SCSITASKMGMT_TASKTYPE_ABORT_TASK (0x01) -#define MPI3_SCSITASKMGMT_TASKTYPE_ABORT_TASK_SET (0x02) -#define MPI3_SCSITASKMGMT_TASKTYPE_TARGET_RESET (0x03) -#define MPI3_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET (0x05) -#define MPI3_SCSITASKMGMT_TASKTYPE_CLEAR_TASK_SET (0x06) -#define MPI3_SCSITASKMGMT_TASKTYPE_QUERY_TASK (0x07) -#define MPI3_SCSITASKMGMT_TASKTYPE_CLEAR_ACA (0x08) -#define MPI3_SCSITASKMGMT_TASKTYPE_QUERY_TASK_SET (0x09) -#define MPI3_SCSITASKMGMT_TASKTYPE_QUERY_ASYNC_EVENT (0x0a) -#define MPI3_SCSITASKMGMT_TASKTYPE_I_T_NEXUS_RESET (0x0b) -struct mpi3_scsi_task_mgmt_reply { - __le16 
host_tag; - u8 ioc_use_only02; - u8 function; - __le16 ioc_use_only04; - u8 ioc_use_only06; - u8 msg_flags; - __le16 ioc_use_only08; - __le16 ioc_status; - __le32 ioc_log_info; - __le32 termination_count; - __le32 response_data; - __le32 reserved18; -}; - -#define MPI3_SCSITASKMGMT_RSPCODE_TM_COMPLETE (0x00) -#define MPI3_SCSITASKMGMT_RSPCODE_INVALID_FRAME (0x02) -#define MPI3_SCSITASKMGMT_RSPCODE_TM_FUNCTION_NOT_SUPPORTED (0x04) -#define MPI3_SCSITASKMGMT_RSPCODE_TM_FAILED (0x05) -#define MPI3_SCSITASKMGMT_RSPCODE_TM_SUCCEEDED (0x08) -#define MPI3_SCSITASKMGMT_RSPCODE_TM_INVALID_LUN (0x09) -#define MPI3_SCSITASKMGMT_RSPCODE_TM_OVERLAPPED_TAG (0x0a) -#define MPI3_SCSITASKMGMT_RSPCODE_IO_QUEUED_ON_IOC (0x80) -#define MPI3_SCSITASKMGMT_RSPCODE_TM_NVME_DENIED (0x81) #endif diff --git a/drivers/scsi/mpi3mr/mpi/mpi30_ioc.h b/drivers/scsi/mpi3mr/mpi/mpi30_ioc.h index 633037dc7012..7b306580d30f 100644 --- a/drivers/scsi/mpi3mr/mpi/mpi30_ioc.h +++ b/drivers/scsi/mpi3mr/mpi/mpi30_ioc.h @@ -38,17 +38,6 @@ struct mpi3_ioc_init_request { #define MPI3_WHOINIT_ROM_BIOS (0x02) #define MPI3_WHOINIT_HOST_DRIVER (0x03) #define MPI3_WHOINIT_MANUFACTURER (0x04) -struct mpi3_driver_info_layout { - __le32 information_length; - u8 driver_signature[12]; - u8 os_name[16]; - u8 os_version[12]; - u8 driver_name[20]; - u8 driver_version[32]; - u8 driver_release_date[20]; - __le32 driver_capabilities; -}; - struct mpi3_ioc_facts_request { __le16 host_tag; u8 ioc_use_only02; @@ -647,23 +636,6 @@ struct mpi3_event_data_diag_buffer_status_change { #define MPI3_EVENT_DIAG_BUFFER_STATUS_CHANGE_RC_RELEASED (0x01) #define MPI3_EVENT_DIAG_BUFFER_STATUS_CHANGE_RC_PAUSED (0x02) #define MPI3_EVENT_DIAG_BUFFER_STATUS_CHANGE_RC_RESUMED (0x03) -#define MPI3_PEL_LOCALE_FLAGS_NON_BLOCKING_BOOT_EVENT (0x0200) -#define MPI3_PEL_LOCALE_FLAGS_BLOCKING_BOOT_EVENT (0x0100) -#define MPI3_PEL_LOCALE_FLAGS_PCIE (0x0080) -#define MPI3_PEL_LOCALE_FLAGS_CONFIGURATION (0x0040) -#define MPI3_PEL_LOCALE_FLAGS_CONTROLER (0x0020) -#define MPI3_PEL_LOCALE_FLAGS_SAS (0x0010) -#define MPI3_PEL_LOCALE_FLAGS_EPACK (0x0008) -#define MPI3_PEL_LOCALE_FLAGS_ENCLOSURE (0x0004) -#define MPI3_PEL_LOCALE_FLAGS_PD (0x0002) -#define MPI3_PEL_LOCALE_FLAGS_VD (0x0001) -#define MPI3_PEL_CLASS_DEBUG (0x00) -#define MPI3_PEL_CLASS_PROGRESS (0x01) -#define MPI3_PEL_CLASS_INFORMATIONAL (0x02) -#define MPI3_PEL_CLASS_WARNING (0x03) -#define MPI3_PEL_CLASS_CRITICAL (0x04) -#define MPI3_PEL_CLASS_FATAL (0x05) -#define MPI3_PEL_CLASS_FAULT (0x06) #define MPI3_PEL_CLEARTYPE_CLEAR (0x00) #define MPI3_PEL_WAITTIME_INFINITE_WAIT (0x00) #define MPI3_PEL_ACTION_GET_SEQNUM (0x01) diff --git a/drivers/scsi/mpi3mr/mpi/mpi30_pci.h b/drivers/scsi/mpi3mr/mpi/mpi30_pci.h index 77270f577f90..901dbd788940 100644 --- a/drivers/scsi/mpi3mr/mpi/mpi30_pci.h +++ b/drivers/scsi/mpi3mr/mpi/mpi30_pci.h @@ -5,24 +5,6 @@ */ #ifndef MPI30_PCI_H #define MPI30_PCI_H 1 -#ifndef MPI3_NVME_ENCAP_CMD_MAX -#define MPI3_NVME_ENCAP_CMD_MAX (1) -#endif -struct mpi3_nvme_encapsulated_request { - __le16 host_tag; - u8 ioc_use_only02; - u8 function; - __le16 ioc_use_only04; - u8 ioc_use_only06; - u8 msg_flags; - __le16 change_count; - __le16 dev_handle; - __le16 encapsulated_command_length; - __le16 flags; - __le32 data_length; - __le32 reserved14[3]; - __le32 command[MPI3_NVME_ENCAP_CMD_MAX]; -}; #define MPI3_NVME_FLAGS_FORCE_ADMIN_ERR_REPLY_MASK (0x0002) #define MPI3_NVME_FLAGS_FORCE_ADMIN_ERR_REPLY_FAIL_ONLY (0x0000) @@ -30,16 +12,5 @@ struct mpi3_nvme_encapsulated_request { #define 
MPI3_NVME_FLAGS_SUBMISSIONQ_MASK (0x0001) #define MPI3_NVME_FLAGS_SUBMISSIONQ_IO (0x0000) #define MPI3_NVME_FLAGS_SUBMISSIONQ_ADMIN (0x0001) -struct mpi3_nvme_encapsulated_error_reply { - __le16 host_tag; - u8 ioc_use_only02; - u8 function; - __le16 ioc_use_only04; - u8 ioc_use_only06; - u8 msg_flags; - __le16 ioc_use_only08; - __le16 ioc_status; - __le32 ioc_log_info; - __le32 nvme_completion_entry[4]; -}; + #endif diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index 877b0925dbc5..fb05aab48aa7 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -38,6 +38,7 @@ #include #include #include +#include #include "mpi/mpi30_transport.h" #include "mpi/mpi30_cnfg.h" diff --git a/include/uapi/scsi/scsi_bsg_mpi3mr.h b/include/uapi/scsi/scsi_bsg_mpi3mr.h index 2319fc48ed78..a6dc050dff72 100644 --- a/include/uapi/scsi/scsi_bsg_mpi3mr.h +++ b/include/uapi/scsi/scsi_bsg_mpi3mr.h @@ -81,6 +81,28 @@ enum command { MPI3MR_MPT_CMD = 2, }; +/** + * struct mpi3_driver_info_layout - Information about driver + * + * @information_length: Length of this structure in bytes + * @driver_signature: Driver Vendor name + * @os_name: Operating System Name + * @driver_name: Driver name + * @driver_version: Driver version + * @driver_release_date: Driver release date + * @driver_capabilities: Driver capabilities + */ +struct mpi3_driver_info_layout { + __le32 information_length; + u8 driver_signature[12]; + u8 os_name[16]; + u8 os_version[12]; + u8 driver_name[20]; + u8 driver_version[32]; + u8 driver_release_date[20]; + __le32 driver_capabilities; +}; + /** * struct mpi3mr_bsg_in_adpinfo - Adapter information request * data returned by the driver. @@ -430,4 +452,121 @@ struct mpi3mr_bsg_packet { struct mpi3mr_bsg_mptcmd mptcmd; } cmd; }; + + +/* MPI3: NVMe Encasulation related definitions */ +#ifndef MPI3_NVME_ENCAP_CMD_MAX +#define MPI3_NVME_ENCAP_CMD_MAX (1) +#endif + +struct mpi3_nvme_encapsulated_request { + __le16 host_tag; + u8 ioc_use_only02; + u8 function; + __le16 ioc_use_only04; + u8 ioc_use_only06; + u8 msg_flags; + __le16 change_count; + __le16 dev_handle; + __le16 encapsulated_command_length; + __le16 flags; + __le32 data_length; + __le32 reserved14[3]; + __le32 command[MPI3_NVME_ENCAP_CMD_MAX]; +}; + +struct mpi3_nvme_encapsulated_error_reply { + __le16 host_tag; + u8 ioc_use_only02; + u8 function; + __le16 ioc_use_only04; + u8 ioc_use_only06; + u8 msg_flags; + __le16 ioc_use_only08; + __le16 ioc_status; + __le32 ioc_log_info; + __le32 nvme_completion_entry[4]; +}; + +/* MPI3: task management related definitions */ +struct mpi3_scsi_task_mgmt_request { + __le16 host_tag; + u8 ioc_use_only02; + u8 function; + __le16 ioc_use_only04; + u8 ioc_use_only06; + u8 msg_flags; + __le16 change_count; + __le16 dev_handle; + __le16 task_host_tag; + u8 task_type; + u8 reserved0f; + __le16 task_request_queue_id; + __le16 reserved12; + __le32 reserved14; + u8 lun[8]; +}; + +#define MPI3_SCSITASKMGMT_MSGFLAGS_DO_NOT_SEND_TASK_IU (0x08) +#define MPI3_SCSITASKMGMT_TASKTYPE_ABORT_TASK (0x01) +#define MPI3_SCSITASKMGMT_TASKTYPE_ABORT_TASK_SET (0x02) +#define MPI3_SCSITASKMGMT_TASKTYPE_TARGET_RESET (0x03) +#define MPI3_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET (0x05) +#define MPI3_SCSITASKMGMT_TASKTYPE_CLEAR_TASK_SET (0x06) +#define MPI3_SCSITASKMGMT_TASKTYPE_QUERY_TASK (0x07) +#define MPI3_SCSITASKMGMT_TASKTYPE_CLEAR_ACA (0x08) +#define MPI3_SCSITASKMGMT_TASKTYPE_QUERY_TASK_SET (0x09) +#define MPI3_SCSITASKMGMT_TASKTYPE_QUERY_ASYNC_EVENT (0x0a) +#define 
MPI3_SCSITASKMGMT_TASKTYPE_I_T_NEXUS_RESET (0x0b) +struct mpi3_scsi_task_mgmt_reply { + __le16 host_tag; + u8 ioc_use_only02; + u8 function; + __le16 ioc_use_only04; + u8 ioc_use_only06; + u8 msg_flags; + __le16 ioc_use_only08; + __le16 ioc_status; + __le32 ioc_log_info; + __le32 termination_count; + __le32 response_data; + __le32 reserved18; +}; + +#define MPI3_SCSITASKMGMT_RSPCODE_TM_COMPLETE (0x00) +#define MPI3_SCSITASKMGMT_RSPCODE_INVALID_FRAME (0x02) +#define MPI3_SCSITASKMGMT_RSPCODE_TM_FUNCTION_NOT_SUPPORTED (0x04) +#define MPI3_SCSITASKMGMT_RSPCODE_TM_FAILED (0x05) +#define MPI3_SCSITASKMGMT_RSPCODE_TM_SUCCEEDED (0x08) +#define MPI3_SCSITASKMGMT_RSPCODE_TM_INVALID_LUN (0x09) +#define MPI3_SCSITASKMGMT_RSPCODE_TM_OVERLAPPED_TAG (0x0a) +#define MPI3_SCSITASKMGMT_RSPCODE_IO_QUEUED_ON_IOC (0x80) +#define MPI3_SCSITASKMGMT_RSPCODE_TM_NVME_DENIED (0x81) + +/* MPI3: PEL related definitions */ +#define MPI3_PEL_LOCALE_FLAGS_NON_BLOCKING_BOOT_EVENT (0x0200) +#define MPI3_PEL_LOCALE_FLAGS_BLOCKING_BOOT_EVENT (0x0100) +#define MPI3_PEL_LOCALE_FLAGS_PCIE (0x0080) +#define MPI3_PEL_LOCALE_FLAGS_CONFIGURATION (0x0040) +#define MPI3_PEL_LOCALE_FLAGS_CONTROLER (0x0020) +#define MPI3_PEL_LOCALE_FLAGS_SAS (0x0010) +#define MPI3_PEL_LOCALE_FLAGS_EPACK (0x0008) +#define MPI3_PEL_LOCALE_FLAGS_ENCLOSURE (0x0004) +#define MPI3_PEL_LOCALE_FLAGS_PD (0x0002) +#define MPI3_PEL_LOCALE_FLAGS_VD (0x0001) +#define MPI3_PEL_CLASS_DEBUG (0x00) +#define MPI3_PEL_CLASS_PROGRESS (0x01) +#define MPI3_PEL_CLASS_INFORMATIONAL (0x02) +#define MPI3_PEL_CLASS_WARNING (0x03) +#define MPI3_PEL_CLASS_CRITICAL (0x04) +#define MPI3_PEL_CLASS_FATAL (0x05) +#define MPI3_PEL_CLASS_FAULT (0x06) + +/* MPI3: Function definitions */ +#define MPI3_BSG_FUNCTION_MGMT_PASSTHROUGH (0x0a) +#define MPI3_BSG_FUNCTION_SCSI_IO (0x20) +#define MPI3_BSG_FUNCTION_SCSI_TASK_MGMT (0x21) +#define MPI3_BSG_FUNCTION_SMP_PASSTHROUGH (0x22) +#define MPI3_BSG_FUNCTION_NVME_ENCAPSULATED (0x24) + #endif From patchwork Fri Apr 22 11:54:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sumit Saxena X-Patchwork-Id: 12823318 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 10E9CC433EF for ; Fri, 22 Apr 2022 11:55:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1447215AbiDVL6L (ORCPT ); Fri, 22 Apr 2022 07:58:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56502 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1447211AbiDVL6G (ORCPT ); Fri, 22 Apr 2022 07:58:06 -0400 Received: from mail-pg1-x52f.google.com (mail-pg1-x52f.google.com [IPv6:2607:f8b0:4864:20::52f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E27C53ED01 for ; Fri, 22 Apr 2022 04:55:12 -0700 (PDT) Received: by mail-pg1-x52f.google.com with SMTP id r83so7170847pgr.2 for ; Fri, 22 Apr 2022 04:55:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version; bh=AL4u5i54UVSWxg8nlgjfGOrCoETEaRY4Qg8bwAI55uA=; b=TTr4J52bcAo9kVARAnIqbulJpKgib8P+900qKn+3Z7ih0/q88Dpef4X3VdnFazooqH A1a5hAZ0MIti8dlLCGiRw8C4nUB+s2SDJAaXSuOGdnvdUFgVZlfXJ2Gc4C5qyYd+4eVJ O7VgBrzRMTBFIa7vGvFF0P9UE/jZ9jaWHpyfw= X-Google-DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version; bh=AL4u5i54UVSWxg8nlgjfGOrCoETEaRY4Qg8bwAI55uA=; b=5Y/T/Ymydx8Bbc5gkRU+2Fmxa6xj51Yxs9sq2tH8quDkfQuqAzsr45Ov3qM389Vi+F EbHrF06xFQ2ps+ZbDrgL1SPoWeQ7ycuVDt3I1eqa9IWARiCHLFKHG7HjFg7A+xpyj/iQ U2+aXaNE23lHlvjqTIWMIN38QK3xPlLvIpE/YoRhCV4/hg84FWhAykGtWATSQqIwy5ic Mxq2nXdDxUOfCNvNOSm6rF6pPK5pjE1VV44X1yyS8XxSolyyYcD9+LwRBchoykb0HImr WgiS5i6wfTw90GOrSZL7cDu4NQa1V4qQXXdMIiKE1aYr0IVBxbfjyUwrA/qoDTJJtYrT X1Vw== X-Gm-Message-State: AOAM532qIwEQ7YFifCOhnrVK/vE4rZsA0pqJz2sI18Vi7XWMHH88TywP erblkjsR5LsalKZT/tiNPeW2QxCGEepX1LS9prmwzUapSE9jdUM9zCaOJuvUmobAQLKT+PeZb1H FUCWgtD7rERkcP3zywVifeZDTwCM2IjS9sQohZOLKsn2gaUJWhvJ1hXa5SHU1/5adiRiZMngqPo Vw4KywkR0= X-Google-Smtp-Source: ABdhPJwXFxej4cfvRduTmokJlzHZkGSswrMuZj94GTX/d2HUHMx3aHYzL3oThC4Z0fXatX8tx7rkhw== X-Received: by 2002:a05:6a00:992:b0:50a:cf4c:d416 with SMTP id u18-20020a056a00099200b0050acf4cd416mr4537090pfg.27.1650628511927; Fri, 22 Apr 2022 04:55:11 -0700 (PDT) Received: from dhcp-10-123-20-15.dhcp.broadcom.net ([192.19.234.250]) by smtp.gmail.com with ESMTPSA id g6-20020a17090a714600b001d7f3bb11d7sm2367981pjs.53.2022.04.22.04.55.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Apr 2022 04:55:10 -0700 (PDT) From: Sumit Saxena To: linux-scsi@vger.kernel.org Cc: martin.petersen@oracle.com, bvanassche@acm.org, hch@lst.de, hare@suse.de, himanshu.madhani@oracle.com, sathya.prakash@broadcom.com, kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com, sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena Subject: [PATCH v5 4/8] mpi3mr: add support for MPT commands Date: Fri, 22 Apr 2022 07:54:19 -0400 Message-Id: <20220422115423.279805-5-sumit.saxena@broadcom.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20220422115423.279805-1-sumit.saxena@broadcom.com> References: <20220422115423.279805-1-sumit.saxena@broadcom.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org There are certain management commands requiring firmware intervention. These commands are termed as MPT commands. This patch adds support for the same. Reviewed-by: Himanshu Madhani Signed-off-by: Sumit Saxena --- drivers/scsi/mpi3mr/mpi3mr.h | 29 ++ drivers/scsi/mpi3mr/mpi3mr_app.c | 519 ++++++++++++++++++++++++++++- drivers/scsi/mpi3mr/mpi3mr_debug.h | 25 ++ drivers/scsi/mpi3mr/mpi3mr_os.c | 4 +- 4 files changed, 574 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index fb05aab48aa7..37be9e28e0b2 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -189,6 +189,27 @@ extern int prot_mask; */ #define MPI3MR_MAX_APP_XFER_SECTORS (2048 + 512) +/** + * struct mpi3mr_buf_map - local structure to + * track kernel and user buffers associated with an BSG + * structure. + * + * @bsg_buf: BSG buffer virtual address + * @bsg_buf_len: BSG buffer length + * @kern_buf: Kernel buffer virtual address + * @kern_buf_len: Kernel buffer length + * @kern_buf_dma: Kernel buffer DMA address + * @data_dir: Data direction. 
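+ * + * One entry is maintained per application-supplied buffer; @kern_buf holds the DMA-able kernel copy that is actually handed to the firmware, while @bsg_buf points into the copied BSG payload.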
+ */ +struct mpi3mr_buf_map { + void *bsg_buf; + u32 bsg_buf_len; + void *kern_buf; + u32 kern_buf_len; + dma_addr_t kern_buf_dma; + u8 data_dir; +}; + /* IOC State definitions */ enum mpi3mr_iocstate { MRIOC_STATE_READY = 1, @@ -557,6 +578,7 @@ struct mpi3mr_sdev_priv_data { * @ioc_status: IOC status from the firmware * @ioc_loginfo:IOC log info from the firmware * @is_waiting: Is the command issued in block mode + * @is_sense: Is Sense data present * @retry_count: Retry count for retriable commands * @host_tag: Host tag used by the command * @callback: Callback for non blocking commands @@ -572,6 +594,7 @@ struct mpi3mr_drv_cmd { u16 ioc_status; u32 ioc_loginfo; u8 is_waiting; + u8 is_sense; u8 retry_count; u16 host_tag; @@ -993,5 +1016,11 @@ int mpi3mr_process_op_reply_q(struct mpi3mr_ioc *mrioc, int mpi3mr_blk_mq_poll(struct Scsi_Host *shost, unsigned int queue_num); void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc); void mpi3mr_bsg_exit(struct mpi3mr_ioc *mrioc); +int mpi3mr_issue_tm(struct mpi3mr_ioc *mrioc, u8 tm_type, + u16 handle, uint lun, u16 htag, ulong timeout, + struct mpi3mr_drv_cmd *drv_cmd, + u8 *resp_code, struct scsi_cmnd *scmd); +struct mpi3mr_tgt_dev *mpi3mr_get_tgtdev_by_handle( + struct mpi3mr_ioc *mrioc, u16 handle); #endif /*MPI3MR_H_INCLUDED*/ diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c index 901a927cf4e0..b665ba4c39ef 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_app.c +++ b/drivers/scsi/mpi3mr/mpi3mr_app.c @@ -194,7 +194,6 @@ static long mpi3mr_get_all_tgt_info(struct mpi3mr_ioc *mrioc, kfree(alltgt_info); return rval; } - /** * mpi3mr_get_change_count - Get topology change count * @mrioc: Adapter instance reference @@ -383,6 +382,521 @@ static long mpi3mr_bsg_process_drv_cmds(struct bsg_job *job) return rval; } +/** + * mpi3mr_bsg_build_sgl - SGL construction for MPI commands + * @mpi_req: MPI request + * @sgl_offset: offset to start sgl in the MPI request + * @drv_bufs: DMA address of the buffers to be placed in sgl + * @bufcnt: Number of DMA buffers + * @is_rmc: Does the buffer list has management command buffer + * @is_rmr: Does the buffer list has management response buffer + * @num_datasges: Number of data buffers in the list + * + * This function places the DMA address of the given buffers in + * proper format as SGEs in the given MPI request. 
+ * + * Return: Nothing + */ +static void mpi3mr_bsg_build_sgl(u8 *mpi_req, uint32_t sgl_offset, + struct mpi3mr_buf_map *drv_bufs, u8 bufcnt, u8 is_rmc, + u8 is_rmr, u8 num_datasges) +{ + u8 *sgl = (mpi_req + sgl_offset), count = 0; + struct mpi3_mgmt_passthrough_request *rmgmt_req = + (struct mpi3_mgmt_passthrough_request *)mpi_req; + struct mpi3mr_buf_map *drv_buf_iter = drv_bufs; + u8 sgl_flags, sgl_flags_last; + + sgl_flags = MPI3_SGE_FLAGS_ELEMENT_TYPE_SIMPLE | + MPI3_SGE_FLAGS_DLAS_SYSTEM | MPI3_SGE_FLAGS_END_OF_BUFFER; + sgl_flags_last = sgl_flags | MPI3_SGE_FLAGS_END_OF_LIST; + + if (is_rmc) { + mpi3mr_add_sg_single(&rmgmt_req->command_sgl, + sgl_flags_last, drv_buf_iter->kern_buf_len, + drv_buf_iter->kern_buf_dma); + sgl = (u8 *)drv_buf_iter->kern_buf + drv_buf_iter->bsg_buf_len; + drv_buf_iter++; + count++; + if (is_rmr) { + mpi3mr_add_sg_single(&rmgmt_req->response_sgl, + sgl_flags_last, drv_buf_iter->kern_buf_len, + drv_buf_iter->kern_buf_dma); + drv_buf_iter++; + count++; + } else + mpi3mr_build_zero_len_sge( + &rmgmt_req->response_sgl); + } + if (!num_datasges) { + mpi3mr_build_zero_len_sge(sgl); + return; + } + for (; count < bufcnt; count++, drv_buf_iter++) { + if (drv_buf_iter->data_dir == DMA_NONE) + continue; + if (num_datasges == 1 || !is_rmc) + mpi3mr_add_sg_single(sgl, sgl_flags_last, + drv_buf_iter->kern_buf_len, drv_buf_iter->kern_buf_dma); + else + mpi3mr_add_sg_single(sgl, sgl_flags, + drv_buf_iter->kern_buf_len, drv_buf_iter->kern_buf_dma); + sgl += sizeof(struct mpi3_sge_common); + num_datasges--; + } +} + +/** + * mpi3mr_bsg_process_mpt_cmds - MPI Pass through BSG handler + * @job: BSG job reference + * + * This function is the top level handler for MPI Pass through + * command, this does basic validation of the input data buffers, + * identifies the given buffer types and MPI command, allocates + * DMAable memory for user given buffers, construstcs SGL + * properly and passes the command to the firmware. + * + * Once the MPI command is completed the driver copies the data + * if any and reply, sense information to user provided buffers. + * If the command is timed out then issues controller reset + * prior to returning. 
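+ * The application buffers are described by struct mpi3mr_buf_entry entries carried in the BSG request, while the MPI reply and sense/error response data are returned through the BSG reply payload.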
+ * + * Return: 0 on success and proper error codes on failure + */ + +static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job, unsigned int *reply_payload_rcv_len) +{ + long rval = -EINVAL; + + struct mpi3mr_ioc *mrioc = NULL; + u8 *mpi_req = NULL, *sense_buff_k = NULL; + u8 mpi_msg_size = 0; + struct mpi3mr_bsg_packet *bsg_req = NULL; + struct mpi3mr_bsg_mptcmd *karg; + struct mpi3mr_buf_entry *buf_entries = NULL; + struct mpi3mr_buf_map *drv_bufs = NULL, *drv_buf_iter = NULL; + u8 count, bufcnt = 0, is_rmcb = 0, is_rmrb = 0, din_cnt = 0, dout_cnt = 0; + u8 invalid_be = 0, erb_offset = 0xFF, mpirep_offset = 0xFF, sg_entries = 0; + u8 block_io = 0, resp_code = 0; + struct mpi3_request_header *mpi_header = NULL; + struct mpi3_status_reply_descriptor *status_desc; + struct mpi3_scsi_task_mgmt_request *tm_req; + u32 erbsz = MPI3MR_SENSE_BUF_SZ, tmplen; + u16 dev_handle; + struct mpi3mr_tgt_dev *tgtdev; + struct mpi3mr_stgt_priv_data *stgt_priv = NULL; + struct mpi3mr_bsg_in_reply_buf *bsg_reply_buf = NULL; + u32 din_size = 0, dout_size = 0; + u8 *din_buf = NULL, *dout_buf = NULL; + u8 *sgl_iter = NULL, *sgl_din_iter = NULL, *sgl_dout_iter = NULL; + + bsg_req = job->request; + karg = (struct mpi3mr_bsg_mptcmd *)&bsg_req->cmd.mptcmd; + + mrioc = mpi3mr_bsg_verify_adapter(karg->mrioc_id); + if (!mrioc) + return -ENODEV; + + if (karg->timeout < MPI3MR_APP_DEFAULT_TIMEOUT) + karg->timeout = MPI3MR_APP_DEFAULT_TIMEOUT; + + mpi_req = kzalloc(MPI3MR_ADMIN_REQ_FRAME_SZ, GFP_KERNEL); + if (!mpi_req) + return -ENOMEM; + mpi_header = (struct mpi3_request_header *)mpi_req; + + bufcnt = karg->buf_entry_list.num_of_entries; + drv_bufs = kzalloc((sizeof(*drv_bufs) * bufcnt), GFP_KERNEL); + if (!drv_bufs) { + rval = -ENOMEM; + goto out; + } + + dout_buf = kzalloc(job->request_payload.payload_len, + GFP_KERNEL); + if (!dout_buf) { + rval = -ENOMEM; + goto out; + } + + din_buf = kzalloc(job->reply_payload.payload_len, + GFP_KERNEL); + if (!din_buf) { + rval = -ENOMEM; + goto out; + } + + sg_copy_to_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + dout_buf, job->request_payload.payload_len); + + buf_entries = karg->buf_entry_list.buf_entry; + sgl_din_iter = din_buf; + sgl_dout_iter = dout_buf; + drv_buf_iter = drv_bufs; + + for (count = 0; count < bufcnt; count++, buf_entries++, drv_buf_iter++) { + + if (sgl_dout_iter > (dout_buf + job->request_payload.payload_len)) { + dprint_bsg_err(mrioc, "%s: data_out buffer length mismatch\n", + __func__); + rval = -EINVAL; + goto out; + } + if (sgl_din_iter > (din_buf + job->reply_payload.payload_len)) { + dprint_bsg_err(mrioc, "%s: data_in buffer length mismatch\n", + __func__); + rval = -EINVAL; + goto out; + } + + switch (buf_entries->buf_type) { + case MPI3MR_BSG_BUFTYPE_RAIDMGMT_CMD: + sgl_iter = sgl_dout_iter; + sgl_dout_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_TO_DEVICE; + is_rmcb = 1; + if (count != 0) + invalid_be = 1; + break; + case MPI3MR_BSG_BUFTYPE_RAIDMGMT_RESP: + sgl_iter = sgl_din_iter; + sgl_din_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_FROM_DEVICE; + is_rmrb = 1; + if (count != 1 || !is_rmcb) + invalid_be = 1; + break; + case MPI3MR_BSG_BUFTYPE_DATA_IN: + sgl_iter = sgl_din_iter; + sgl_din_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_FROM_DEVICE; + din_cnt++; + din_size += drv_buf_iter->bsg_buf_len; + if ((din_cnt > 1) && !is_rmcb) + invalid_be = 1; + break; + case MPI3MR_BSG_BUFTYPE_DATA_OUT: + sgl_iter = sgl_dout_iter; + sgl_dout_iter += buf_entries->buf_len; + 
drv_buf_iter->data_dir = DMA_TO_DEVICE; + dout_cnt++; + dout_size += drv_buf_iter->bsg_buf_len; + if ((dout_cnt > 1) && !is_rmcb) + invalid_be = 1; + break; + case MPI3MR_BSG_BUFTYPE_MPI_REPLY: + sgl_iter = sgl_din_iter; + sgl_din_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_NONE; + mpirep_offset = count; + break; + case MPI3MR_BSG_BUFTYPE_ERR_RESPONSE: + sgl_iter = sgl_din_iter; + sgl_din_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_NONE; + erb_offset = count; + break; + case MPI3MR_BSG_BUFTYPE_MPI_REQUEST: + sgl_iter = sgl_dout_iter; + sgl_dout_iter += buf_entries->buf_len; + drv_buf_iter->data_dir = DMA_NONE; + mpi_msg_size = buf_entries->buf_len; + if ((!mpi_msg_size || (mpi_msg_size % 4)) || + (mpi_msg_size > MPI3MR_ADMIN_REQ_FRAME_SZ)) { + dprint_bsg_err(mrioc, "%s: invalid MPI message size\n", + __func__); + rval = -EINVAL; + goto out; + } + memcpy(mpi_req, sgl_iter, buf_entries->buf_len); + break; + default: + invalid_be = 1; + break; + } + if (invalid_be) { + dprint_bsg_err(mrioc, "%s: invalid buffer entries passed\n", + __func__); + rval = -EINVAL; + goto out; + } + + drv_buf_iter->bsg_buf = sgl_iter; + drv_buf_iter->bsg_buf_len = buf_entries->buf_len; + + } + if (!is_rmcb && (dout_cnt || din_cnt)) { + sg_entries = dout_cnt + din_cnt; + if (((mpi_msg_size) + (sg_entries * + sizeof(struct mpi3_sge_common))) > MPI3MR_ADMIN_REQ_FRAME_SZ) { + dprint_bsg_err(mrioc, + "%s:%d: invalid message size passed\n", + __func__, __LINE__); + rval = -EINVAL; + goto out; + } + } + if (din_size > MPI3MR_MAX_APP_XFER_SIZE) { + dprint_bsg_err(mrioc, + "%s:%d: invalid data transfer size passed for function 0x%x din_size=%d\n", + __func__, __LINE__, mpi_header->function, din_size); + rval = -EINVAL; + goto out; + } + if (dout_size > MPI3MR_MAX_APP_XFER_SIZE) { + dprint_bsg_err(mrioc, + "%s:%d: invalid data transfer size passed for function 0x%x dout_size = %d\n", + __func__, __LINE__, mpi_header->function, dout_size); + rval = -EINVAL; + goto out; + } + + drv_buf_iter = drv_bufs; + for (count = 0; count < bufcnt; count++, drv_buf_iter++) { + if (drv_buf_iter->data_dir == DMA_NONE) + continue; + + drv_buf_iter->kern_buf_len = drv_buf_iter->bsg_buf_len; + if (is_rmcb && !count) + drv_buf_iter->kern_buf_len += ((dout_cnt + din_cnt) * + sizeof(struct mpi3_sge_common)); + + if (!drv_buf_iter->kern_buf_len) + continue; + + drv_buf_iter->kern_buf = dma_alloc_coherent(&mrioc->pdev->dev, + drv_buf_iter->kern_buf_len, &drv_buf_iter->kern_buf_dma, + GFP_KERNEL); + if (!drv_buf_iter->kern_buf) { + rval = -ENOMEM; + goto out; + } + if (drv_buf_iter->data_dir == DMA_TO_DEVICE) { + tmplen = min(drv_buf_iter->kern_buf_len, + drv_buf_iter->bsg_buf_len); + memcpy(drv_buf_iter->kern_buf, drv_buf_iter->bsg_buf, tmplen); + } + } + + if (erb_offset != 0xFF) { + sense_buff_k = kzalloc(erbsz, GFP_KERNEL); + if (!sense_buff_k) { + rval = -ENOMEM; + goto out; + } + } + + if (mutex_lock_interruptible(&mrioc->bsg_cmds.mutex)) { + rval = -ERESTARTSYS; + goto out; + } + if (mrioc->bsg_cmds.state & MPI3MR_CMD_PENDING) { + rval = -EAGAIN; + dprint_bsg_err(mrioc, "%s: command is in use\n", __func__); + mutex_unlock(&mrioc->bsg_cmds.mutex); + goto out; + } + if (mrioc->unrecoverable) { + dprint_bsg_err(mrioc, "%s: unrecoverable controller\n", + __func__); + rval = -EFAULT; + mutex_unlock(&mrioc->bsg_cmds.mutex); + goto out; + } + if (mrioc->reset_in_progress) { + dprint_bsg_err(mrioc, "%s: reset in progress\n", __func__); + rval = -EAGAIN; + mutex_unlock(&mrioc->bsg_cmds.mutex); + goto out; + } + if 
(mrioc->stop_bsgs) { + dprint_bsg_err(mrioc, "%s: bsgs are blocked\n", __func__); + rval = -EAGAIN; + mutex_unlock(&mrioc->bsg_cmds.mutex); + goto out; + } + + if (mpi_header->function != MPI3_BSG_FUNCTION_NVME_ENCAPSULATED) { + mpi3mr_bsg_build_sgl(mpi_req, (mpi_msg_size), + drv_bufs, bufcnt, is_rmcb, is_rmrb, + (dout_cnt + din_cnt)); + } + + if (mpi_header->function == MPI3_BSG_FUNCTION_SCSI_TASK_MGMT) { + tm_req = (struct mpi3_scsi_task_mgmt_request *)mpi_req; + if (tm_req->task_type != + MPI3_SCSITASKMGMT_TASKTYPE_ABORT_TASK) { + dev_handle = tm_req->dev_handle; + block_io = 1; + } + } + if (block_io) { + tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, dev_handle); + if (tgtdev && tgtdev->starget && tgtdev->starget->hostdata) { + stgt_priv = (struct mpi3mr_stgt_priv_data *) + tgtdev->starget->hostdata; + atomic_inc(&stgt_priv->block_io); + mpi3mr_tgtdev_put(tgtdev); + } + } + + mrioc->bsg_cmds.state = MPI3MR_CMD_PENDING; + mrioc->bsg_cmds.is_waiting = 1; + mrioc->bsg_cmds.callback = NULL; + mrioc->bsg_cmds.is_sense = 0; + mrioc->bsg_cmds.sensebuf = sense_buff_k; + memset(mrioc->bsg_cmds.reply, 0, mrioc->reply_sz); + mpi_header->host_tag = cpu_to_le16(MPI3MR_HOSTTAG_BSG_CMDS); + if (mrioc->logging_level & MPI3_DEBUG_BSG_INFO) { + dprint_bsg_info(mrioc, + "%s: posting bsg request to the controller\n", __func__); + dprint_dump(mpi_req, MPI3MR_ADMIN_REQ_FRAME_SZ, + "bsg_mpi3_req"); + if (mpi_header->function == MPI3_BSG_FUNCTION_MGMT_PASSTHROUGH) { + drv_buf_iter = &drv_bufs[0]; + dprint_dump(drv_buf_iter->kern_buf, + drv_buf_iter->kern_buf_len, "mpi3_mgmt_req"); + } + } + + init_completion(&mrioc->bsg_cmds.done); + rval = mpi3mr_admin_request_post(mrioc, mpi_req, + MPI3MR_ADMIN_REQ_FRAME_SZ, 0); + + + if (rval) { + mrioc->bsg_cmds.is_waiting = 0; + dprint_bsg_err(mrioc, + "%s: posting bsg request is failed\n", __func__); + rval = -EAGAIN; + goto out_unlock; + } + wait_for_completion_timeout(&mrioc->bsg_cmds.done, + (karg->timeout * HZ)); + if (block_io && stgt_priv) + atomic_dec(&stgt_priv->block_io); + if (!(mrioc->bsg_cmds.state & MPI3MR_CMD_COMPLETE)) { + mrioc->bsg_cmds.is_waiting = 0; + rval = -EAGAIN; + if (mrioc->bsg_cmds.state & MPI3MR_CMD_RESET) + goto out_unlock; + dprint_bsg_err(mrioc, + "%s: bsg request timedout after %d seconds\n", __func__, + karg->timeout); + if (mrioc->logging_level & MPI3_DEBUG_BSG_ERROR) { + dprint_dump(mpi_req, MPI3MR_ADMIN_REQ_FRAME_SZ, + "bsg_mpi3_req"); + if (mpi_header->function == + MPI3_BSG_FUNCTION_MGMT_PASSTHROUGH) { + drv_buf_iter = &drv_bufs[0]; + dprint_dump(drv_buf_iter->kern_buf, + drv_buf_iter->kern_buf_len, "mpi3_mgmt_req"); + } + } + + if (mpi_header->function == MPI3_BSG_FUNCTION_SCSI_IO) + mpi3mr_issue_tm(mrioc, + MPI3_SCSITASKMGMT_TASKTYPE_TARGET_RESET, + mpi_header->function_dependent, 0, + MPI3MR_HOSTTAG_BLK_TMS, MPI3MR_RESETTM_TIMEOUT, + &mrioc->host_tm_cmds, &resp_code, NULL); + if (!(mrioc->bsg_cmds.state & MPI3MR_CMD_COMPLETE) && + !(mrioc->bsg_cmds.state & MPI3MR_CMD_RESET)) + mpi3mr_soft_reset_handler(mrioc, + MPI3MR_RESET_FROM_APP_TIMEOUT, 1); + goto out_unlock; + } + dprint_bsg_info(mrioc, "%s: bsg request is completed\n", __func__); + + if ((mrioc->bsg_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK) + != MPI3_IOCSTATUS_SUCCESS) { + dprint_bsg_info(mrioc, + "%s: command failed, ioc_status(0x%04x) log_info(0x%08x)\n", + __func__, + (mrioc->bsg_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK), + mrioc->bsg_cmds.ioc_loginfo); + } + + if ((mpirep_offset != 0xFF) && + drv_bufs[mpirep_offset].bsg_buf_len) { + drv_buf_iter = 
&drv_bufs[mpirep_offset]; + drv_buf_iter->kern_buf_len = (sizeof(*bsg_reply_buf) - 1 + + mrioc->reply_sz); + bsg_reply_buf = kzalloc(drv_buf_iter->kern_buf_len, GFP_KERNEL); + + if (!bsg_reply_buf) { + rval = -ENOMEM; + goto out_unlock; + } + if (mrioc->bsg_cmds.state & MPI3MR_CMD_REPLY_VALID) { + bsg_reply_buf->mpi_reply_type = + MPI3MR_BSG_MPI_REPLY_BUFTYPE_ADDRESS; + memcpy(bsg_reply_buf->reply_buf, + mrioc->bsg_cmds.reply, mrioc->reply_sz); + } else { + bsg_reply_buf->mpi_reply_type = + MPI3MR_BSG_MPI_REPLY_BUFTYPE_STATUS; + status_desc = (struct mpi3_status_reply_descriptor *) + bsg_reply_buf->reply_buf; + status_desc->ioc_status = mrioc->bsg_cmds.ioc_status; + status_desc->ioc_log_info = mrioc->bsg_cmds.ioc_loginfo; + } + tmplen = min(drv_buf_iter->kern_buf_len, + drv_buf_iter->bsg_buf_len); + memcpy(drv_buf_iter->bsg_buf, bsg_reply_buf, tmplen); + } + + if (erb_offset != 0xFF && mrioc->bsg_cmds.sensebuf && + mrioc->bsg_cmds.is_sense) { + drv_buf_iter = &drv_bufs[erb_offset]; + tmplen = min(erbsz, drv_buf_iter->bsg_buf_len); + memcpy(drv_buf_iter->bsg_buf, sense_buff_k, tmplen); + } + + drv_buf_iter = drv_bufs; + for (count = 0; count < bufcnt; count++, drv_buf_iter++) { + if (drv_buf_iter->data_dir == DMA_NONE) + continue; + if (drv_buf_iter->data_dir == DMA_FROM_DEVICE) { + tmplen = min(drv_buf_iter->kern_buf_len, + drv_buf_iter->bsg_buf_len); + memcpy(drv_buf_iter->bsg_buf, + drv_buf_iter->kern_buf, tmplen); + } + } + +out_unlock: + if (din_buf) { + *reply_payload_rcv_len = + sg_copy_from_buffer(job->reply_payload.sg_list, + job->reply_payload.sg_cnt, + din_buf, job->reply_payload.payload_len); + } + mrioc->bsg_cmds.is_sense = 0; + mrioc->bsg_cmds.sensebuf = NULL; + mrioc->bsg_cmds.state = MPI3MR_CMD_NOTUSED; + mutex_unlock(&mrioc->bsg_cmds.mutex); +out: + kfree(sense_buff_k); + kfree(dout_buf); + kfree(din_buf); + kfree(mpi_req); + if (drv_bufs) { + drv_buf_iter = drv_bufs; + for (count = 0; count < bufcnt; count++, drv_buf_iter++) { + if (drv_buf_iter->kern_buf && drv_buf_iter->kern_buf_dma) + dma_free_coherent(&mrioc->pdev->dev, + drv_buf_iter->kern_buf_len, + drv_buf_iter->kern_buf, + drv_buf_iter->kern_buf_dma); + } + kfree(drv_bufs); + } + kfree(bsg_reply_buf); + return rval; +} + /** * mpi3mr_bsg_request - bsg request entry point * @job: BSG job reference @@ -402,6 +916,9 @@ int mpi3mr_bsg_request(struct bsg_job *job) case MPI3MR_DRV_CMD: rval = mpi3mr_bsg_process_drv_cmds(job); break; + case MPI3MR_MPT_CMD: + rval = mpi3mr_bsg_process_mpt_cmds(job, &reply_payload_rcv_len); + break; default: pr_err("%s: unsupported BSG command(0x%08x)\n", MPI3MR_DRIVER_NAME, bsg_req->cmd_type); diff --git a/drivers/scsi/mpi3mr/mpi3mr_debug.h b/drivers/scsi/mpi3mr/mpi3mr_debug.h index 65bfac72948c..2464c400a5a4 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_debug.h +++ b/drivers/scsi/mpi3mr/mpi3mr_debug.h @@ -124,6 +124,31 @@ #endif /* MPT3SAS_DEBUG_H_INCLUDED */ +/** + * dprint_dump - print contents of a memory buffer + * @req: Pointer to a memory buffer + * @sz: Memory buffer size + * @namestr: Name String to identify the buffer type + */ +static inline void +dprint_dump(void *req, int sz, const char *name_string) +{ + int i; + __le32 *mfp = (__le32 *)req; + + sz = sz/4; + if (name_string) + pr_info("%s:\n\t", name_string); + else + pr_info("request:\n\t"); + for (i = 0; i < sz; i++) { + if (i && ((i % 8) == 0)) + pr_info("\n\t"); + pr_info("%08x ", le32_to_cpu(mfp[i])); + } + pr_info("\n"); +} + /** * dprint_dump_req - print message frame contents * @req: pointer to message frame diff --git 
a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c index a03e39083a42..450574fc1fec 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_os.c +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c @@ -634,7 +634,7 @@ static struct mpi3mr_tgt_dev *__mpi3mr_get_tgtdev_by_handle( * * Return: Target device reference. */ -static struct mpi3mr_tgt_dev *mpi3mr_get_tgtdev_by_handle( +struct mpi3mr_tgt_dev *mpi3mr_get_tgtdev_by_handle( struct mpi3mr_ioc *mrioc, u16 handle) { struct mpi3mr_tgt_dev *tgtdev; @@ -2996,7 +2996,7 @@ inline void mpi3mr_poll_pend_io_completions(struct mpi3mr_ioc *mrioc) * * Return: 0 on success, non-zero on errors */ -static int mpi3mr_issue_tm(struct mpi3mr_ioc *mrioc, u8 tm_type, +int mpi3mr_issue_tm(struct mpi3mr_ioc *mrioc, u8 tm_type, u16 handle, uint lun, u16 htag, ulong timeout, struct mpi3mr_drv_cmd *drv_cmd, u8 *resp_code, struct scsi_cmnd *scmd) From patchwork Fri Apr 22 11:54:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sumit Saxena X-Patchwork-Id: 12823319 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C1438C433FE for ; Fri, 22 Apr 2022 11:55:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1447214AbiDVL6O (ORCPT ); Fri, 22 Apr 2022 07:58:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56554 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1447216AbiDVL6L (ORCPT ); Fri, 22 Apr 2022 07:58:11 -0400 Received: from mail-pf1-x432.google.com (mail-pf1-x432.google.com [IPv6:2607:f8b0:4864:20::432]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D9FAC54FB5 for ; Fri, 22 Apr 2022 04:55:17 -0700 (PDT) Received: by mail-pf1-x432.google.com with SMTP id y14so7027081pfe.10 for ; Fri, 22 Apr 2022 04:55:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version; bh=JSD8h60s8kip2RKi/ERh+BVNlkD9o7n5obd6asbh+Ts=; b=Ls+FtRLWmqoNYJKdoP2q3G+GYStMMCE7EhDrRW9/U3exb3LN9sN9htPqQroEPcc3sg CkojMnc9ExeoJmCuFYX4IA0dLihFDUNJFGfSwheiD5XIodzMUNHGoxT7o9mcm08hX0ts Ji7SFer0caCKbt4JUdSl2WZrIf/ESYyObqB9E= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version; bh=JSD8h60s8kip2RKi/ERh+BVNlkD9o7n5obd6asbh+Ts=; b=JQ/NXGxhyvut2h4V36b2LQShkomT82UGS1iz3XQ9i8Q13jZEeQXJRtoLDOuOe5WYW9 wvLIs1pywOS/FTzNkVk5MTfNvVS9HX02BNZ3eCYpdcQTjO9qsKE17mYZ7FskBeRc308Q LXvAdED6khdx/uZtulUg89H1NlUoL9D5KkBPCmYIJShJCzspC3P7TSsIJASSVJFzutGH lrCPGgxWo4boS3LRA2u0/5bVoNPtCeoQVWGRYR4EI8au9850/qIQfSjsg43WPq5sY8wx Z8R5u1NpkLTQX7MsZ7YoMNBihxSyzH3KPrkq5vpbN0ZkEOpyeKYL6LSAm9DP2IjmMTua lovQ== X-Gm-Message-State: AOAM530nZwmSAQDQCGd3cDF+iRUFdqnNugeaJGOVMSZhrAnpBn5CvyfP 3yat4ElMROZNmtfEDpBLUvie1NjX4IrNq4bQJJilwZLkNXffFwbRCl0/0yWZtZtFxpEHTJCzs+t 0EB80TCkeCbRxswAvfjlqmeCfVmN7bxyqqZv5VP/1cdYeglpEHXYB5piV4Hx7iCS8SlMTLLED3A ohTppQ+/s= X-Google-Smtp-Source: ABdhPJzn3Pov+3EBWJKXvlixjSbXAWdC3TwfhaFhy5oR7BSbYTTXkOQhpdFHkXExy/ef3fVI30pAjw== X-Received: by 2002:a05:6a00:1310:b0:4ca:cc46:20c7 with SMTP id j16-20020a056a00131000b004cacc4620c7mr4617313pfu.44.1650628516780; Fri, 22 Apr 2022 04:55:16 -0700 (PDT) Received: from dhcp-10-123-20-15.dhcp.broadcom.net 
([192.19.234.250]) by smtp.gmail.com with ESMTPSA id g6-20020a17090a714600b001d7f3bb11d7sm2367981pjs.53.2022.04.22.04.55.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Apr 2022 04:55:15 -0700 (PDT) From: Sumit Saxena To: linux-scsi@vger.kernel.org Cc: martin.petersen@oracle.com, bvanassche@acm.org, hch@lst.de, hare@suse.de, himanshu.madhani@oracle.com, sathya.prakash@broadcom.com, kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com, sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena Subject: [PATCH v5 5/8] mpi3mr: add support for PEL commands Date: Fri, 22 Apr 2022 07:54:20 -0400 Message-Id: <20220422115423.279805-6-sumit.saxena@broadcom.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20220422115423.279805-1-sumit.saxena@broadcom.com> References: <20220422115423.279805-1-sumit.saxena@broadcom.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This patch includes driver support for the management applications to enable the persistent event log(PEL) notification. Upon receipt of events, driver will increment sysfs variable named as event_counter. Application would poll for event_counter value and any change in it would signal the applications about events. Reviewed-by: Himanshu Madhani Signed-off-by: Sumit Saxena --- drivers/scsi/mpi3mr/mpi3mr.h | 36 +++- drivers/scsi/mpi3mr/mpi3mr_app.c | 205 ++++++++++++++++++++ drivers/scsi/mpi3mr/mpi3mr_fw.c | 310 +++++++++++++++++++++++++++++++ drivers/scsi/mpi3mr/mpi3mr_os.c | 42 +++++ 4 files changed, 592 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index 37be9e28e0b2..cc54231da658 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -53,6 +53,7 @@ extern spinlock_t mrioc_list_lock; extern struct list_head mrioc_list; extern int prot_mask; +extern atomic64_t event_counter; #define MPI3MR_DRIVER_VERSION "8.0.0.68.0" #define MPI3MR_DRIVER_RELDATE "10-February-2022" @@ -91,6 +92,8 @@ extern int prot_mask; #define MPI3MR_HOSTTAG_INVALID 0xFFFF #define MPI3MR_HOSTTAG_INITCMDS 1 #define MPI3MR_HOSTTAG_BSG_CMDS 2 +#define MPI3MR_HOSTTAG_PEL_ABORT 3 +#define MPI3MR_HOSTTAG_PEL_WAIT 4 #define MPI3MR_HOSTTAG_BLK_TMS 5 #define MPI3MR_NUM_DEVRMCMD 16 @@ -152,6 +155,7 @@ extern int prot_mask; /* Command retry count definitions */ #define MPI3MR_DEV_RMHS_RETRY_COUNT 3 +#define MPI3MR_PEL_RETRY_COUNT 3 /* Default target device queue depth */ #define MPI3MR_DEFAULT_SDEV_QD 32 @@ -748,6 +752,16 @@ struct scmd_priv { * @current_event: Firmware event currently in process * @driver_info: Driver, Kernel, OS information to firmware * @change_count: Topology change count + * @pel_enabled: Persistent Event Log(PEL) enabled or not + * @pel_abort_requested: PEL abort is requested or not + * @pel_class: PEL Class identifier + * @pel_locale: PEL Locale identifier + * @pel_cmds: Command tracker for PEL wait command + * @pel_abort_cmd: Command tracker for PEL abort command + * @pel_newest_seqnum: Newest PEL sequenece number + * @pel_seqnum_virt: PEL sequence number virtual address + * @pel_seqnum_dma: PEL sequence number DMA address + * @pel_seqnum_sz: PEL sequenece number size * @op_reply_q_offset: Operational reply queue offset with MSIx * @default_qcount: Total Default queues * @active_poll_qcount: Currently active poll queue count @@ -894,8 +908,20 @@ struct mpi3mr_ioc { struct mpi3mr_fwevt *current_event; struct mpi3_driver_info_layout driver_info; u16 change_count; - u16 op_reply_q_offset; + u8 
pel_enabled; + u8 pel_abort_requested; + u8 pel_class; + u16 pel_locale; + struct mpi3mr_drv_cmd pel_cmds; + struct mpi3mr_drv_cmd pel_abort_cmd; + + u32 pel_newest_seqnum; + void *pel_seqnum_virt; + dma_addr_t pel_seqnum_dma; + u32 pel_seqnum_sz; + + u16 op_reply_q_offset; u16 default_qcount; u16 active_poll_qcount; u16 requested_poll_qcount; @@ -918,6 +944,7 @@ struct mpi3mr_ioc { * @send_ack: Event acknowledgment required or not * @process_evt: Bottomhalf processing required or not * @evt_ctx: Event context to send in Ack + * @event_data_size: size of the event data in bytes * @pending_at_sml: waiting for device add/remove API to complete * @discard: discard this event * @ref_count: kref count @@ -931,6 +958,7 @@ struct mpi3mr_fwevt { bool send_ack; bool process_evt; u32 evt_ctx; + u16 event_data_size; bool pending_at_sml; bool discard; struct kref ref_count; @@ -1022,5 +1050,11 @@ int mpi3mr_issue_tm(struct mpi3mr_ioc *mrioc, u8 tm_type, u8 *resp_code, struct scsi_cmnd *scmd); struct mpi3mr_tgt_dev *mpi3mr_get_tgtdev_by_handle( struct mpi3mr_ioc *mrioc, u16 handle); +void mpi3mr_pel_get_seqnum_complete(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd); +int mpi3mr_pel_get_seqnum_post(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd); +void mpi3mr_app_save_logdata(struct mpi3mr_ioc *mrioc, char *event_data, + u16 event_data_size); #endif /*MPI3MR_H_INCLUDED*/ diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c index b665ba4c39ef..075dcc34f0e8 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_app.c +++ b/drivers/scsi/mpi3mr/mpi3mr_app.c @@ -11,6 +11,95 @@ #include #include +/** + * mpi3mr_bsg_pel_abort - sends PEL abort request + * @mrioc: Adapter instance reference + * + * This function sends PEL abort request to the firmware through + * admin request queue. 
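+ * The abort is tracked through the dedicated pel_abort_cmd tracker and targets the outstanding PEL wait request issued with host tag MPI3MR_HOSTTAG_PEL_WAIT.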
+ * + * Return: 0 on success, -1 on failure + */ +static int mpi3mr_bsg_pel_abort(struct mpi3mr_ioc *mrioc) +{ + struct mpi3_pel_req_action_abort pel_abort_req; + struct mpi3_pel_reply *pel_reply; + int retval = 0; + u16 pe_log_status; + + if (mrioc->reset_in_progress) { + dprint_bsg_err(mrioc, "%s: reset in progress\n", __func__); + return -1; + } + if (mrioc->stop_bsgs) { + dprint_bsg_err(mrioc, "%s: bsgs are blocked\n", __func__); + return -1; + } + + memset(&pel_abort_req, 0, sizeof(pel_abort_req)); + mutex_lock(&mrioc->pel_abort_cmd.mutex); + if (mrioc->pel_abort_cmd.state & MPI3MR_CMD_PENDING) { + dprint_bsg_err(mrioc, "%s: command is in use\n", __func__); + mutex_unlock(&mrioc->pel_abort_cmd.mutex); + return -1; + } + mrioc->pel_abort_cmd.state = MPI3MR_CMD_PENDING; + mrioc->pel_abort_cmd.is_waiting = 1; + mrioc->pel_abort_cmd.callback = NULL; + pel_abort_req.host_tag = cpu_to_le16(MPI3MR_HOSTTAG_PEL_ABORT); + pel_abort_req.function = MPI3_FUNCTION_PERSISTENT_EVENT_LOG; + pel_abort_req.action = MPI3_PEL_ACTION_ABORT; + pel_abort_req.abort_host_tag = cpu_to_le16(MPI3MR_HOSTTAG_PEL_WAIT); + + mrioc->pel_abort_requested = 1; + init_completion(&mrioc->pel_abort_cmd.done); + retval = mpi3mr_admin_request_post(mrioc, &pel_abort_req, + sizeof(pel_abort_req), 0); + if (retval) { + retval = -1; + dprint_bsg_err(mrioc, "%s: admin request post failed\n", + __func__); + mrioc->pel_abort_requested = 0; + goto out_unlock; + } + + wait_for_completion_timeout(&mrioc->pel_abort_cmd.done, + (MPI3MR_INTADMCMD_TIMEOUT * HZ)); + if (!(mrioc->pel_abort_cmd.state & MPI3MR_CMD_COMPLETE)) { + mrioc->pel_abort_cmd.is_waiting = 0; + dprint_bsg_err(mrioc, "%s: command timedout\n", __func__); + if (!(mrioc->pel_abort_cmd.state & MPI3MR_CMD_RESET)) + mpi3mr_soft_reset_handler(mrioc, + MPI3MR_RESET_FROM_PELABORT_TIMEOUT, 1); + retval = -1; + goto out_unlock; + } + if ((mrioc->pel_abort_cmd.ioc_status & MPI3_IOCSTATUS_STATUS_MASK) + != MPI3_IOCSTATUS_SUCCESS) { + dprint_bsg_err(mrioc, + "%s: command failed, ioc_status(0x%04x) log_info(0x%08x)\n", + __func__, (mrioc->pel_abort_cmd.ioc_status & + MPI3_IOCSTATUS_STATUS_MASK), + mrioc->pel_abort_cmd.ioc_loginfo); + retval = -1; + goto out_unlock; + } + if (mrioc->pel_abort_cmd.state & MPI3MR_CMD_REPLY_VALID) { + pel_reply = (struct mpi3_pel_reply *)mrioc->pel_abort_cmd.reply; + pe_log_status = le16_to_cpu(pel_reply->pe_log_status); + if (pe_log_status != MPI3_PEL_STATUS_SUCCESS) { + dprint_bsg_err(mrioc, + "%s: command failed, pel_status(0x%04x)\n", + __func__, pe_log_status); + retval = -1; + } + } + +out_unlock: + mrioc->pel_abort_cmd.state = MPI3MR_CMD_NOTUSED; + mutex_unlock(&mrioc->pel_abort_cmd.mutex); + return retval; +} /** * mpi3mr_bsg_verify_adapter - verify adapter number is valid * @ioc_number: Adapter number @@ -107,6 +196,87 @@ static long mpi3mr_get_logdata(struct mpi3mr_ioc *mrioc, return -EINVAL; } +/** + * mpi3mr_bsg_pel_enable - Handler for PEL enable driver + * @mrioc: Adapter instance reference + * @job: BSG job pointer + * + * This function is the handler for PEL enable driver. + * Validates the application given class and locale and if + * requires aborts the existing PEL wait request and/or issues + * new PEL wait request to the firmware and returns. + * + * Return: 0 on success and proper error codes on failure. 
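+ * + * Note: the class and locale supplied by the application are merged with any previously enabled class/locale before the PEL wait request is re-issued.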
+ */ +static long mpi3mr_bsg_pel_enable(struct mpi3mr_ioc *mrioc, + struct bsg_job *job) +{ + long rval = -EINVAL; + struct mpi3mr_bsg_out_pel_enable pel_enable; + u8 issue_pel_wait; + u8 tmp_class; + u16 tmp_locale; + + if (job->request_payload.payload_len != sizeof(pel_enable)) { + dprint_bsg_err(mrioc, "%s: invalid size argument\n", + __func__); + return rval; + } + + sg_copy_to_buffer(job->request_payload.sg_list, + job->request_payload.sg_cnt, + &pel_enable, sizeof(pel_enable)); + + if (pel_enable.pel_class > MPI3_PEL_CLASS_FAULT) { + dprint_bsg_err(mrioc, "%s: out of range class %d sent\n", + __func__, pel_enable.pel_class); + rval = 0; + goto out; + } + if (!mrioc->pel_enabled) + issue_pel_wait = 1; + else { + if ((mrioc->pel_class <= pel_enable.pel_class) && + !((mrioc->pel_locale & pel_enable.pel_locale) ^ + pel_enable.pel_locale)) { + issue_pel_wait = 0; + rval = 0; + } else { + pel_enable.pel_locale |= mrioc->pel_locale; + + if (mrioc->pel_class < pel_enable.pel_class) + pel_enable.pel_class = mrioc->pel_class; + + rval = mpi3mr_bsg_pel_abort(mrioc); + if (rval) { + dprint_bsg_err(mrioc, + "%s: pel_abort failed, status(%ld)\n", + __func__, rval); + goto out; + } + issue_pel_wait = 1; + } + } + if (issue_pel_wait) { + tmp_class = mrioc->pel_class; + tmp_locale = mrioc->pel_locale; + mrioc->pel_class = pel_enable.pel_class; + mrioc->pel_locale = pel_enable.pel_locale; + mrioc->pel_enabled = 1; + rval = mpi3mr_pel_get_seqnum_post(mrioc, NULL); + if (rval) { + mrioc->pel_class = tmp_class; + mrioc->pel_locale = tmp_locale; + mrioc->pel_enabled = 0; + dprint_bsg_err(mrioc, + "%s: pel get sequence number failed, status(%ld)\n", + __func__, rval); + } + } + +out: + return rval; +} /** * mpi3mr_get_all_tgt_info - Get all target information * @mrioc: Adapter instance reference @@ -372,6 +542,9 @@ static long mpi3mr_bsg_process_drv_cmds(struct bsg_job *job) case MPI3MR_DRVBSG_OPCODE_GETLOGDATA: rval = mpi3mr_get_logdata(mrioc, job); break; + case MPI3MR_DRVBSG_OPCODE_PELENABLE: + rval = mpi3mr_bsg_pel_enable(mrioc, job); + break; case MPI3MR_DRVBSG_OPCODE_UNKNOWN: default: pr_err("%s: unsupported driver command opcode %d\n", @@ -897,6 +1070,38 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job, unsigned int *reply return rval; } +/** + * mpi3mr_app_save_logdata - Save Log Data events + * @mrioc: Adapter instance reference + * @event_data: event data associated with log data event + * @event_data_size: event data size to copy + * + * If log data event caching is enabled by the applicatiobns, + * then this function saves the log data in the circular queue + * and Sends async signal SIGIO to indicate there is an async + * event from the firmware to the event monitoring applications. 
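+ * Entries are stored in the circular logdata_buf (logdata_entry_sz bytes each) and the driver-level event_counter sysfs attribute is incremented to notify polling applications.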
+ * + * Return:Nothing + */ +void mpi3mr_app_save_logdata(struct mpi3mr_ioc *mrioc, char *event_data, + u16 event_data_size) +{ + u32 index = mrioc->logdata_buf_idx, sz; + struct mpi3mr_logdata_entry *entry; + + if (!(mrioc->logdata_buf)) + return; + + entry = (struct mpi3mr_logdata_entry *) + (mrioc->logdata_buf + (index * mrioc->logdata_entry_sz)); + entry->valid_entry = 1; + sz = min(mrioc->logdata_entry_sz, event_data_size); + memcpy(entry->data, event_data, sz); + mrioc->logdata_buf_idx = + ((++index) % MPI3MR_BSG_LOGDATA_MAX_ENTRIES); + atomic64_inc(&event_counter); +} + /** * mpi3mr_bsg_request - bsg request entry point * @job: BSG job reference diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c index 480730721f50..74e09727a1b8 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_fw.c +++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c @@ -15,6 +15,8 @@ mpi3mr_issue_reset(struct mpi3mr_ioc *mrioc, u16 reset_type, u32 reset_reason); static int mpi3mr_setup_admin_qpair(struct mpi3mr_ioc *mrioc); static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc, struct mpi3_ioc_facts_data *facts_data); +static void mpi3mr_pel_wait_complete(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd); static int poll_queues; module_param(poll_queues, int, 0444); @@ -301,6 +303,10 @@ mpi3mr_get_drv_cmd(struct mpi3mr_ioc *mrioc, u16 host_tag, return &mrioc->bsg_cmds; case MPI3MR_HOSTTAG_BLK_TMS: return &mrioc->host_tm_cmds; + case MPI3MR_HOSTTAG_PEL_ABORT: + return &mrioc->pel_abort_cmd; + case MPI3MR_HOSTTAG_PEL_WAIT: + return &mrioc->pel_cmds; case MPI3MR_HOSTTAG_INVALID: if (def_reply && def_reply->function == MPI3_FUNCTION_EVENT_NOTIFICATION) @@ -2837,6 +2843,14 @@ static int mpi3mr_alloc_reply_sense_bufs(struct mpi3mr_ioc *mrioc) if (!mrioc->host_tm_cmds.reply) goto out_failed; + mrioc->pel_cmds.reply = kzalloc(mrioc->reply_sz, GFP_KERNEL); + if (!mrioc->pel_cmds.reply) + goto out_failed; + + mrioc->pel_abort_cmd.reply = kzalloc(mrioc->reply_sz, GFP_KERNEL); + if (!mrioc->pel_abort_cmd.reply) + goto out_failed; + mrioc->dev_handle_bitmap_sz = mrioc->facts.max_devhandle / 8; if (mrioc->facts.max_devhandle % 8) mrioc->dev_handle_bitmap_sz++; @@ -3734,6 +3748,16 @@ int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc) goto out_failed; } + if (!mrioc->pel_seqnum_virt) { + dprint_init(mrioc, "allocating memory for pel_seqnum_virt\n"); + mrioc->pel_seqnum_sz = sizeof(struct mpi3_pel_seq); + mrioc->pel_seqnum_virt = dma_alloc_coherent(&mrioc->pdev->dev, + mrioc->pel_seqnum_sz, &mrioc->pel_seqnum_dma, + GFP_KERNEL); + if (!mrioc->pel_seqnum_virt) + goto out_failed_noretry; + } + retval = mpi3mr_enable_events(mrioc); if (retval) { ioc_err(mrioc, "failed to enable events %d\n", @@ -3843,6 +3867,16 @@ int mpi3mr_reinit_ioc(struct mpi3mr_ioc *mrioc, u8 is_resume) goto out_failed; } + if (!mrioc->pel_seqnum_virt) { + dprint_reset(mrioc, "allocating memory for pel_seqnum_virt\n"); + mrioc->pel_seqnum_sz = sizeof(struct mpi3_pel_seq); + mrioc->pel_seqnum_virt = dma_alloc_coherent(&mrioc->pdev->dev, + mrioc->pel_seqnum_sz, &mrioc->pel_seqnum_dma, + GFP_KERNEL); + if (!mrioc->pel_seqnum_virt) + goto out_failed_noretry; + } + if (mrioc->shost->nr_hw_queues > mrioc->num_op_reply_q) { ioc_err(mrioc, "cannot create minimum number of operational queues expected:%d created:%d\n", @@ -3958,6 +3992,10 @@ void mpi3mr_memset_buffers(struct mpi3mr_ioc *mrioc) sizeof(*mrioc->bsg_cmds.reply)); memset(mrioc->host_tm_cmds.reply, 0, sizeof(*mrioc->host_tm_cmds.reply)); + memset(mrioc->pel_cmds.reply, 0, + 
sizeof(*mrioc->pel_cmds.reply)); + memset(mrioc->pel_abort_cmd.reply, 0, + sizeof(*mrioc->pel_abort_cmd.reply)); for (i = 0; i < MPI3MR_NUM_DEVRMCMD; i++) memset(mrioc->dev_rmhs_cmds[i].reply, 0, sizeof(*mrioc->dev_rmhs_cmds[i].reply)); @@ -4064,6 +4102,12 @@ void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc) kfree(mrioc->host_tm_cmds.reply); mrioc->host_tm_cmds.reply = NULL; + kfree(mrioc->pel_cmds.reply); + mrioc->pel_cmds.reply = NULL; + + kfree(mrioc->pel_abort_cmd.reply); + mrioc->pel_abort_cmd.reply = NULL; + for (i = 0; i < MPI3MR_NUM_EVTACKCMD; i++) { kfree(mrioc->evtack_cmds[i].reply); mrioc->evtack_cmds[i].reply = NULL; @@ -4112,6 +4156,16 @@ void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc) mrioc->admin_req_base, mrioc->admin_req_dma); mrioc->admin_req_base = NULL; } + + if (mrioc->pel_seqnum_virt) { + dma_free_coherent(&mrioc->pdev->dev, mrioc->pel_seqnum_sz, + mrioc->pel_seqnum_virt, mrioc->pel_seqnum_dma); + mrioc->pel_seqnum_virt = NULL; + } + + kfree(mrioc->logdata_buf); + mrioc->logdata_buf = NULL; + } /** @@ -4260,6 +4314,254 @@ static void mpi3mr_flush_drv_cmds(struct mpi3mr_ioc *mrioc) cmdptr = &mrioc->evtack_cmds[i]; mpi3mr_drv_cmd_comp_reset(mrioc, cmdptr); } + + cmdptr = &mrioc->pel_cmds; + mpi3mr_drv_cmd_comp_reset(mrioc, cmdptr); + + cmdptr = &mrioc->pel_abort_cmd; + mpi3mr_drv_cmd_comp_reset(mrioc, cmdptr); + +} + +/** + * mpi3mr_pel_wait_post - Issue PEL Wait + * @mrioc: Adapter instance reference + * @drv_cmd: Internal command tracker + * + * Issue PEL Wait MPI request through admin queue and return. + * + * Return: Nothing. + */ +static void mpi3mr_pel_wait_post(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd) +{ + struct mpi3_pel_req_action_wait pel_wait; + + mrioc->pel_abort_requested = false; + + memset(&pel_wait, 0, sizeof(pel_wait)); + drv_cmd->state = MPI3MR_CMD_PENDING; + drv_cmd->is_waiting = 0; + drv_cmd->callback = mpi3mr_pel_wait_complete; + drv_cmd->ioc_status = 0; + drv_cmd->ioc_loginfo = 0; + pel_wait.host_tag = cpu_to_le16(MPI3MR_HOSTTAG_PEL_WAIT); + pel_wait.function = MPI3_FUNCTION_PERSISTENT_EVENT_LOG; + pel_wait.action = MPI3_PEL_ACTION_WAIT; + pel_wait.starting_sequence_number = cpu_to_le32(mrioc->pel_newest_seqnum); + pel_wait.locale = cpu_to_le16(mrioc->pel_locale); + pel_wait.class = cpu_to_le16(mrioc->pel_class); + pel_wait.wait_time = MPI3_PEL_WAITTIME_INFINITE_WAIT; + dprint_bsg_info(mrioc, "sending pel_wait seqnum(%d), class(%d), locale(0x%08x)\n", + mrioc->pel_newest_seqnum, mrioc->pel_class, mrioc->pel_locale); + + if (mpi3mr_admin_request_post(mrioc, &pel_wait, sizeof(pel_wait), 0)) { + dprint_bsg_err(mrioc, + "Issuing PELWait: Admin post failed\n"); + drv_cmd->state = MPI3MR_CMD_NOTUSED; + drv_cmd->callback = NULL; + drv_cmd->retry_count = 0; + mrioc->pel_enabled = false; + } +} + +/** + * mpi3mr_pel_get_seqnum_post - Issue PEL Get Sequence number + * @mrioc: Adapter instance reference + * @drv_cmd: Internal command tracker + * + * Issue PEL get sequence number MPI request through admin queue + * and return. + * + * Return: 0 on success, non-zero on failure. 
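+ * + * Note: the firmware DMAs the sequence numbers into the pre-allocated pel_seqnum_virt buffer described by the request SGL.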
+ */ +int mpi3mr_pel_get_seqnum_post(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd) +{ + struct mpi3_pel_req_action_get_sequence_numbers pel_getseq_req; + u8 sgl_flags = MPI3MR_SGEFLAGS_SYSTEM_SIMPLE_END_OF_LIST; + int retval = 0; + + memset(&pel_getseq_req, 0, sizeof(pel_getseq_req)); + mrioc->pel_cmds.state = MPI3MR_CMD_PENDING; + mrioc->pel_cmds.is_waiting = 0; + mrioc->pel_cmds.ioc_status = 0; + mrioc->pel_cmds.ioc_loginfo = 0; + mrioc->pel_cmds.callback = mpi3mr_pel_get_seqnum_complete; + pel_getseq_req.host_tag = cpu_to_le16(MPI3MR_HOSTTAG_PEL_WAIT); + pel_getseq_req.function = MPI3_FUNCTION_PERSISTENT_EVENT_LOG; + pel_getseq_req.action = MPI3_PEL_ACTION_GET_SEQNUM; + mpi3mr_add_sg_single(&pel_getseq_req.sgl, sgl_flags, + mrioc->pel_seqnum_sz, mrioc->pel_seqnum_dma); + + retval = mpi3mr_admin_request_post(mrioc, &pel_getseq_req, + sizeof(pel_getseq_req), 0); + if (retval) { + if (drv_cmd) { + drv_cmd->state = MPI3MR_CMD_NOTUSED; + drv_cmd->callback = NULL; + drv_cmd->retry_count = 0; + } + mrioc->pel_enabled = false; + } + + return retval; +} + +/** + * mpi3mr_pel_wait_complete - PELWait Completion callback + * @mrioc: Adapter instance reference + * @drv_cmd: Internal command tracker + * + * This is a callback handler for the PELWait request and + * firmware completes a PELWait request when it is aborted or a + * new PEL entry is available. This sends AEN to the application + * and if the PELwait completion is not due to PELAbort then + * this will send a request for new PEL Sequence number + * + * Return: Nothing. + */ +static void mpi3mr_pel_wait_complete(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd) +{ + struct mpi3_pel_reply *pel_reply = NULL; + u16 ioc_status, pe_log_status; + bool do_retry = false; + + if (drv_cmd->state & MPI3MR_CMD_RESET) + goto cleanup_drv_cmd; + + ioc_status = drv_cmd->ioc_status & MPI3_IOCSTATUS_STATUS_MASK; + if (ioc_status != MPI3_IOCSTATUS_SUCCESS) { + ioc_err(mrioc, "%s: Failed ioc_status(0x%04x) Loginfo(0x%08x)\n", + __func__, ioc_status, drv_cmd->ioc_loginfo); + dprint_bsg_err(mrioc, + "pel_wait: failed with ioc_status(0x%04x), log_info(0x%08x)\n", + ioc_status, drv_cmd->ioc_loginfo); + do_retry = true; + } + + if (drv_cmd->state & MPI3MR_CMD_REPLY_VALID) + pel_reply = (struct mpi3_pel_reply *)drv_cmd->reply; + + if (!pel_reply) { + dprint_bsg_err(mrioc, + "pel_wait: failed due to no reply\n"); + goto out_failed; + } + + pe_log_status = le16_to_cpu(pel_reply->pe_log_status); + if ((pe_log_status != MPI3_PEL_STATUS_SUCCESS) && + (pe_log_status != MPI3_PEL_STATUS_ABORTED)) { + ioc_err(mrioc, "%s: Failed pe_log_status(0x%04x)\n", + __func__, pe_log_status); + dprint_bsg_err(mrioc, + "pel_wait: failed due to pel_log_status(0x%04x)\n", + pe_log_status); + do_retry = true; + } + + if (do_retry) { + if (drv_cmd->retry_count < MPI3MR_PEL_RETRY_COUNT) { + drv_cmd->retry_count++; + dprint_bsg_err(mrioc, "pel_wait: retrying(%d)\n", + drv_cmd->retry_count); + mpi3mr_pel_wait_post(mrioc, drv_cmd); + return; + } + dprint_bsg_err(mrioc, + "pel_wait: failed after all retries(%d)\n", + drv_cmd->retry_count); + goto out_failed; + } + atomic64_inc(&event_counter); + if (!mrioc->pel_abort_requested) { + mrioc->pel_cmds.retry_count = 0; + mpi3mr_pel_get_seqnum_post(mrioc, &mrioc->pel_cmds); + } + + return; +out_failed: + mrioc->pel_enabled = false; +cleanup_drv_cmd: + drv_cmd->state = MPI3MR_CMD_NOTUSED; + drv_cmd->callback = NULL; + drv_cmd->retry_count = 0; +} + +/** + * mpi3mr_pel_get_seqnum_complete - PELGetSeqNum Completion callback + * 
@mrioc: Adapter instance reference + * @drv_cmd: Internal command tracker + * + * This is a callback handler for the PEL get sequence number + * request and a new PEL wait request will be issued to the + * firmware from this + * + * Return: Nothing. + */ +void mpi3mr_pel_get_seqnum_complete(struct mpi3mr_ioc *mrioc, + struct mpi3mr_drv_cmd *drv_cmd) +{ + struct mpi3_pel_reply *pel_reply = NULL; + struct mpi3_pel_seq *pel_seqnum_virt; + u16 ioc_status; + bool do_retry = false; + + pel_seqnum_virt = (struct mpi3_pel_seq *)mrioc->pel_seqnum_virt; + + if (drv_cmd->state & MPI3MR_CMD_RESET) + goto cleanup_drv_cmd; + + ioc_status = drv_cmd->ioc_status & MPI3_IOCSTATUS_STATUS_MASK; + if (ioc_status != MPI3_IOCSTATUS_SUCCESS) { + dprint_bsg_err(mrioc, + "pel_get_seqnum: failed with ioc_status(0x%04x), log_info(0x%08x)\n", + ioc_status, drv_cmd->ioc_loginfo); + do_retry = true; + } + + if (drv_cmd->state & MPI3MR_CMD_REPLY_VALID) + pel_reply = (struct mpi3_pel_reply *)drv_cmd->reply; + if (!pel_reply) { + dprint_bsg_err(mrioc, + "pel_get_seqnum: failed due to no reply\n"); + goto out_failed; + } + + if (le16_to_cpu(pel_reply->pe_log_status) != MPI3_PEL_STATUS_SUCCESS) { + dprint_bsg_err(mrioc, + "pel_get_seqnum: failed due to pel_log_status(0x%04x)\n", + le16_to_cpu(pel_reply->pe_log_status)); + do_retry = true; + } + + if (do_retry) { + if (drv_cmd->retry_count < MPI3MR_PEL_RETRY_COUNT) { + drv_cmd->retry_count++; + dprint_bsg_err(mrioc, + "pel_get_seqnum: retrying(%d)\n", + drv_cmd->retry_count); + mpi3mr_pel_get_seqnum_post(mrioc, drv_cmd); + return; + } + + dprint_bsg_err(mrioc, + "pel_get_seqnum: failed after all retries(%d)\n", + drv_cmd->retry_count); + goto out_failed; + } + mrioc->pel_newest_seqnum = le32_to_cpu(pel_seqnum_virt->newest) + 1; + drv_cmd->retry_count = 0; + mpi3mr_pel_wait_post(mrioc, drv_cmd); + + return; +out_failed: + mrioc->pel_enabled = false; +cleanup_drv_cmd: + drv_cmd->state = MPI3MR_CMD_NOTUSED; + drv_cmd->callback = NULL; + drv_cmd->retry_count = 0; } /** @@ -4383,6 +4685,12 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc, if (!retval) { mrioc->diagsave_timeout = 0; mrioc->reset_in_progress = 0; + mrioc->pel_abort_requested = 0; + if (mrioc->pel_enabled) { + mrioc->pel_cmds.retry_count = 0; + mpi3mr_pel_wait_post(mrioc, &mrioc->pel_cmds); + } + mpi3mr_rfresh_tgtdevs(mrioc); mrioc->ts_update_counter = 0; spin_lock_irqsave(&mrioc->watchdog_lock, flags); @@ -4392,6 +4700,8 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc, msecs_to_jiffies(MPI3MR_WATCHDOG_INTERVAL)); spin_unlock_irqrestore(&mrioc->watchdog_lock, flags); mrioc->stop_bsgs = 0; + if (mrioc->pel_enabled) + atomic64_inc(&event_counter); } else { mpi3mr_issue_reset(mrioc, MPI3_SYSIF_HOST_DIAG_RESET_ACTION_DIAG_FAULT, reset_reason); diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c index 450574fc1fec..19298136edb6 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_os.c +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c @@ -14,6 +14,7 @@ LIST_HEAD(mrioc_list); DEFINE_SPINLOCK(mrioc_list_lock); static int mrioc_ids; static int warn_non_secure_ctlr; +atomic64_t event_counter; MODULE_AUTHOR(MPI3MR_DRIVER_AUTHOR); MODULE_DESCRIPTION(MPI3MR_DRIVER_DESC); @@ -1415,6 +1416,23 @@ static void mpi3mr_pcietopochg_evt_bh(struct mpi3mr_ioc *mrioc, } } +/** + * mpi3mr_logdata_evt_bh - Log data event bottomhalf + * @mrioc: Adapter instance reference + * @fwevt: Firmware event reference + * + * Extracts the event data and calls application interfacing + * function to process the event further. 
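+ * The event data and its size are taken from the queued firmware event and passed to mpi3mr_app_save_logdata().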
+ * + * Return: Nothing. + */ +static void mpi3mr_logdata_evt_bh(struct mpi3mr_ioc *mrioc, + struct mpi3mr_fwevt *fwevt) +{ + mpi3mr_app_save_logdata(mrioc, fwevt->event_data, + fwevt->event_data_size); +} + /** * mpi3mr_fwevt_bh - Firmware event bottomhalf handler * @mrioc: Adapter instance reference @@ -1467,6 +1485,11 @@ static void mpi3mr_fwevt_bh(struct mpi3mr_ioc *mrioc, mpi3mr_pcietopochg_evt_bh(mrioc, fwevt); break; } + case MPI3_EVENT_LOG_DATA: + { + mpi3mr_logdata_evt_bh(mrioc, fwevt); + break; + } default: break; } @@ -2298,6 +2321,7 @@ void mpi3mr_os_handle_events(struct mpi3mr_ioc *mrioc, break; } case MPI3_EVENT_DEVICE_INFO_CHANGED: + case MPI3_EVENT_LOG_DATA: { process_evt_bh = 1; break; @@ -4568,6 +4592,12 @@ static struct pci_driver mpi3mr_pci_driver = { #endif }; +static ssize_t event_counter_show(struct device_driver *dd, char *buf) +{ + return sprintf(buf, "%llu\n", atomic64_read(&event_counter)); +} +static DRIVER_ATTR_RO(event_counter); + static int __init mpi3mr_init(void) { int ret_val; @@ -4576,6 +4606,16 @@ static int __init mpi3mr_init(void) MPI3MR_DRIVER_VERSION); ret_val = pci_register_driver(&mpi3mr_pci_driver); + if (ret_val) { + pr_err("%s failed to load due to pci register driver failure\n", + MPI3MR_DRIVER_NAME); + return ret_val; + } + + ret_val = driver_create_file(&mpi3mr_pci_driver.driver, + &driver_attr_event_counter); + if (ret_val) + pci_unregister_driver(&mpi3mr_pci_driver); return ret_val; } @@ -4590,6 +4630,8 @@ static void __exit mpi3mr_exit(void) pr_info("Unloading %s version %s\n", MPI3MR_DRIVER_NAME, MPI3MR_DRIVER_VERSION); + driver_remove_file(&mpi3mr_pci_driver.driver, + &driver_attr_event_counter); pci_unregister_driver(&mpi3mr_pci_driver); } From patchwork Fri Apr 22 11:54:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sumit Saxena X-Patchwork-Id: 12823320 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D5246C433F5 for ; Fri, 22 Apr 2022 11:55:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1447213AbiDVL6Y (ORCPT ); Fri, 22 Apr 2022 07:58:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56566 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1447217AbiDVL6P (ORCPT ); Fri, 22 Apr 2022 07:58:15 -0400 Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com [IPv6:2607:f8b0:4864:20::630]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DFC5F3ED01 for ; Fri, 22 Apr 2022 04:55:21 -0700 (PDT) Received: by mail-pl1-x630.google.com with SMTP id j8so10315553pll.11 for ; Fri, 22 Apr 2022 04:55:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version; bh=YTwzzyGktuM5g0mmcPuj3Zv804/aNINZ4fPza1mJSEw=; b=PhPCTDDH9iw1bDfaMjbWLmp1ZbRh17ACsblLCzgH5u8/EqT4JT48c+/1xGzAUZw0nm pQvmuoayrf6QVNFIOUx1ShmqwqqTxDQG7OHl2KFVddqLRvv/YCzJzv1KNCiI3lBA9yJ6 gfe4q9IViGstrZ6Nr/xJcdwg1XfcQowNV4fZs= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version; bh=YTwzzyGktuM5g0mmcPuj3Zv804/aNINZ4fPza1mJSEw=; b=lqNbv5gykcpU+4oZ1EpbtQ6O17UHzKqOJ5QLxWe0YvHhFUrSQSxt6jlkwlnY6PJGUV 
From patchwork Fri Apr 22 11:54:21 2022 X-Patchwork-Submitter: Sumit Saxena X-Patchwork-Id: 12823320
From: Sumit Saxena To: linux-scsi@vger.kernel.org Cc: martin.petersen@oracle.com, bvanassche@acm.org, hch@lst.de, hare@suse.de, himanshu.madhani@oracle.com, sathya.prakash@broadcom.com, kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com, sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena Subject: [PATCH v5 6/8] mpi3mr: expose adapter state to sysfs Date: Fri, 22 Apr 2022 07:54:21 -0400 Message-Id: <20220422115423.279805-7-sumit.saxena@broadcom.com> In-Reply-To: <20220422115423.279805-1-sumit.saxena@broadcom.com> References: <20220422115423.279805-1-sumit.saxena@broadcom.com>
Reviewed-by: Himanshu Madhani Signed-off-by: Sumit Saxena --- drivers/scsi/mpi3mr/mpi3mr.h | 2 +- drivers/scsi/mpi3mr/mpi3mr_app.c | 46 ++++++++++++++++++++++++++++++++ drivers/scsi/mpi3mr/mpi3mr_os.c | 1 + 3 files changed, 48 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index cc54231da658..1de3b006f444 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -1056,5 +1056,5 @@ int mpi3mr_pel_get_seqnum_post(struct mpi3mr_ioc *mrioc, struct mpi3mr_drv_cmd *drv_cmd); void mpi3mr_app_save_logdata(struct mpi3mr_ioc *mrioc, char *event_data, u16 event_data_size); - +extern const struct attribute_group *mpi3mr_host_groups[]; #endif /*MPI3MR_H_INCLUDED*/ diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c index 075dcc34f0e8..953891df4753 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_app.c +++ b/drivers/scsi/mpi3mr/mpi3mr_app.c @@ -1215,3 +1215,49 @@ void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc) err_device_add: kfree(mrioc->bsg_dev); } + +/** + * adp_state_show - SysFS callback for adapter state show + * @dev: class device + * @attr: Device attributes + * @buf: Buffer to copy + * + * Return: snprintf() return after copying adapter state + */ +static ssize_t +adp_state_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct mpi3mr_ioc *mrioc = shost_priv(shost); + enum mpi3mr_iocstate ioc_state; + uint8_t adp_state; + + ioc_state = mpi3mr_get_iocstate(mrioc); + if (ioc_state == MRIOC_STATE_UNRECOVERABLE) + adp_state = MPI3MR_BSG_ADPSTATE_UNRECOVERABLE; + else if ((mrioc->reset_in_progress) || (mrioc->stop_bsgs)) + adp_state = MPI3MR_BSG_ADPSTATE_IN_RESET;
+ else if (ioc_state == MRIOC_STATE_FAULT) + adp_state = MPI3MR_BSG_ADPSTATE_FAULT; + else + adp_state = MPI3MR_BSG_ADPSTATE_OPERATIONAL; + + return snprintf(buf, PAGE_SIZE, "%u\n", adp_state); +} + +static DEVICE_ATTR_RO(adp_state); + +static struct attribute *mpi3mr_host_attrs[] = { + &dev_attr_adp_state.attr, + NULL, +}; + +static const struct attribute_group mpi3mr_host_attr_group = { + .attrs = mpi3mr_host_attrs +}; + +const struct attribute_group *mpi3mr_host_groups[] = { + &mpi3mr_host_attr_group, + NULL, +}; diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c index 19298136edb6..89a4918c4a9e 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_os.c +++ b/drivers/scsi/mpi3mr/mpi3mr_os.c @@ -4134,6 +4134,7 @@ static struct scsi_host_template mpi3mr_driver_template = { .max_segment_size = 0xffffffff, .track_queue_depth = 1, .cmd_size = sizeof(struct scmd_priv), + .shost_groups = mpi3mr_host_groups, }; /**
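Since adp_state is attached to the Scsi_Host through shost_groups, it appears once per controller under the SCSI host class. The short user-space sketch below maps the numeric value back to a name; the /sys/class/scsi_host/host<N>/adp_state path and the availability of the MPI3MR_BSG_ADPSTATE_* macros from the installed uapi header <scsi/scsi_bsg_mpi3mr.h> are assumptions for illustration, not taken from this patch.

/* Illustrative sketch only: read and decode one host's adp_state attribute.
 * The sysfs path and the uapi header location are assumptions.
 */
#include <stdio.h>
#include <scsi/scsi_bsg_mpi3mr.h>

static const char *adp_state_name(unsigned int state)
{
        switch (state) {
        case MPI3MR_BSG_ADPSTATE_OPERATIONAL:
                return "operational";
        case MPI3MR_BSG_ADPSTATE_FAULT:
                return "fault";
        case MPI3MR_BSG_ADPSTATE_IN_RESET:
                return "in reset";
        case MPI3MR_BSG_ADPSTATE_UNRECOVERABLE:
                return "unrecoverable";
        default:
                return "unknown";
        }
}

int main(void)
{
        /* host0 is hard-coded for brevity; pick the mpi3mr host on your system */
        FILE *fp = fopen("/sys/class/scsi_host/host0/adp_state", "r");
        unsigned int state;

        if (!fp) {
                perror("fopen");
                return 1;
        }
        if (fscanf(fp, "%u", &state) != 1) {
                fclose(fp);
                fprintf(stderr, "unexpected attribute format\n");
                return 1;
        }
        fclose(fp);
        printf("adapter state: %u (%s)\n", state, adp_state_name(state));
        return 0;
}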
From patchwork Fri Apr 22 11:54:22 2022 X-Patchwork-Submitter: Sumit Saxena X-Patchwork-Id: 12823321
From: Sumit Saxena To: linux-scsi@vger.kernel.org Cc: martin.petersen@oracle.com, bvanassche@acm.org, hch@lst.de, hare@suse.de, himanshu.madhani@oracle.com, sathya.prakash@broadcom.com, kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com, sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena Subject: [PATCH v5 7/8] mpi3mr: add support for nvme pass-through Date: Fri, 22 Apr 2022 07:54:22 -0400 Message-Id: <20220422115423.279805-8-sumit.saxena@broadcom.com> In-Reply-To: <20220422115423.279805-1-sumit.saxena@broadcom.com> References: <20220422115423.279805-1-sumit.saxena@broadcom.com>
Add support for management applications to send MPI3-encapsulated NVMe pass-through commands to the NVMe devices attached to the Avenger controller. Since the controller exposes the NVMe drives as SCSI devices, standard NVMe applications cannot be used to interact with the drives, and the supported command set is also limited by the controller firmware. MPI3-encapsulated NVMe pass-through commands need special handling for the PRP/SGL setup in the commands, hence the additional changes. Reviewed-by: Himanshu Madhani Signed-off-by: Sumit Saxena --- drivers/scsi/mpi3mr/mpi3mr.h | 25 ++ drivers/scsi/mpi3mr/mpi3mr_app.c | 348 +++++++++++++++++++++++++++- include/uapi/scsi/scsi_bsg_mpi3mr.h | 8 + 3 files changed, 378 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index 1de3b006f444..b2dbb6543a9b 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -193,6 +193,24 @@ extern atomic64_t event_counter; */ #define MPI3MR_MAX_APP_XFER_SECTORS (2048 + 512) +/** + * struct mpi3mr_nvme_pt_sge - Structure to store SGEs for NVMe + * Encapsulated commands.
+ * + * @base_addr: Physical address + * @length: SGE length + * @rsvd: Reserved + * @rsvd1: Reserved + * @sgl_type: sgl type + */ +struct mpi3mr_nvme_pt_sge { + u64 base_addr; + u32 length; + u16 rsvd; + u8 rsvd1; + u8 sgl_type; +}; + /** * struct mpi3mr_buf_map - local structure to * track kernel and user buffers associated with an BSG @@ -746,6 +764,9 @@ struct scmd_priv { * @reset_waitq: Controller reset wait queue * @prepare_for_reset: Prepare for reset event received * @prepare_for_reset_timeout_counter: Prepare for reset timeout + * @prp_list_virt: NVMe encapsulated PRP list virtual base + * @prp_list_dma: NVMe encapsulated PRP list DMA + * @prp_sz: NVME encapsulated PRP list size * @diagsave_timeout: Diagnostic information save timeout * @logging_level: Controller debug logging level * @flush_io_count: I/O count to flush after reset @@ -901,6 +922,10 @@ struct mpi3mr_ioc { u8 prepare_for_reset; u16 prepare_for_reset_timeout_counter; + void *prp_list_virt; + dma_addr_t prp_list_dma; + u32 prp_sz; + u16 diagsave_timeout; int logging_level; u16 flush_io_count; diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c index 953891df4753..a83fa0a097f2 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_app.c +++ b/drivers/scsi/mpi3mr/mpi3mr_app.c @@ -619,6 +619,314 @@ static void mpi3mr_bsg_build_sgl(u8 *mpi_req, uint32_t sgl_offset, } } +/** + * mpi3mr_get_nvme_data_fmt - returns the NVMe data format + * @nvme_encap_request: NVMe encapsulated MPI request + * + * This function returns the type of the data format specified + * in user provided NVMe command in NVMe encapsulated request. + * + * Return: Data format of the NVMe command (PRP/SGL etc) + */ +static unsigned int mpi3mr_get_nvme_data_fmt( + struct mpi3_nvme_encapsulated_request *nvme_encap_request) +{ + u8 format = 0; + + format = ((nvme_encap_request->command[0] & 0xc000) >> 14); + return format; + +} + +/** + * mpi3mr_build_nvme_sgl - SGL constructor for NVME + * encapsulated request + * @mrioc: Adapter instance reference + * @nvme_encap_request: NVMe encapsulated MPI request + * @drv_bufs: DMA address of the buffers to be placed in sgl + * @bufcnt: Number of DMA buffers + * + * This function places the DMA address of the given buffers in + * proper format as SGEs in the given NVMe encapsulated request. + * + * Return: 0 on success, -1 on failure + */ +static int mpi3mr_build_nvme_sgl(struct mpi3mr_ioc *mrioc, + struct mpi3_nvme_encapsulated_request *nvme_encap_request, + struct mpi3mr_buf_map *drv_bufs, u8 bufcnt) +{ + struct mpi3mr_nvme_pt_sge *nvme_sgl; + u64 sgl_ptr; + u8 count; + size_t length = 0; + struct mpi3mr_buf_map *drv_buf_iter = drv_bufs; + u64 sgemod_mask = ((u64)((mrioc->facts.sge_mod_mask) << + mrioc->facts.sge_mod_shift) << 32); + u64 sgemod_val = ((u64)(mrioc->facts.sge_mod_value) << + mrioc->facts.sge_mod_shift) << 32; + + /* + * Not all commands require a data transfer. If no data, just return + * without constructing any sgl. 
+ */ + for (count = 0; count < bufcnt; count++, drv_buf_iter++) { + if (drv_buf_iter->data_dir == DMA_NONE) + continue; + sgl_ptr = (u64)drv_buf_iter->kern_buf_dma; + length = drv_buf_iter->kern_buf_len; + break; + } + if (!length) + return 0; + + if (sgl_ptr & sgemod_mask) { + dprint_bsg_err(mrioc, + "%s: SGL address collides with SGE modifier\n", + __func__); + return -1; + } + + sgl_ptr &= ~sgemod_mask; + sgl_ptr |= sgemod_val; + nvme_sgl = (struct mpi3mr_nvme_pt_sge *) + ((u8 *)(nvme_encap_request->command) + MPI3MR_NVME_CMD_SGL_OFFSET); + memset(nvme_sgl, 0, sizeof(struct mpi3mr_nvme_pt_sge)); + nvme_sgl->base_addr = sgl_ptr; + nvme_sgl->length = length; + return 0; +} + +/** + * mpi3mr_build_nvme_prp - PRP constructor for NVME + * encapsulated request + * @mrioc: Adapter instance reference + * @nvme_encap_request: NVMe encapsulated MPI request + * @drv_bufs: DMA address of the buffers to be placed in SGL + * @bufcnt: Number of DMA buffers + * + * This function places the DMA address of the given buffers in + * proper format as PRP entries in the given NVMe encapsulated + * request. + * + * Return: 0 on success, -1 on failure + */ +static int mpi3mr_build_nvme_prp(struct mpi3mr_ioc *mrioc, + struct mpi3_nvme_encapsulated_request *nvme_encap_request, + struct mpi3mr_buf_map *drv_bufs, u8 bufcnt) +{ + int prp_size = MPI3MR_NVME_PRP_SIZE; + __le64 *prp_entry, *prp1_entry, *prp2_entry; + __le64 *prp_page; + dma_addr_t prp_entry_dma, prp_page_dma, dma_addr; + u32 offset, entry_len, dev_pgsz; + u32 page_mask_result, page_mask; + size_t length = 0; + u8 count; + struct mpi3mr_buf_map *drv_buf_iter = drv_bufs; + u64 sgemod_mask = ((u64)((mrioc->facts.sge_mod_mask) << + mrioc->facts.sge_mod_shift) << 32); + u64 sgemod_val = ((u64)(mrioc->facts.sge_mod_value) << + mrioc->facts.sge_mod_shift) << 32; + u16 dev_handle = nvme_encap_request->dev_handle; + struct mpi3mr_tgt_dev *tgtdev; + + tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, dev_handle); + if (!tgtdev) { + dprint_bsg_err(mrioc, "%s: invalid device handle 0x%04x\n", + __func__, dev_handle); + return -1; + } + + if (tgtdev->dev_spec.pcie_inf.pgsz == 0) { + dprint_bsg_err(mrioc, + "%s: NVMe device page size is zero for handle 0x%04x\n", + __func__, dev_handle); + mpi3mr_tgtdev_put(tgtdev); + return -1; + } + + dev_pgsz = 1 << (tgtdev->dev_spec.pcie_inf.pgsz); + mpi3mr_tgtdev_put(tgtdev); + + /* + * Not all commands require a data transfer. If no data, just return + * without constructing any PRP. + */ + for (count = 0; count < bufcnt; count++, drv_buf_iter++) { + if (drv_buf_iter->data_dir == DMA_NONE) + continue; + dma_addr = drv_buf_iter->kern_buf_dma; + length = drv_buf_iter->kern_buf_len; + break; + } + + if (!length) + return 0; + + mrioc->prp_sz = 0; + mrioc->prp_list_virt = dma_alloc_coherent(&mrioc->pdev->dev, + dev_pgsz, &mrioc->prp_list_dma, GFP_KERNEL); + + if (!mrioc->prp_list_virt) + return -1; + mrioc->prp_sz = dev_pgsz; + + /* + * Set pointers to PRP1 and PRP2, which are in the NVMe command. + * PRP1 is located at a 24 byte offset from the start of the NVMe + * command. Then set the current PRP entry pointer to PRP1. + */ + prp1_entry = (__le64 *)((u8 *)(nvme_encap_request->command) + + MPI3MR_NVME_CMD_PRP1_OFFSET); + prp2_entry = (__le64 *)((u8 *)(nvme_encap_request->command) + + MPI3MR_NVME_CMD_PRP2_OFFSET); + prp_entry = prp1_entry; + /* + * For the PRP entries, use the specially allocated buffer of + * contiguous memory. 
+ */ + prp_page = (__le64 *)mrioc->prp_list_virt; + prp_page_dma = mrioc->prp_list_dma; + + /* + * Check if we are within 1 entry of a page boundary; we don't + * want our first entry to be a PRP List entry. + */ + page_mask = dev_pgsz - 1; + page_mask_result = (uintptr_t)((u8 *)prp_page + prp_size) & page_mask; + if (!page_mask_result) { + dprint_bsg_err(mrioc, "%s: PRP page is not page aligned\n", + __func__); + goto err_out; + } + + /* + * Set PRP physical pointer, which initially points to the current PRP + * DMA memory page. + */ + prp_entry_dma = prp_page_dma; + + /* Loop while the length is not zero. */ + while (length) { + page_mask_result = (prp_entry_dma + prp_size) & page_mask; + if (!page_mask_result && (length > dev_pgsz)) { + dprint_bsg_err(mrioc, + "%s: single PRP page is not sufficient\n", + __func__); + goto err_out; + } + + /* Need to handle if entry will be part of a page. */ + offset = dma_addr & page_mask; + entry_len = dev_pgsz - offset; + + if (prp_entry == prp1_entry) { + /* + * Must fill in the first PRP pointer (PRP1) before + * moving on. + */ + *prp1_entry = cpu_to_le64(dma_addr); + if (*prp1_entry & sgemod_mask) { + dprint_bsg_err(mrioc, + "%s: PRP1 address collides with SGE modifier\n", + __func__); + goto err_out; + } + *prp1_entry &= ~sgemod_mask; + *prp1_entry |= sgemod_val; + + /* + * Now point to the second PRP entry within the + * command (PRP2). + */ + prp_entry = prp2_entry; + } else if (prp_entry == prp2_entry) { + /* + * Should the PRP2 entry be a PRP List pointer or just + * a regular PRP pointer? If there is more than one + * more page of data, must use a PRP List pointer. + */ + if (length > dev_pgsz) { + /* + * PRP2 will contain a PRP List pointer because + * more PRP's are needed with this command. The + * list will start at the beginning of the + * contiguous buffer. + */ + *prp2_entry = cpu_to_le64(prp_entry_dma); + if (*prp2_entry & sgemod_mask) { + dprint_bsg_err(mrioc, + "%s: PRP list address collides with SGE modifier\n", + __func__); + goto err_out; + } + *prp2_entry &= ~sgemod_mask; + *prp2_entry |= sgemod_val; + + /* + * The next PRP Entry will be the start of the + * first PRP List. + */ + prp_entry = prp_page; + continue; + } else { + /* + * After this, the PRP Entries are complete. + * This command uses 2 PRP's and no PRP list. + */ + *prp2_entry = cpu_to_le64(dma_addr); + if (*prp2_entry & sgemod_mask) { + dprint_bsg_err(mrioc, + "%s: PRP2 collides with SGE modifier\n", + __func__); + goto err_out; + } + *prp2_entry &= ~sgemod_mask; + *prp2_entry |= sgemod_val; + } + } else { + /* + * Put entry in list and bump the addresses. + * + * After PRP1 and PRP2 are filled in, this will fill in + * all remaining PRP entries in a PRP List, one per + * each time through the loop. + */ + *prp_entry = cpu_to_le64(dma_addr); + if (*prp_entry & sgemod_mask) { + dprint_bsg_err(mrioc, + "%s: PRP address collides with SGE modifier\n", + __func__); + goto err_out; + } + *prp_entry &= ~sgemod_mask; + *prp_entry |= sgemod_val; + prp_entry++; + prp_entry_dma += prp_size; + } + + /* + * Bump the phys address of the command's data buffer by the + * entry_len. + */ + dma_addr += entry_len; + + /* decrement length accounting for last partial page.
*/ + if (entry_len > length) + length = 0; + else + length -= entry_len; + } + return 0; +err_out: + if (mrioc->prp_list_virt) { + dma_free_coherent(&mrioc->pdev->dev, mrioc->prp_sz, + mrioc->prp_list_virt, mrioc->prp_list_dma); + mrioc->prp_list_virt = NULL; + } + return -1; +} /** * mpi3mr_bsg_process_mpt_cmds - MPI Pass through BSG handler * @job: BSG job reference @@ -650,7 +958,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job, unsigned int *reply struct mpi3mr_buf_map *drv_bufs = NULL, *drv_buf_iter = NULL; u8 count, bufcnt = 0, is_rmcb = 0, is_rmrb = 0, din_cnt = 0, dout_cnt = 0; u8 invalid_be = 0, erb_offset = 0xFF, mpirep_offset = 0xFF, sg_entries = 0; - u8 block_io = 0, resp_code = 0; + u8 block_io = 0, resp_code = 0, nvme_fmt = 0; struct mpi3_request_header *mpi_header = NULL; struct mpi3_status_reply_descriptor *status_desc; struct mpi3_scsi_task_mgmt_request *tm_req; @@ -890,7 +1198,34 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job, unsigned int *reply goto out; } - if (mpi_header->function != MPI3_BSG_FUNCTION_NVME_ENCAPSULATED) { + if (mpi_header->function == MPI3_BSG_FUNCTION_NVME_ENCAPSULATED) { + nvme_fmt = mpi3mr_get_nvme_data_fmt( + (struct mpi3_nvme_encapsulated_request *)mpi_req); + if (nvme_fmt == MPI3MR_NVME_DATA_FORMAT_PRP) { + if (mpi3mr_build_nvme_prp(mrioc, + (struct mpi3_nvme_encapsulated_request *)mpi_req, + drv_bufs, bufcnt)) { + rval = -ENOMEM; + mutex_unlock(&mrioc->bsg_cmds.mutex); + goto out; + } + } else if (nvme_fmt == MPI3MR_NVME_DATA_FORMAT_SGL1 || + nvme_fmt == MPI3MR_NVME_DATA_FORMAT_SGL2) { + if (mpi3mr_build_nvme_sgl(mrioc, + (struct mpi3_nvme_encapsulated_request *)mpi_req, + drv_bufs, bufcnt)) { + rval = -EINVAL; + mutex_unlock(&mrioc->bsg_cmds.mutex); + goto out; + } + } else { + dprint_bsg_err(mrioc, + "%s:invalid NVMe command format\n", __func__); + rval = -EINVAL; + mutex_unlock(&mrioc->bsg_cmds.mutex); + goto out; + } + } else { mpi3mr_bsg_build_sgl(mpi_req, (mpi_msg_size), drv_bufs, bufcnt, is_rmcb, is_rmrb, (dout_cnt + din_cnt)); @@ -968,7 +1303,8 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job, unsigned int *reply } } - if (mpi_header->function == MPI3_BSG_FUNCTION_SCSI_IO) + if ((mpi_header->function == MPI3_BSG_FUNCTION_NVME_ENCAPSULATED) || + (mpi_header->function == MPI3_BSG_FUNCTION_SCSI_IO)) mpi3mr_issue_tm(mrioc, MPI3_SCSITASKMGMT_TASKTYPE_TARGET_RESET, mpi_header->function_dependent, 0, @@ -982,6 +1318,12 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job, unsigned int *reply } dprint_bsg_info(mrioc, "%s: bsg request is completed\n", __func__); + if (mrioc->prp_list_virt) { + dma_free_coherent(&mrioc->pdev->dev, mrioc->prp_sz, + mrioc->prp_list_virt, mrioc->prp_list_dma); + mrioc->prp_list_virt = NULL; + } + if ((mrioc->bsg_cmds.ioc_status & MPI3_IOCSTATUS_STATUS_MASK) != MPI3_IOCSTATUS_SUCCESS) { dprint_bsg_info(mrioc, diff --git a/include/uapi/scsi/scsi_bsg_mpi3mr.h b/include/uapi/scsi/scsi_bsg_mpi3mr.h index a6dc050dff72..d34bef4670b3 100644 --- a/include/uapi/scsi/scsi_bsg_mpi3mr.h +++ b/include/uapi/scsi/scsi_bsg_mpi3mr.h @@ -488,6 +488,14 @@ struct mpi3_nvme_encapsulated_error_reply { __le32 nvme_completion_entry[4]; }; +#define MPI3MR_NVME_PRP_SIZE 8 /* PRP size */ +#define MPI3MR_NVME_CMD_PRP1_OFFSET 24 /* PRP1 offset in NVMe cmd */ +#define MPI3MR_NVME_CMD_PRP2_OFFSET 32 /* PRP2 offset in NVMe cmd */ +#define MPI3MR_NVME_CMD_SGL_OFFSET 24 /* SGL offset in NVMe cmd */ +#define MPI3MR_NVME_DATA_FORMAT_PRP 0 +#define MPI3MR_NVME_DATA_FORMAT_SGL1 1 +#define 
MPI3MR_NVME_DATA_FORMAT_SGL2 2 + /* MPI3: task management related definitions */ struct mpi3_scsi_task_mgmt_request { __le16 host_tag;
From patchwork Fri Apr 22 11:54:23 2022 X-Patchwork-Submitter: Sumit Saxena X-Patchwork-Id: 12823322
From: Sumit Saxena To: linux-scsi@vger.kernel.org Cc: martin.petersen@oracle.com, bvanassche@acm.org, hch@lst.de, hare@suse.de, himanshu.madhani@oracle.com, sathya.prakash@broadcom.com, kashyap.desai@broadcom.com, chandrakanth.patil@broadcom.com, sreekanth.reddy@broadcom.com, prayas.patel@broadcom.com, Sumit Saxena Subject: [PATCH v5 8/8] mpi3mr: update driver version to 8.0.0.69.0 Date: Fri, 22 Apr 2022 07:54:23 -0400 Message-Id: <20220422115423.279805-9-sumit.saxena@broadcom.com>
In-Reply-To: <20220422115423.279805-1-sumit.saxena@broadcom.com> References: <20220422115423.279805-1-sumit.saxena@broadcom.com>
Signed-off-by: Sumit Saxena --- drivers/scsi/mpi3mr/mpi3mr.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h index b2dbb6543a9b..3130caac0d03 100644 --- a/drivers/scsi/mpi3mr/mpi3mr.h +++ b/drivers/scsi/mpi3mr/mpi3mr.h @@ -55,8 +55,8 @@ extern struct list_head mrioc_list; extern int prot_mask; extern atomic64_t event_counter; -#define MPI3MR_DRIVER_VERSION "8.0.0.68.0" -#define MPI3MR_DRIVER_RELDATE "10-February-2022" +#define MPI3MR_DRIVER_VERSION "8.0.0.69.0" +#define MPI3MR_DRIVER_RELDATE "16-March-2022" #define MPI3MR_DRIVER_NAME "mpi3mr" #define MPI3MR_DRIVER_LICENSE "GPL"
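To close out the series, here is a stand-alone sketch of the PRP layout decision that mpi3mr_build_nvme_prp() in the NVMe pass-through patch makes for a single physically contiguous buffer. It is illustrative only: it models PRP1/PRP2 selection and the point at which a PRP list becomes necessary, and deliberately leaves out the driver's SGE-modifier masking and DMA allocation.

/* Stand-alone model of the PRP layout choices for one physically
 * contiguous buffer. Illustrative only: no SGE modifier handling,
 * no DMA allocation, no list chaining.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static void model_prp_layout(uint64_t dma_addr, size_t length, uint32_t dev_pgsz)
{
        uint32_t offset = dma_addr & (dev_pgsz - 1);
        size_t first = dev_pgsz - offset;       /* bytes covered by PRP1 */

        printf("PRP1 = 0x%llx\n", (unsigned long long)dma_addr);

        if (length <= first) {
                printf("PRP2 unused (transfer fits behind PRP1)\n");
                return;
        }

        length -= first;
        dma_addr += first;

        if (length <= dev_pgsz) {
                /* Exactly one more page of data: PRP2 points directly at it. */
                printf("PRP2 = 0x%llx (data pointer)\n",
                       (unsigned long long)dma_addr);
                return;
        }

        /* More than one extra page: PRP2 must point to a PRP list holding
         * one page-aligned entry per remaining page.
         */
        printf("PRP2 -> PRP list with %zu entries\n",
               (length + dev_pgsz - 1) / dev_pgsz);
}

int main(void)
{
        model_prp_layout(0x100200, 512, 4096);          /* fits behind PRP1 */
        model_prp_layout(0x100000, 2 * 4096, 4096);     /* PRP2 is a data pointer */
        model_prp_layout(0x100000, 3 * 4096, 4096);     /* PRP2 points to a PRP list */
        return 0;
}

Running it with these sample inputs shows the three cases the driver distinguishes: a transfer that fits behind PRP1, a two-page transfer where PRP2 is a plain data pointer, and a larger transfer where PRP2 must reference a PRP list built in the driver's single pre-allocated PRP page.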