From patchwork Fri Apr 7 20:05:51 2023
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 13205356
From: Mike Christie
To: bvanassche@acm.org, hch@lst.de, martin.petersen@oracle.com,
    linux-scsi@vger.kernel.org, james.bottomley@hansenpartnership.com,
    linux-block@vger.kernel.org, dm-devel@redhat.com, snitzer@kernel.org,
    axboe@kernel.dk, linux-nvme@lists.infradead.org, chaitanyak@nvidia.com,
    kbusch@kernel.org, target-devel@vger.kernel.org
Cc: Mike Christie
Subject: [PATCH v6 18/18] scsi: target: Add block PR support to iblock
Date: Fri, 7 Apr 2023 15:05:51 -0500
Message-Id: <20230407200551.12660-19-michael.christie@oracle.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230407200551.12660-1-michael.christie@oracle.com>
References: <20230407200551.12660-1-michael.christie@oracle.com>
X-Mailing-List: target-devel@vger.kernel.org

This adds support for the block PR callouts to target_core_iblock. The
patch doesn't attempt to implement the entire spec, because there is no
way to support features like SPEC_I_PT and ALL_TG_PT. It only supports
exporting the iblock device from one path on the local target.

Signed-off-by: Mike Christie
Reviewed-by: Christoph Hellwig
Reviewed-by: Hannes Reinecke
---
 drivers/target/target_core_iblock.c | 271 +++++++++++++++++++++++++++-
 1 file changed, 266 insertions(+), 5 deletions(-)

diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index d93f24f9687d..e6029ea87e2f 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -23,13 +23,16 @@
 #include <linux/file.h>
 #include <linux/module.h>
 #include <linux/scatterlist.h>
+#include <linux/pr.h>
 #include <scsi/scsi_proto.h>
+#include <scsi/scsi_common.h>
 #include <asm/unaligned.h>
 
 #include <target/target_core_base.h>
 #include <target/target_core_backend.h>
 
 #include "target_core_iblock.h"
+#include "target_core_pr.h"
 
 #define IBLOCK_MAX_BIO_PER_TASK	 32	/* max # of bios to submit at a time */
 #define IBLOCK_BIO_POOL_SIZE	128
@@ -310,7 +313,7 @@ static sector_t iblock_get_blocks(struct se_device *dev)
 	return blocks_long;
 }
 
-static void iblock_complete_cmd(struct se_cmd *cmd)
+static void iblock_complete_cmd(struct se_cmd *cmd, blk_status_t blk_status)
 {
 	struct iblock_req *ibr = cmd->priv;
 	u8 status;
@@ -318,7 +321,9 @@ static void iblock_complete_cmd(struct se_cmd *cmd)
 	if (!refcount_dec_and_test(&ibr->pending))
 		return;
 
-	if (atomic_read(&ibr->ib_bio_err_cnt))
+	if (blk_status == BLK_STS_RESV_CONFLICT)
+		status = SAM_STAT_RESERVATION_CONFLICT;
+	else if (atomic_read(&ibr->ib_bio_err_cnt))
 		status = SAM_STAT_CHECK_CONDITION;
 	else
 		status = SAM_STAT_GOOD;
@@ -331,6 +336,7 @@ static void iblock_bio_done(struct bio *bio)
 {
 	struct se_cmd *cmd = bio->bi_private;
 	struct iblock_req *ibr = cmd->priv;
+	blk_status_t blk_status = bio->bi_status;
 
 	if (bio->bi_status) {
 		pr_err("bio error: %p, err: %d\n", bio, bio->bi_status);
@@ -343,7 +349,7 @@ static void iblock_bio_done(struct bio *bio)
 
 	bio_put(bio);
 
-	iblock_complete_cmd(cmd);
+	iblock_complete_cmd(cmd, blk_status);
 }
 
 static struct bio *iblock_get_bio(struct se_cmd *cmd, sector_t lba, u32 sg_num,
@@ -759,7 +765,7 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 
 	if (!sgl_nents) {
 		refcount_set(&ibr->pending, 1);
-		iblock_complete_cmd(cmd);
+		iblock_complete_cmd(cmd, BLK_STS_OK);
 		return 0;
 	}
 
@@ -817,7 +823,7 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	}
 
 	iblock_submit_bios(&list);
-	iblock_complete_cmd(cmd);
+	iblock_complete_cmd(cmd, BLK_STS_OK);
 	return 0;
 
 fail_put_bios:
@@ -829,6 +835,258 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 }
 
+static sense_reason_t iblock_execute_pr_out(struct se_cmd *cmd, u8 sa, u64 key,
+		u64 sa_key, u8 type, bool aptpl)
+{
+	struct se_device *dev = cmd->se_dev;
+	struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
+	struct block_device *bdev = ib_dev->ibd_bd;
+	const struct pr_ops *ops = bdev->bd_disk->fops->pr_ops;
+	int ret;
+
+	if (!ops) {
+		pr_err("Block device does not support pr_ops but iblock device has been configured for PR passthrough.\n");
+		return TCM_UNSUPPORTED_SCSI_OPCODE;
+	}
+
+	switch (sa) {
+	case PRO_REGISTER:
+	case PRO_REGISTER_AND_IGNORE_EXISTING_KEY:
+		if (!ops->pr_register) {
+			pr_err("block device does not support pr_register.\n");
+			return TCM_UNSUPPORTED_SCSI_OPCODE;
+		}
+
+		/* The block layer pr ops always enables aptpl */
+		if (!aptpl)
+			pr_info("APTPL not set by initiator, but will be used.\n");
+
+		ret = ops->pr_register(bdev, key, sa_key,
+				sa == PRO_REGISTER ? 0 : PR_FL_IGNORE_KEY);
+		break;
+	case PRO_RESERVE:
+		if (!ops->pr_reserve) {
+			pr_err("block_device does not support pr_reserve.\n");
+			return TCM_UNSUPPORTED_SCSI_OPCODE;
+		}
+
+		ret = ops->pr_reserve(bdev, key, scsi_pr_type_to_block(type), 0);
+		break;
+	case PRO_CLEAR:
+		if (!ops->pr_clear) {
+			pr_err("block_device does not support pr_clear.\n");
+			return TCM_UNSUPPORTED_SCSI_OPCODE;
+		}
+
+		ret = ops->pr_clear(bdev, key);
+		break;
+	case PRO_PREEMPT:
+	case PRO_PREEMPT_AND_ABORT:
+		if (!ops->pr_preempt) {
+			pr_err("block_device does not support pr_preempt.\n");
+			return TCM_UNSUPPORTED_SCSI_OPCODE;
+		}
+
+		ret = ops->pr_preempt(bdev, key, sa_key,
+				scsi_pr_type_to_block(type),
+				sa == PRO_PREEMPT ? false : true);
+		break;
+	case PRO_RELEASE:
+		if (!ops->pr_release) {
+			pr_err("block_device does not support pr_release.\n");
+			return TCM_UNSUPPORTED_SCSI_OPCODE;
+		}
+
+		ret = ops->pr_release(bdev, key, scsi_pr_type_to_block(type));
+		break;
+	default:
+		pr_err("Unknown PERSISTENT_RESERVE_OUT SA: 0x%02x\n", sa);
+		return TCM_UNSUPPORTED_SCSI_OPCODE;
+	}
+
+	if (!ret)
+		return TCM_NO_SENSE;
+	else if (ret == PR_STS_RESERVATION_CONFLICT)
+		return TCM_RESERVATION_CONFLICT;
+	else
+		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+}
+
+static void iblock_pr_report_caps(unsigned char *param_data)
+{
+	u16 len = 8;
+
+	put_unaligned_be16(len, &param_data[0]);
+	/*
+	 * When using the pr_ops passthrough method we only support exporting
+	 * the device through one target port because from the backend module
+	 * level we can't see the target port config. As a result we only
+	 * support registration directly from the I_T nexus the cmd is sent
+	 * through and do not set ATP_C here.
+	 *
+	 * The block layer pr_ops do not support passing in initiators so
+	 * we don't set SIP_C here.
+	 */
+	/* PTPL_C: Persistence across Target Power Loss bit */
+	param_data[2] |= 0x01;
+	/*
+	 * We are filling in the PERSISTENT RESERVATION TYPE MASK below, so
+	 * set the TMV: Type Mask Valid bit.
+	 */
+	param_data[3] |= 0x80;
+	/*
+	 * Change ALLOW COMMANDs to 0x20 or 0x40 later from Table 166
+	 */
+	param_data[3] |= 0x10; /* ALLOW COMMANDs field 001b */
+	/*
+	 * PTPL_A: Persistence across Target Power Loss Active bit. The block
+	 * layer pr ops always enables this so report it active.
+	 */
+	param_data[3] |= 0x01;
+	/*
+	 * Setup the PERSISTENT RESERVATION TYPE MASK from Table 212 spc4r37.
+	 */
+	param_data[4] |= 0x80; /* PR_TYPE_WRITE_EXCLUSIVE_ALLREG */
+	param_data[4] |= 0x40; /* PR_TYPE_EXCLUSIVE_ACCESS_REGONLY */
+	param_data[4] |= 0x20; /* PR_TYPE_WRITE_EXCLUSIVE_REGONLY */
+	param_data[4] |= 0x08; /* PR_TYPE_EXCLUSIVE_ACCESS */
+	param_data[4] |= 0x02; /* PR_TYPE_WRITE_EXCLUSIVE */
+	param_data[5] |= 0x01; /* PR_TYPE_EXCLUSIVE_ACCESS_ALLREG */
+}
+
+static sense_reason_t iblock_pr_read_keys(struct se_cmd *cmd,
+		unsigned char *param_data)
+{
+	struct se_device *dev = cmd->se_dev;
+	struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
+	struct block_device *bdev = ib_dev->ibd_bd;
+	const struct pr_ops *ops = bdev->bd_disk->fops->pr_ops;
+	int i, len, paths, data_offset;
+	struct pr_keys *keys;
+	sense_reason_t ret;
+
+	if (!ops) {
+		pr_err("Block device does not support pr_ops but iblock device has been configured for PR passthrough.\n");
+		return TCM_UNSUPPORTED_SCSI_OPCODE;
+	}
+
+	if (!ops->pr_read_keys) {
+		pr_err("Block device does not support read_keys.\n");
+		return TCM_UNSUPPORTED_SCSI_OPCODE;
+	}
+
+	/*
+	 * We don't know what's under us, but dm-multipath will register every
+	 * path with the same key, so start off with enough space for 16 paths,
+	 * which is not a lot of memory and should normally be enough.
+	 */
+	paths = 16;
+retry:
+	len = 8 * paths;
+	keys = kzalloc(sizeof(*keys) + len, GFP_KERNEL);
+	if (!keys)
+		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+
+	keys->num_keys = paths;
+	if (!ops->pr_read_keys(bdev, keys)) {
+		if (keys->num_keys > paths) {
+			kfree(keys);
+			paths *= 2;
+			goto retry;
+		}
+	} else {
+		ret = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+		goto free_keys;
+	}
+
+	ret = TCM_NO_SENSE;
+
+	put_unaligned_be32(keys->generation, &param_data[0]);
+	if (!keys->num_keys) {
+		put_unaligned_be32(0, &param_data[4]);
+		goto free_keys;
+	}
+
+	put_unaligned_be32(8 * keys->num_keys, &param_data[4]);
+
+	data_offset = 8;
+	for (i = 0; i < keys->num_keys; i++) {
+		if (data_offset + 8 > cmd->data_length)
+			break;
+
+		put_unaligned_be64(keys->keys[i], &param_data[data_offset]);
+		data_offset += 8;
+	}
+
+free_keys:
+	kfree(keys);
+	return ret;
+}
+
+static sense_reason_t iblock_pr_read_reservation(struct se_cmd *cmd,
+		unsigned char *param_data)
+{
+	struct se_device *dev = cmd->se_dev;
+	struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
+	struct block_device *bdev = ib_dev->ibd_bd;
+	const struct pr_ops *ops = bdev->bd_disk->fops->pr_ops;
+	struct pr_held_reservation rsv = { };
+
+	if (!ops) {
+		pr_err("Block device does not support pr_ops but iblock device has been configured for PR passthrough.\n");
+		return TCM_UNSUPPORTED_SCSI_OPCODE;
+	}
+
+	if (!ops->pr_read_reservation) {
+		pr_err("Block device does not support read_reservation.\n");
+		return TCM_UNSUPPORTED_SCSI_OPCODE;
+	}
+
+	if (ops->pr_read_reservation(bdev, &rsv))
+		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+
+	put_unaligned_be32(rsv.generation, &param_data[0]);
+	if (!block_pr_type_to_scsi(rsv.type)) {
+		put_unaligned_be32(0, &param_data[4]);
+		return TCM_NO_SENSE;
+	}
+
+	put_unaligned_be32(16, &param_data[4]);
+
+	if (cmd->data_length < 16)
+		return TCM_NO_SENSE;
+	put_unaligned_be64(rsv.key, &param_data[8]);
+
+	if (cmd->data_length < 22)
+		return TCM_NO_SENSE;
+	param_data[21] = block_pr_type_to_scsi(rsv.type);
+
+	return TCM_NO_SENSE;
+}
+
+static sense_reason_t iblock_execute_pr_in(struct se_cmd *cmd, u8 sa,
+		unsigned char *param_data)
+{
+	sense_reason_t ret = TCM_NO_SENSE;
+
+	switch (sa) {
+	case PRI_REPORT_CAPABILITIES:
+		iblock_pr_report_caps(param_data);
+		break;
+	case PRI_READ_KEYS:
+		ret = iblock_pr_read_keys(cmd, param_data);
+		break;
+	case PRI_READ_RESERVATION:
+		ret = iblock_pr_read_reservation(cmd, param_data);
+		break;
+	default:
+		pr_err("Unknown PERSISTENT_RESERVE_IN SA: 0x%02x\n", sa);
+		return TCM_UNSUPPORTED_SCSI_OPCODE;
+	}
+
+	return ret;
+}
+
 static sector_t iblock_get_alignment_offset_lbas(struct se_device *dev)
 {
 	struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
@@ -874,6 +1132,8 @@ static struct exec_cmd_ops iblock_exec_cmd_ops = {
 	.execute_sync_cache	= iblock_execute_sync_cache,
 	.execute_write_same	= iblock_execute_write_same,
 	.execute_unmap		= iblock_execute_unmap,
+	.execute_pr_out		= iblock_execute_pr_out,
+	.execute_pr_in		= iblock_execute_pr_in,
 };
 
 static sense_reason_t
@@ -890,6 +1150,7 @@ static bool iblock_get_write_cache(struct se_device *dev)
 static const struct target_backend_ops iblock_ops = {
 	.name			= "iblock",
 	.inquiry_prod		= "IBLOCK",
+	.transport_flags_changeable = TRANSPORT_FLAG_PASSTHROUGH_PGR,
 	.inquiry_rev		= IBLOCK_VERSION,
 	.owner			= THIS_MODULE,
 	.attach_hba		= iblock_attach_hba,
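
A note for anyone testing the passthrough: the pr_ops this patch calls into
are the same ones the block layer exposes to userspace through the IOC_PR_*
ioctls in <linux/pr.h>, so the backing device's registration state can also
be driven or checked directly, outside the target stack. Below is a minimal
userspace sketch of that path; the /dev/sdb device node and the key value
are only example placeholders for whatever device is configured as the
iblock backstore.

/*
 * Illustrative sketch only: register a PR key against the backing block
 * device via the generic block layer PR ioctls (linux/pr.h). sd, nvme and
 * dm-multipath all implement the pr_ops these ioctls land in.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/pr.h>

int main(void)
{
	struct pr_registration reg;
	int fd, ret;

	fd = open("/dev/sdb", O_RDWR);	/* example backstore device */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&reg, 0, sizeof(reg));
	reg.old_key = 0;		/* no existing registration */
	reg.new_key = 0xabcd1234;	/* example reservation key */
	reg.flags = 0;			/* or PR_FL_IGNORE_KEY */

	/* blkdev_ioctl() routes this to the device's ops->pr_register() */
	ret = ioctl(fd, IOC_PR_REGISTER, &reg);
	if (ret)
		fprintf(stderr, "IOC_PR_REGISTER failed: %d\n", ret);

	return ret ? 1 : 0;
}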