From patchwork Sat Jun 27 04:35:00 2020
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 11629139
From: Mike Christie
To: hare@suse.de, bvanassche@acm.org, bstroesser@ts.fujitsu.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org
Subject: [RFC PATCH 01/10] target: add common session id
Date: Fri, 26 Jun 2020 23:35:00 -0500
Message-Id: <1593232509-13720-2-git-send-email-michael.christie@oracle.com>
In-Reply-To: <1593232509-13720-1-git-send-email-michael.christie@oracle.com>
References: <1593232509-13720-1-git-send-email-michael.christie@oracle.com>

The iscsi target and lio core use a unique session id for a couple of uses, like scsiAttIntrPortIndex, and for transport specifics, like logging sessions under a tpgt.

This adds a common id that is managed by lio core and that lio core and all transports can use. In the next patches it will also be used as the session identifier value in the configfs dir name.

Signed-off-by: Mike Christie
---
V3: This is actually the 3rd version of this patch. Bart, in one version you requested that it be per tpg or target. I did that, but then reverted it here. Userspace apps would prefer that it's module wide, so that they can have a single lookup table when exporting a device through multiple targets. Also to keep compat with the old iscsi mod use cases, we needed it to be module wide.

 drivers/target/target_core_transport.c | 22 +++++++++++++++++++---
 include/target/target_core_base.h | 1 +
 2 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c index 90ecdd7..3d06f52 100644 --- a/drivers/target/target_core_transport.c +++ b/drivers/target/target_core_transport.c @@ -50,6 +50,8 @@ struct kmem_cache *t10_alua_lba_map_cache; struct kmem_cache *t10_alua_lba_map_mem_cache; +static DEFINE_IDA(se_sess_ida); + static void transport_complete_task_attr(struct se_cmd *cmd); static void translate_sense_reason(struct se_cmd *cmd, sense_reason_t reason); static void transport_handle_queue_full(struct se_cmd *cmd, @@ -153,6 +155,7 @@ int init_se_kmem_caches(void) void release_se_kmem_caches(void) { + ida_destroy(&se_sess_ida); destroy_workqueue(target_completion_wq); kmem_cache_destroy(se_sess_cache); kmem_cache_destroy(se_ua_cache); @@ -251,14 +254,26 @@ struct se_session *transport_alloc_session(enum target_prot_op sup_prot_ops) " se_sess_cache\n"); return ERR_PTR(-ENOMEM); } - ret = transport_init_session(se_sess); + + ret = ida_simple_get(&se_sess_ida, 1, 0, GFP_KERNEL); if (ret < 0) { - kmem_cache_free(se_sess_cache, se_sess); - return ERR_PTR(ret); + pr_err("Unable to allocate session index.\n"); + goto free_sess; } + se_sess->sid = ret; + + ret = transport_init_session(se_sess); + if (ret < 0) + goto free_ida; se_sess->sup_prot_ops = sup_prot_ops; return se_sess; + +free_ida: + ida_simple_remove(&se_sess_ida, se_sess->sid); +free_sess: + kmem_cache_free(se_sess_cache, se_sess); + return ERR_PTR(ret); } EXPORT_SYMBOL(transport_alloc_session); @@ -580,6 +595,7 @@ void transport_free_session(struct se_session *se_sess) kvfree(se_sess->sess_cmd_map); } percpu_ref_exit(&se_sess->cmd_count); + ida_simple_remove(&se_sess_ida, se_sess->sid); kmem_cache_free(se_sess_cache, se_sess); } EXPORT_SYMBOL(transport_free_session); diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h index 18c3f27..adea3bd 100644 --- a/include/target/target_core_base.h +++ b/include/target/target_core_base.h @@ -623,6 +623,7
@@ struct se_session { wait_queue_head_t cmd_list_wq; void *sess_cmd_map; struct sbitmap_queue sess_tag_pool; + int sid; }; struct se_device;
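For readers skimming the diff above, the allocation scheme patch 01 introduces boils down to a module-wide IDA. A minimal sketch of that pattern, assuming only the ida_* kernel APIs the diff already uses; the example_* names are illustrative and not part of the patch:

#include <linux/idr.h>

static DEFINE_IDA(example_sess_ida);

/* ids start at 1 so 0 is never handed out; unique module wide, not per tpg */
static int example_alloc_sid(void)
{
	return ida_simple_get(&example_sess_ida, 1, 0, GFP_KERNEL);
}

static void example_free_sid(int sid)
{
	ida_simple_remove(&example_sess_ida, sid);
}

Any id handed out this way has to be given back with ida_simple_remove() on error paths too, which is why transport_alloc_session() gains the free_ida label in the hunk above.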
From patchwork Sat Jun 27 04:35:01 2020
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 11629143
From: Mike Christie
To: hare@suse.de, bvanassche@acm.org, bstroesser@ts.fujitsu.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org
Subject: [RFC PATCH 02/10] iscsi target: replace module sids with lio's sid
Date: Fri, 26 Jun 2020 23:35:01 -0500
Message-Id: <1593232509-13720-3-git-send-email-michael.christie@oracle.com>
In-Reply-To: <1593232509-13720-1-git-send-email-michael.christie@oracle.com>
References: <1593232509-13720-1-git-send-email-michael.christie@oracle.com>

This is the first phase in hooking iscsi into the sysfs API. This patch has it use lio core's sid instead of its internal ones.

We have 2 sids in the iscsi target layer:
- module sid: an int id that is unique across all iscsi targets. Used for sess_get_index().
- iscsi target port group sid: an int id that is unique within the tpg. Used for logging.

The lio core sid works exactly like the iscsi target module sid. The iscsi tpg sid is not very useful, because when you have multiple tpgs you can't tell which tpg the session is under; here the lio core sid is more useful because it matches what we see in userspace and in the logs, so we can distinguish which fabric/target/tpg the session is under.

Signed-off-by: Mike Christie
---
 drivers/target/iscsi/iscsi_target.c | 6 ++----
 drivers/target/iscsi/iscsi_target_configfs.c | 6 ++----
 drivers/target/iscsi/iscsi_target_erl0.c | 11 ++++++-----
 drivers/target/iscsi/iscsi_target_erl2.c | 8 ++++----
 drivers/target/iscsi/iscsi_target_login.c | 20 ++------------------
 drivers/target/iscsi/iscsi_target_stat.c | 3 +--
 drivers/target/iscsi/iscsi_target_tmr.c | 2 +-
 drivers/target/iscsi/iscsi_target_tpg.c | 16 +++++++---------
 drivers/target/iscsi/iscsi_target_util.c | 2 +-
 include/target/iscsi/iscsi_target_core.h | 6 ------
 10 files changed, 26 insertions(+), 54 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c index c968961..e3b3de6 100644 --- a/drivers/target/iscsi/iscsi_target.c +++ b/drivers/target/iscsi/iscsi_target.c @@ -49,7 +49,6 @@ static DEFINE_MUTEX(np_lock); static struct idr tiqn_idr; -DEFINE_IDA(sess_ida); struct mutex auth_id_lock; struct iscsit_global *iscsit_global; @@ -2331,7 +2330,7 @@ int iscsit_logout_closesession(struct iscsi_cmd *cmd, struct iscsi_conn *conn) struct iscsi_session *sess = conn->sess; pr_debug("Received logout request CLOSESESSION on CID: %hu" - " for SID: %u.\n", conn->cid, conn->sess->sid); + " for SID: %u.\n", conn->cid, conn->sess->se_sess->sid); atomic_set(&sess->session_logout, 1); atomic_set(&conn->conn_logout_remove, 1); @@ -4110,7 +4109,7 @@ int iscsit_close_connection( struct iscsi_session *sess = conn->sess; pr_debug("Closing iSCSI connection CID %hu on SID:" - " %u\n", conn->cid, sess->sid); + " %u\n", conn->cid, sess->se_sess->sid); /* * Always up conn_logout_comp for the traditional TCP and HW_OFFLOAD * case just in case the RX Thread in iscsi_target_rx_opcode() is @@ -4406,7 +4405,6 @@ int iscsit_close_session(struct iscsi_session *sess) pr_debug("Decremented number of active iSCSI Sessions on" " iSCSI TPG: %hu to %u\n", tpg->tpgt, tpg->nsessions); - ida_free(&sess_ida, sess->session_index); kfree(sess->sess_ops); sess->sess_ops = NULL; spin_unlock_bh(&se_tpg->session_lock); diff --git
a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c index 0fa1d57..d0021bf 100644 --- a/drivers/target/iscsi/iscsi_target_configfs.c +++ b/drivers/target/iscsi/iscsi_target_configfs.c @@ -520,7 +520,7 @@ static ssize_t lio_target_nacl_info_show(struct config_item *item, char *page) rb += sprintf(page+rb, "LIO Session ID: %u ISID: 0x%6ph TSIH: %hu ", - sess->sid, sess->isid, sess->tsih); + se_sess->sid, sess->isid, sess->tsih); rb += sprintf(page+rb, "SessionType: %s\n", (sess->sess_ops->SessionType) ? "Discovery" : "Normal"); @@ -1344,9 +1344,7 @@ static int iscsi_get_cmd_state(struct se_cmd *se_cmd) static u32 lio_sess_get_index(struct se_session *se_sess) { - struct iscsi_session *sess = se_sess->fabric_sess_ptr; - - return sess->session_index; + return se_sess->sid; } static u32 lio_sess_get_initiator_sid( diff --git a/drivers/target/iscsi/iscsi_target_erl0.c b/drivers/target/iscsi/iscsi_target_erl0.c index b4abd7b..e6acd54 100644 --- a/drivers/target/iscsi/iscsi_target_erl0.c +++ b/drivers/target/iscsi/iscsi_target_erl0.c @@ -761,7 +761,7 @@ void iscsit_handle_time2retain_timeout(struct timer_list *t) sess->time2retain_timer_flags |= ISCSI_TF_EXPIRED; pr_err("Time2Retain timer expired for SID: %u, cleaning up" - " iSCSI session.\n", sess->sid); + " iSCSI session.\n", sess->se_sess->sid); iscsit_fill_cxn_timeout_err_stats(sess); spin_unlock_bh(&se_tpg->session_lock); @@ -786,7 +786,8 @@ void iscsit_start_time2retain_handler(struct iscsi_session *sess) return; pr_debug("Starting Time2Retain timer for %u seconds on" - " SID: %u\n", sess->sess_ops->DefaultTime2Retain, sess->sid); + " SID: %u\n", sess->sess_ops->DefaultTime2Retain, + sess->se_sess->sid); sess->time2retain_timer_flags &= ~ISCSI_TF_STOP; sess->time2retain_timer_flags |= ISCSI_TF_RUNNING; @@ -815,7 +816,7 @@ int iscsit_stop_time2retain_timer(struct iscsi_session *sess) spin_lock(&se_tpg->session_lock); sess->time2retain_timer_flags &= ~ISCSI_TF_RUNNING; pr_debug("Stopped Time2Retain Timer for SID: %u\n", - sess->sid); + sess->se_sess->sid); return 0; } @@ -882,8 +883,8 @@ void iscsit_cause_connection_reinstatement(struct iscsi_conn *conn, int sleep) void iscsit_fall_back_to_erl0(struct iscsi_session *sess) { - pr_debug("Falling back to ErrorRecoveryLevel=0 for SID:" - " %u\n", sess->sid); + pr_debug("Falling back to ErrorRecoveryLevel=0 for SID: %u\n", + sess->se_sess->sid); atomic_set(&sess->session_fall_back_to_erl0, 1); } diff --git a/drivers/target/iscsi/iscsi_target_erl2.c b/drivers/target/iscsi/iscsi_target_erl2.c index b1b7db9d..bdc9558 100644 --- a/drivers/target/iscsi/iscsi_target_erl2.c +++ b/drivers/target/iscsi/iscsi_target_erl2.c @@ -93,7 +93,7 @@ static int iscsit_attach_inactive_connection_recovery_entry( sess->conn_recovery_count++; pr_debug("Incremented connection recovery count to %u for" - " SID: %u\n", sess->conn_recovery_count, sess->sid); + " SID: %u\n", sess->conn_recovery_count, sess->se_sess->sid); spin_unlock(&sess->cr_i_lock); return 0; @@ -176,7 +176,7 @@ int iscsit_remove_active_connection_recovery_entry( sess->conn_recovery_count--; pr_debug("Decremented connection recovery count to %u for" - " SID: %u\n", sess->conn_recovery_count, sess->sid); + " SID: %u\n", sess->conn_recovery_count, sess->se_sess->sid); spin_unlock(&sess->cr_a_lock); kfree(cr); @@ -251,11 +251,11 @@ void iscsit_discard_cr_cmds_by_expstatsn( if (!cr->cmd_count) { pr_debug("No commands to be reassigned for failed" " connection CID: %hu on SID: %u\n", - cr->cid, sess->sid); + cr->cid, 
sess->se_sess->sid); iscsit_remove_inactive_connection_recovery_entry(cr, sess); iscsit_attach_active_connection_recovery_entry(sess, cr); pr_debug("iSCSI connection recovery successful for CID:" - " %hu on SID: %u\n", cr->cid, sess->sid); + " %hu on SID: %u\n", cr->cid, sess->se_sess->sid); iscsit_remove_active_connection_recovery_entry(cr, sess); } else { iscsit_remove_inactive_connection_recovery_entry(cr, sess); diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c index 85748e3..417b797 100644 --- a/drivers/target/iscsi/iscsi_target_login.c +++ b/drivers/target/iscsi/iscsi_target_login.c @@ -186,7 +186,7 @@ int iscsi_check_for_session_reinstatement(struct iscsi_conn *conn) pr_debug("%s iSCSI Session SID %u is still active for %s," " performing session reinstatement.\n", (sessiontype) ? - "Discovery" : "Normal", sess->sid, + "Discovery" : "Normal", sess->se_sess->sid, sess->sess_ops->InitiatorName); spin_lock_bh(&sess->conn_lock); @@ -258,7 +258,6 @@ static int iscsi_login_zero_tsih_s1( { struct iscsi_session *sess = NULL; struct iscsi_login_req *pdu = (struct iscsi_login_req *)buf; - int ret; sess = kzalloc(sizeof(struct iscsi_session), GFP_KERNEL); if (!sess) { @@ -292,15 +291,6 @@ static int iscsi_login_zero_tsih_s1( timer_setup(&sess->time2retain_timer, iscsit_handle_time2retain_timeout, 0); - ret = ida_alloc(&sess_ida, GFP_KERNEL); - if (ret < 0) { - pr_err("Session ID allocation failed %d\n", ret); - iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, - ISCSI_LOGIN_STATUS_NO_RESOURCES); - goto free_sess; - } - - sess->session_index = ret; sess->creation_time = get_jiffies_64(); /* * The FFP CmdSN window values will be allocated from the TPG's @@ -314,7 +304,7 @@ static int iscsi_login_zero_tsih_s1( ISCSI_LOGIN_STATUS_NO_RESOURCES); pr_err("Unable to allocate memory for" " struct iscsi_sess_ops.\n"); - goto free_id; + goto free_sess; } sess->se_sess = transport_alloc_session(TARGET_PROT_NORMAL); @@ -328,8 +318,6 @@ static int iscsi_login_zero_tsih_s1( free_ops: kfree(sess->sess_ops); -free_id: - ida_free(&sess_ida, sess->session_index); free_sess: kfree(sess); conn->sess = NULL; @@ -768,9 +756,6 @@ void iscsi_post_login_handler( sess->sess_ops->InitiatorName); spin_unlock_bh(&sess->conn_lock); - sess->sid = tpg->sid++; - if (!sess->sid) - sess->sid = tpg->sid++; pr_debug("Established iSCSI session from node: %s\n", sess->sess_ops->InitiatorName); @@ -1161,7 +1146,6 @@ void iscsi_target_login_sess_out(struct iscsi_conn *conn, goto old_sess_out; transport_free_session(conn->sess->se_sess); - ida_free(&sess_ida, conn->sess->session_index); kfree(conn->sess->sess_ops); kfree(conn->sess); conn->sess = NULL; diff --git a/drivers/target/iscsi/iscsi_target_stat.c b/drivers/target/iscsi/iscsi_target_stat.c index 35e75a3..8167fdc 100644 --- a/drivers/target/iscsi/iscsi_target_stat.c +++ b/drivers/target/iscsi/iscsi_target_stat.c @@ -630,8 +630,7 @@ static ssize_t iscsi_stat_sess_indx_show(struct config_item *item, char *page) if (se_sess) { sess = se_sess->fabric_sess_ptr; if (sess) - ret = snprintf(page, PAGE_SIZE, "%u\n", - sess->session_index); + ret = snprintf(page, PAGE_SIZE, "%u\n", se_sess->sid); } spin_unlock_bh(&se_nacl->nacl_sess_lock); diff --git a/drivers/target/iscsi/iscsi_target_tmr.c b/drivers/target/iscsi/iscsi_target_tmr.c index 7d618db..dbc95eb 100644 --- a/drivers/target/iscsi/iscsi_target_tmr.c +++ b/drivers/target/iscsi/iscsi_target_tmr.c @@ -186,7 +186,7 @@ static void iscsit_task_reassign_remove_cmd( 
spin_unlock(&cr->conn_recovery_cmd_lock); if (!ret) { pr_debug("iSCSI connection recovery successful for CID:" - " %hu on SID: %u\n", cr->cid, sess->sid); + " %hu on SID: %u\n", cr->cid, sess->se_sess->sid); iscsit_remove_active_connection_recovery_entry(cr, sess); } } diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c index 8075f60..e252a7f 100644 --- a/drivers/target/iscsi/iscsi_target_tpg.c +++ b/drivers/target/iscsi/iscsi_target_tpg.c @@ -64,16 +64,13 @@ int iscsit_load_discovery_tpg(void) */ tpg->tpg_se_tpg.se_tpg_tfo = &iscsi_ops; ret = core_tpg_register(NULL, &tpg->tpg_se_tpg, -1); - if (ret < 0) { - kfree(tpg); - return -1; - } + if (ret < 0) + goto free_tpg; - tpg->sid = 1; /* First Assigned LIO Session ID */ iscsit_set_default_tpg_attribs(tpg); if (iscsi_create_default_params(&tpg->param_list) < 0) - goto out; + goto dereg_se_tpg; /* * By default we disable authentication for discovery sessions, * this can be changed with: @@ -97,11 +94,12 @@ int iscsit_load_discovery_tpg(void) pr_debug("CORE[0] - Allocated Discovery TPG\n"); return 0; + free_pl_out: iscsi_release_param_list(tpg->param_list); -out: - if (tpg->sid == 1) - core_tpg_deregister(&tpg->tpg_se_tpg); +dereg_se_tpg: + core_tpg_deregister(&tpg->tpg_se_tpg); +free_tpg: kfree(tpg); return -1; } diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c index 45ba07c6..1b6c2db 100644 --- a/drivers/target/iscsi/iscsi_target_util.c +++ b/drivers/target/iscsi/iscsi_target_util.c @@ -1219,7 +1219,7 @@ void iscsit_print_session_params(struct iscsi_session *sess) struct iscsi_conn *conn; pr_debug("-----------------------------[Session Params for" - " SID: %u]-----------------------------\n", sess->sid); + " SID: %u]-----------------------------\n", sess->se_sess->sid); spin_lock_bh(&sess->conn_lock); list_for_each_entry(conn, &sess->sess_conn_list, conn_list) iscsi_dump_conn_ops(conn->conn_ops); diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h index 4fda324..836eee2 100644 --- a/include/target/iscsi/iscsi_target_core.h +++ b/include/target/iscsi/iscsi_target_core.h @@ -644,11 +644,7 @@ struct iscsi_session { atomic_t max_cmd_sn; struct list_head sess_ooo_cmdsn_list; - /* LIO specific session ID */ - u32 sid; char auth_type[8]; - /* unique within the target */ - int session_index; /* Used for session reference counting */ int session_usage_count; int session_waiting_on_uc; @@ -820,8 +816,6 @@ struct iscsi_portal_group { u32 nsessions; /* Number of Network Portals available for this TPG */ u32 num_tpg_nps; - /* Per TPG LIO specific session ID. 
*/ - u32 sid; /* Spinlock for adding/removing Network Portals */ spinlock_t tpg_np_lock; spinlock_t tpg_state_lock;

From patchwork Sat Jun 27 04:35:02 2020
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 11629165
From: Mike Christie
To: hare@suse.de, bvanassche@acm.org, bstroesser@ts.fujitsu.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org
Subject: [RFC PATCH 03/10] target: drop sess_get_index
Date: Fri, 26 Jun 2020 23:35:02 -0500
Message-Id: <1593232509-13720-4-git-send-email-michael.christie@oracle.com>
In-Reply-To: <1593232509-13720-1-git-send-email-michael.christie@oracle.com>
References: <1593232509-13720-1-git-send-email-michael.christie@oracle.com>

Use the LIO session id for the scsiAttIntrPortIndex. iSCSI was already using this value, and the other drivers used hard-coded values of 1 or 0.

The SCSI-MIB spec says:

    This object represents an arbitrary integer used to uniquely identify a particular attached remote initiator port to a particular SCSI target port within a particular SCSI target device within a particular SCSI instance.

So the lio session sid can be used.

Reviewed-by: Hannes Reinecke
Signed-off-by: Mike Christie
---
 drivers/infiniband/ulp/srpt/ib_srpt.c | 15 ---------------
 drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c | 6 ------
 drivers/scsi/qla2xxx/tcm_qla2xxx.c | 7 -------
 drivers/target/iscsi/iscsi_target_configfs.c | 6 ------
 drivers/target/loopback/tcm_loop.c | 6 ------
 drivers/target/sbp/sbp_target.c | 6 ------
 drivers/target/target_core_configfs.c | 4 ----
 drivers/target/target_core_stat.c | 5 +----
 drivers/target/tcm_fc/tfc_conf.c | 1 -
 drivers/target/tcm_fc/tfc_sess.c | 7 -------
 drivers/usb/gadget/function/f_tcm.c | 6 ------
 drivers/vhost/scsi.c | 6 ------
 drivers/xen/xen-scsiback.c | 6 ------
 include/target/target_core_fabric.h | 1 -
 14 files changed, 1 insertion(+), 81 deletions(-)

diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c index ef7fcd3..de564d1 100644 --- a/drivers/infiniband/ulp/srpt/ib_srpt.c +++ b/drivers/infiniband/ulp/srpt/ib_srpt.c @@ -3344,20 +3344,6 @@ static void srpt_close_session(struct se_session *se_sess) srpt_disconnect_ch_sync(ch); } -/** - * srpt_sess_get_index - return the value of scsiAttIntrPortIndex (SCSI-MIB) - * @se_sess: SCSI target session. - * - * A quote from RFC 4455 (SCSI-MIB) about this MIB object: - * This object represents an arbitrary integer used to uniquely identify a - * particular attached remote initiator port to a particular SCSI target port - * within a particular SCSI target device within a particular SCSI instance.
- */ -static u32 srpt_sess_get_index(struct se_session *se_sess) -{ - return 0; -} - static void srpt_set_default_node_attrs(struct se_node_acl *nacl) { } @@ -3834,7 +3820,6 @@ static ssize_t srpt_wwn_version_show(struct config_item *item, char *buf) .release_cmd = srpt_release_cmd, .check_stop_free = srpt_check_stop_free, .close_session = srpt_close_session, - .sess_get_index = srpt_sess_get_index, .sess_get_initiator_sid = NULL, .write_pending = srpt_write_pending, .set_default_node_attributes = srpt_set_default_node_attrs, diff --git a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c index d9e94e8..a817524 100644 --- a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c +++ b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c @@ -3739,11 +3739,6 @@ static void ibmvscsis_release_cmd(struct se_cmd *se_cmd) spin_unlock_bh(&vscsi->intr_lock); } -static u32 ibmvscsis_sess_get_index(struct se_session *se_sess) -{ - return 0; -} - static int ibmvscsis_write_pending(struct se_cmd *se_cmd) { struct ibmvscsis_cmd *cmd = container_of(se_cmd, struct ibmvscsis_cmd, @@ -4034,7 +4029,6 @@ static ssize_t ibmvscsis_tpg_enable_store(struct config_item *item, .tpg_get_inst_index = ibmvscsis_tpg_get_inst_index, .check_stop_free = ibmvscsis_check_stop_free, .release_cmd = ibmvscsis_release_cmd, - .sess_get_index = ibmvscsis_sess_get_index, .write_pending = ibmvscsis_write_pending, .set_default_node_attributes = ibmvscsis_set_default_node_attrs, .get_cmd_state = ibmvscsis_get_cmd_state, diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c index 68183a9..fa861ba 100644 --- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c +++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c @@ -377,11 +377,6 @@ static void tcm_qla2xxx_close_session(struct se_session *se_sess) tcm_qla2xxx_put_sess(sess); } -static u32 tcm_qla2xxx_sess_get_index(struct se_session *se_sess) -{ - return 0; -} - static int tcm_qla2xxx_write_pending(struct se_cmd *se_cmd) { struct qla_tgt_cmd *cmd = container_of(se_cmd, @@ -1853,7 +1848,6 @@ static ssize_t tcm_qla2xxx_wwn_version_show(struct config_item *item, .check_stop_free = tcm_qla2xxx_check_stop_free, .release_cmd = tcm_qla2xxx_release_cmd, .close_session = tcm_qla2xxx_close_session, - .sess_get_index = tcm_qla2xxx_sess_get_index, .sess_get_initiator_sid = NULL, .write_pending = tcm_qla2xxx_write_pending, .set_default_node_attributes = tcm_qla2xxx_set_default_node_attrs, @@ -1893,7 +1887,6 @@ static ssize_t tcm_qla2xxx_wwn_version_show(struct config_item *item, .check_stop_free = tcm_qla2xxx_check_stop_free, .release_cmd = tcm_qla2xxx_release_cmd, .close_session = tcm_qla2xxx_close_session, - .sess_get_index = tcm_qla2xxx_sess_get_index, .sess_get_initiator_sid = NULL, .write_pending = tcm_qla2xxx_write_pending, .set_default_node_attributes = tcm_qla2xxx_set_default_node_attrs, diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c index d0021bf..d8b657b 100644 --- a/drivers/target/iscsi/iscsi_target_configfs.c +++ b/drivers/target/iscsi/iscsi_target_configfs.c @@ -1342,11 +1342,6 @@ static int iscsi_get_cmd_state(struct se_cmd *se_cmd) return cmd->i_state; } -static u32 lio_sess_get_index(struct se_session *se_sess) -{ - return se_sess->sid; -} - static u32 lio_sess_get_initiator_sid( struct se_session *se_sess, unsigned char *buf, @@ -1542,7 +1537,6 @@ static void lio_release_cmd(struct se_cmd *se_cmd) .check_stop_free = lio_check_stop_free, .release_cmd = lio_release_cmd, .close_session = lio_tpg_close_session, - 
.sess_get_index = lio_sess_get_index, .sess_get_initiator_sid = lio_sess_get_initiator_sid, .write_pending = lio_write_pending, .set_default_node_attributes = lio_set_default_node_attributes, diff --git a/drivers/target/loopback/tcm_loop.c b/drivers/target/loopback/tcm_loop.c index 16d5a4e..70eb87e 100644 --- a/drivers/target/loopback/tcm_loop.c +++ b/drivers/target/loopback/tcm_loop.c @@ -512,11 +512,6 @@ static u32 tcm_loop_get_inst_index(struct se_portal_group *se_tpg) return 1; } -static u32 tcm_loop_sess_get_index(struct se_session *se_sess) -{ - return 1; -} - static void tcm_loop_set_default_node_attributes(struct se_node_acl *se_acl) { return; @@ -1131,7 +1126,6 @@ static ssize_t tcm_loop_wwn_version_show(struct config_item *item, char *page) .tpg_get_inst_index = tcm_loop_get_inst_index, .check_stop_free = tcm_loop_check_stop_free, .release_cmd = tcm_loop_release_cmd, - .sess_get_index = tcm_loop_sess_get_index, .write_pending = tcm_loop_write_pending, .set_default_node_attributes = tcm_loop_set_default_node_attributes, .get_cmd_state = tcm_loop_get_cmd_state, diff --git a/drivers/target/sbp/sbp_target.c b/drivers/target/sbp/sbp_target.c index e4a9b9f..944ba4d 100644 --- a/drivers/target/sbp/sbp_target.c +++ b/drivers/target/sbp/sbp_target.c @@ -1708,11 +1708,6 @@ static void sbp_release_cmd(struct se_cmd *se_cmd) sbp_free_request(req); } -static u32 sbp_sess_get_index(struct se_session *se_sess) -{ - return 0; -} - static int sbp_write_pending(struct se_cmd *se_cmd) { struct sbp_target_request *req = container_of(se_cmd, @@ -2309,7 +2304,6 @@ static ssize_t sbp_tpg_attrib_max_logins_per_lun_store(struct config_item *item, .tpg_check_prod_mode_write_protect = sbp_check_false, .tpg_get_inst_index = sbp_tpg_get_inst_index, .release_cmd = sbp_release_cmd, - .sess_get_index = sbp_sess_get_index, .write_pending = sbp_write_pending, .set_default_node_attributes = sbp_set_default_node_attrs, .get_cmd_state = sbp_get_cmd_state, diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c index f043522..bb932d4 100644 --- a/drivers/target/target_core_configfs.c +++ b/drivers/target/target_core_configfs.c @@ -385,10 +385,6 @@ static int target_fabric_tf_ops_check(const struct target_core_fabric_ops *tfo) pr_err("Missing tfo->release_cmd()\n"); return -EINVAL; } - if (!tfo->sess_get_index) { - pr_err("Missing tfo->sess_get_index()\n"); - return -EINVAL; - } if (!tfo->write_pending) { pr_err("Missing tfo->write_pending()\n"); return -EINVAL; diff --git a/drivers/target/target_core_stat.c b/drivers/target/target_core_stat.c index 237309d..2aeb843 100644 --- a/drivers/target/target_core_stat.c +++ b/drivers/target/target_core_stat.c @@ -1264,7 +1264,6 @@ static ssize_t target_stat_iport_indx_show(struct config_item *item, struct se_lun_acl *lacl = iport_to_lacl(item); struct se_node_acl *nacl = lacl->se_lun_nacl; struct se_session *se_sess; - struct se_portal_group *tpg; ssize_t ret; spin_lock_irq(&nacl->nacl_sess_lock); @@ -1274,10 +1273,8 @@ static ssize_t target_stat_iport_indx_show(struct config_item *item, return -ENODEV; } - tpg = nacl->se_tpg; /* scsiAttIntrPortIndex */ - ret = snprintf(page, PAGE_SIZE, "%u\n", - tpg->se_tpg_tfo->sess_get_index(se_sess)); + ret = snprintf(page, PAGE_SIZE, "%u\n", se_sess->sid); spin_unlock_irq(&nacl->nacl_sess_lock); return ret; } diff --git a/drivers/target/tcm_fc/tfc_conf.c b/drivers/target/tcm_fc/tfc_conf.c index 1a38c98..ff18e0a 100644 --- a/drivers/target/tcm_fc/tfc_conf.c +++ b/drivers/target/tcm_fc/tfc_conf.c @@ 
-426,7 +426,6 @@ static u32 ft_tpg_get_inst_index(struct se_portal_group *se_tpg) .check_stop_free = ft_check_stop_free, .release_cmd = ft_release_cmd, .close_session = ft_sess_close, - .sess_get_index = ft_sess_get_index, .sess_get_initiator_sid = NULL, .write_pending = ft_write_pending, .set_default_node_attributes = ft_set_default_node_attr, diff --git a/drivers/target/tcm_fc/tfc_sess.c b/drivers/target/tcm_fc/tfc_sess.c index 4fd6a1d..6df570a 100644 --- a/drivers/target/tcm_fc/tfc_sess.c +++ b/drivers/target/tcm_fc/tfc_sess.c @@ -325,13 +325,6 @@ void ft_sess_close(struct se_session *se_sess) synchronize_rcu(); /* let transport deregister happen */ } -u32 ft_sess_get_index(struct se_session *se_sess) -{ - struct ft_sess *sess = se_sess->fabric_sess_ptr; - - return sess->port_id; /* XXX TBD probably not what is needed */ -} - u32 ft_sess_get_port_name(struct se_session *se_sess, unsigned char *buf, u32 len) { diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c index eaf556c..e66884f 100644 --- a/drivers/usb/gadget/function/f_tcm.c +++ b/drivers/usb/gadget/function/f_tcm.c @@ -1292,11 +1292,6 @@ static void usbg_release_cmd(struct se_cmd *se_cmd) target_free_tag(se_sess, se_cmd); } -static u32 usbg_sess_get_index(struct se_session *se_sess) -{ - return 0; -} - static void usbg_set_default_node_attrs(struct se_node_acl *nacl) { } @@ -1719,7 +1714,6 @@ static int usbg_check_stop_free(struct se_cmd *se_cmd) .tpg_check_prod_mode_write_protect = usbg_check_false, .tpg_get_inst_index = usbg_tpg_get_inst_index, .release_cmd = usbg_release_cmd, - .sess_get_index = usbg_sess_get_index, .sess_get_initiator_sid = NULL, .write_pending = usbg_send_write_request, .set_default_node_attributes = usbg_set_default_node_attrs, diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index 6fb4d7e..7896e69 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -340,11 +340,6 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd) target_free_tag(se_sess, se_cmd); } -static u32 vhost_scsi_sess_get_index(struct se_session *se_sess) -{ - return 0; -} - static int vhost_scsi_write_pending(struct se_cmd *se_cmd) { /* Go ahead and process the write immediately */ @@ -2291,7 +2286,6 @@ static void vhost_scsi_drop_tport(struct se_wwn *wwn) .tpg_get_inst_index = vhost_scsi_tpg_get_inst_index, .release_cmd = vhost_scsi_release_cmd, .check_stop_free = vhost_scsi_check_stop_free, - .sess_get_index = vhost_scsi_sess_get_index, .sess_get_initiator_sid = NULL, .write_pending = vhost_scsi_write_pending, .set_default_node_attributes = vhost_scsi_set_default_node_attrs, diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c index 75c0a2e..f0a8fc7 100644 --- a/drivers/xen/xen-scsiback.c +++ b/drivers/xen/xen-scsiback.c @@ -1392,11 +1392,6 @@ static void scsiback_release_cmd(struct se_cmd *se_cmd) target_free_tag(se_cmd->se_sess, se_cmd); } -static u32 scsiback_sess_get_index(struct se_session *se_sess) -{ - return 0; -} - static int scsiback_write_pending(struct se_cmd *se_cmd) { /* Go ahead and process the write immediately */ @@ -1811,7 +1806,6 @@ static int scsiback_check_false(struct se_portal_group *se_tpg) .tpg_get_inst_index = scsiback_tpg_get_inst_index, .check_stop_free = scsiback_check_stop_free, .release_cmd = scsiback_release_cmd, - .sess_get_index = scsiback_sess_get_index, .sess_get_initiator_sid = NULL, .write_pending = scsiback_write_pending, .set_default_node_attributes = scsiback_set_default_node_attrs, diff --git 
a/include/target/target_core_fabric.h b/include/target/target_core_fabric.h index 6adf4d7..6b05510 100644 --- a/include/target/target_core_fabric.h +++ b/include/target/target_core_fabric.h @@ -66,7 +66,6 @@ struct target_core_fabric_ops { int (*check_stop_free)(struct se_cmd *); void (*release_cmd)(struct se_cmd *); void (*close_session)(struct se_session *); - u32 (*sess_get_index)(struct se_session *); /* * Used only for SCSI fabrics that contain multi-value TransportIDs * (like iSCSI). All other SCSI fabrics should set this to NULL.

From patchwork Sat Jun 27 04:35:03 2020
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 11629145
From: Mike Christie
To: hare@suse.de, bvanassche@acm.org, bstroesser@ts.fujitsu.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org
Subject: [RFC PATCH 04/10] target: fix xcopy sess release leak
Date: Fri, 26 Jun 2020 23:35:03 -0500
Message-Id: <1593232509-13720-5-git-send-email-michael.christie@oracle.com>
In-Reply-To: <1593232509-13720-1-git-send-email-michael.christie@oracle.com>
References: <1593232509-13720-1-git-send-email-michael.christie@oracle.com>

transport_init_session can allocate memory via percpu_ref_init, and target_xcopy_release_pt never frees it. This adds a transport_uninit_session function to handle cleanup of resources allocated in the init function.

Signed-off-by: Mike Christie
---
 drivers/target/target_core_internal.h | 1 +
 drivers/target/target_core_transport.c | 7 ++++++-
 drivers/target/target_core_xcopy.c | 11 +++++++++--
 3 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h index 8533444..e7b3c6e 100644 --- a/drivers/target/target_core_internal.h +++ b/drivers/target/target_core_internal.h @@ -138,6 +138,7 @@ struct se_node_acl *core_tpg_add_initiator_node_acl(struct se_portal_group *tpg, void release_se_kmem_caches(void); u32 scsi_get_new_index(scsi_index_t); void transport_subsystem_check_init(void); +void transport_uninit_session(struct se_session *); unsigned char *transport_dump_cmd_direction(struct se_cmd *); void transport_dump_dev_state(struct se_device *, char *, int *); void transport_dump_dev_info(struct se_device *, struct se_lun *, diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c index 3d06f52..0da9bba 100644 --- a/drivers/target/target_core_transport.c +++ b/drivers/target/target_core_transport.c @@ -239,6 +239,11 @@ int transport_init_session(struct se_session *se_sess) } EXPORT_SYMBOL(transport_init_session); +void transport_uninit_session(struct se_session *se_sess) +{ + percpu_ref_exit(&se_sess->cmd_count); +} + /** * transport_alloc_session - allocate a session object and initialize it * @sup_prot_ops: bitmask that defines which T10-PI modes are supported.
@@ -594,7 +599,7 @@ void transport_free_session(struct se_session *se_sess) sbitmap_queue_free(&se_sess->sess_tag_pool); kvfree(se_sess->sess_cmd_map); } - percpu_ref_exit(&se_sess->cmd_count); + transport_uninit_session(se_sess); ida_simple_remove(&se_sess_ida, se_sess->sid); kmem_cache_free(se_sess_cache, se_sess); } diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c index 0d00ccb..44e15d7 100644 --- a/drivers/target/target_core_xcopy.c +++ b/drivers/target/target_core_xcopy.c @@ -474,7 +474,7 @@ int target_xcopy_setup_pt(void) memset(&xcopy_pt_sess, 0, sizeof(struct se_session)); ret = transport_init_session(&xcopy_pt_sess); if (ret < 0) - return ret; + goto destroy_wq; xcopy_pt_nacl.se_tpg = &xcopy_pt_tpg; xcopy_pt_nacl.nacl_sess = &xcopy_pt_sess; @@ -483,12 +483,19 @@ int target_xcopy_setup_pt(void) xcopy_pt_sess.se_node_acl = &xcopy_pt_nacl; return 0; + +destroy_wq: + destroy_workqueue(xcopy_wq); + xcopy_wq = NULL; + return ret; } void target_xcopy_release_pt(void) { - if (xcopy_wq) + if (xcopy_wq) { destroy_workqueue(xcopy_wq); + transport_uninit_session(&xcopy_pt_sess); + } } /*
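The leak patch 04 fixes comes from transport_init_session() setting up the cmd_count percpu_ref, so every caller, including the xcopy passthrough session, needs a matching percpu_ref_exit(). A minimal sketch of that pairing, assuming only the percpu_ref_init()/percpu_ref_exit() kernel APIs; the example_* names are illustrative and not from the patch:

#include <linux/percpu-refcount.h>
#include <linux/gfp.h>

struct example_sess {
	struct percpu_ref cmd_count;
};

static void example_cmd_count_release(struct percpu_ref *ref)
{
	/* called once the last command reference is dropped */
}

static int example_init_session(struct example_sess *s)
{
	/* allocates percpu counters; must be paired with percpu_ref_exit() */
	return percpu_ref_init(&s->cmd_count, example_cmd_count_release, 0,
			       GFP_KERNEL);
}

static void example_uninit_session(struct example_sess *s)
{
	percpu_ref_exit(&s->cmd_count);
}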
From patchwork Sat Jun 27 04:35:04 2020
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 11629177
From: Mike Christie
To: hare@suse.de, bvanassche@acm.org, bstroesser@ts.fujitsu.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org
Subject: [RFC PATCH 05/10] target: add free_session callout and use cfgfs refcounts
Date: Fri, 26 Jun 2020 23:35:04 -0500
Message-Id: <1593232509-13720-6-git-send-email-michael.christie@oracle.com>
In-Reply-To: <1593232509-13720-1-git-send-email-michael.christie@oracle.com>
References: <1593232509-13720-1-git-send-email-michael.christie@oracle.com>

This adds a free_session callout and uses the configfs refcounts to determine when to free the session.

Currently, when you delete an ACL it will call into the fabric module to remove the session. While this is happening, userspace could be accessing a configfs file that accesses the se_session. In many places we try to check for this by using the acl/tpg locks. This patch has us start using refcounts and release functions so we can drop that locking.

This first patch just hooks us into the refcounting. The next patches will then add us into configfs and export some useful info.

Hannes, you had added your reviewed-by tag to the original version of this patch ("target: add free_session callout"), but I did not carry it over because I modified it to use the configfs refcounts instead of being sysfs based.
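To make the refcount flow above concrete, here is a stripped-down sketch of the configfs release pattern the session is moved to, assuming only the standard configfs API (config_group, configfs_item_operations->release, config_item_put); the example_* names are illustrative and not from the patch:

#include <linux/configfs.h>
#include <linux/module.h>
#include <linux/slab.h>

struct example_session {
	struct config_group group;
	/* fabric private state would live here */
};

/* runs only after the last config_item reference is dropped */
static void example_sess_release(struct config_item *item)
{
	struct example_session *sess = container_of(to_config_group(item),
						    struct example_session,
						    group);
	kfree(sess);
}

static struct configfs_item_operations example_sess_item_ops = {
	.release	= example_sess_release,
};

static const struct config_item_type example_sess_type = {
	.ct_owner	= THIS_MODULE,
	.ct_item_ops	= &example_sess_item_ops,
};

static struct example_session *example_sess_alloc(void)
{
	struct example_session *sess;

	sess = kzalloc(sizeof(*sess), GFP_KERNEL);
	if (!sess)
		return NULL;
	/* "session-0" stands in for the session-%d naming the patch sets up */
	config_group_init_type_name(&sess->group, "session-0",
				    &example_sess_type);
	return sess;
}

/* teardown drops a reference instead of kfree()ing directly */
static void example_sess_free(struct example_session *sess)
{
	config_item_put(&sess->group.cg_item);
}

With this shape, transport_free_session() only drops its reference; the actual free happens in the release callback once configfs has also let go of the item, which is what lets the acl/tpg locking be dropped in later patches.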
Signed-off-by: Mike Christie --- drivers/target/iscsi/iscsi_target.c | 6 +- drivers/target/iscsi/iscsi_target_configfs.c | 9 +++ drivers/target/iscsi/iscsi_target_login.c | 8 +-- drivers/target/target_core_fabric_configfs.c | 33 +++++++++++ drivers/target/target_core_internal.h | 2 + drivers/target/target_core_transport.c | 88 +++++++++++++++++++--------- include/target/target_core_base.h | 2 + include/target/target_core_fabric.h | 15 ++++- 8 files changed, 122 insertions(+), 41 deletions(-) diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c index e3b3de6..9f215ff 100644 --- a/drivers/target/iscsi/iscsi_target.c +++ b/drivers/target/iscsi/iscsi_target.c @@ -4386,8 +4386,6 @@ int iscsit_close_session(struct iscsi_session *sess) } } - transport_deregister_session(sess->se_sess); - if (sess->sess_ops->ErrorRecoveryLevel == 2) iscsit_free_connection_recovery_entries(sess); @@ -4405,11 +4403,9 @@ int iscsit_close_session(struct iscsi_session *sess) pr_debug("Decremented number of active iSCSI Sessions on" " iSCSI TPG: %hu to %u\n", tpg->tpgt, tpg->nsessions); - kfree(sess->sess_ops); - sess->sess_ops = NULL; spin_unlock_bh(&se_tpg->session_lock); - kfree(sess); + transport_deregister_session(sess->se_sess); return 0; } diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c index d8b657b..3e843b0 100644 --- a/drivers/target/iscsi/iscsi_target_configfs.c +++ b/drivers/target/iscsi/iscsi_target_configfs.c @@ -1488,6 +1488,14 @@ static void lio_tpg_close_session(struct se_session *se_sess) iscsit_dec_session_usage_count(sess); } +static void lio_free_session(struct se_session *se_sess) +{ + struct iscsi_session *sess = se_sess->fabric_sess_ptr; + + kfree(sess->sess_ops); + kfree(sess); +} + static u32 lio_tpg_get_inst_index(struct se_portal_group *se_tpg) { return iscsi_tpg(se_tpg)->tpg_tiqn->tiqn_index; @@ -1537,6 +1545,7 @@ static void lio_release_cmd(struct se_cmd *se_cmd) .check_stop_free = lio_check_stop_free, .release_cmd = lio_release_cmd, .close_session = lio_tpg_close_session, + .free_session = lio_free_session, .sess_get_initiator_sid = lio_sess_get_initiator_sid, .write_pending = lio_write_pending, .set_default_node_attributes = lio_set_default_node_attributes, diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c index 417b797..486297c 100644 --- a/drivers/target/iscsi/iscsi_target_login.c +++ b/drivers/target/iscsi/iscsi_target_login.c @@ -307,7 +307,8 @@ static int iscsi_login_zero_tsih_s1( goto free_sess; } - sess->se_sess = transport_alloc_session(TARGET_PROT_NORMAL); + sess->se_sess = transport_alloc_session(&iscsi_ops, TARGET_PROT_NORMAL, + sess); if (IS_ERR(sess->se_sess)) { iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, ISCSI_LOGIN_STATUS_NO_RESOURCES); @@ -740,7 +741,7 @@ void iscsi_post_login_handler( spin_lock_bh(&se_tpg->session_lock); __transport_register_session(&sess->tpg->tpg_se_tpg, - se_sess->se_node_acl, se_sess, sess); + se_sess->se_node_acl, se_sess); pr_debug("Moving to TARG_SESS_STATE_LOGGED_IN.\n"); sess->session_state = TARG_SESS_STATE_LOGGED_IN; @@ -1146,9 +1147,6 @@ void iscsi_target_login_sess_out(struct iscsi_conn *conn, goto old_sess_out; transport_free_session(conn->sess->se_sess); - kfree(conn->sess->sess_ops); - kfree(conn->sess); - conn->sess = NULL; old_sess_out: iscsi_stop_login_thread_timer(np); diff --git a/drivers/target/target_core_fabric_configfs.c b/drivers/target/target_core_fabric_configfs.c index 
ee85602..2b70bdf 100644 --- a/drivers/target/target_core_fabric_configfs.c +++ b/drivers/target/target_core_fabric_configfs.c @@ -799,6 +799,39 @@ static void target_fabric_drop_lun( TF_CIT_SETUP_DRV(tpg_auth, NULL, NULL); TF_CIT_SETUP_DRV(tpg_param, NULL, NULL); +static void target_cfgfs_sess_release(struct config_item *item) +{ + struct se_session *se_sess = container_of(to_config_group(item), + struct se_session, group); + target_release_session(se_sess); +} + +static struct configfs_item_operations target_sess_item_ops = { + .release = target_cfgfs_sess_release, +}; + +static struct config_item_type target_sess_type = { + .ct_owner = THIS_MODULE, + .ct_item_ops = &target_sess_item_ops, +}; + +int target_cfgfs_init_session(struct se_session *se_sess) +{ + int ret; + + ret = config_item_set_name(&se_sess->group.cg_item, "session-%d", + se_sess->sid); + if (ret) { + pr_err("Could not set configfs name for sid %d. Error %d.\n", + se_sess->sid, ret); + return ret; + } + + se_sess->group.cg_item.ci_type = &target_sess_type; + config_group_init(&se_sess->group); + return 0; +} + /* Start of tfc_tpg_base_cit */ static void target_fabric_tpg_release(struct config_item *item) diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h index e7b3c6e..e92dcf2 100644 --- a/drivers/target/target_core_internal.h +++ b/drivers/target/target_core_internal.h @@ -92,6 +92,7 @@ int target_for_each_device(int (*fn)(struct se_device *dev, void *data), /* target_core_configfs.c */ extern struct configfs_item_operations target_core_dev_item_ops; void target_setup_backend_cits(struct target_backend *); +int target_cfgfs_init_session(struct se_session *); /* target_core_fabric_configfs.c */ int target_fabric_setup_cits(struct target_fabric_configfs *); @@ -153,6 +154,7 @@ void transport_dump_dev_info(struct se_device *, struct se_lun *, bool target_check_wce(struct se_device *dev); bool target_check_fua(struct se_device *dev); void __target_execute_cmd(struct se_cmd *, bool); +void target_release_session(struct se_session *); /* target_core_stat.c */ void target_stat_setup_dev_default_groups(struct se_device *); diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c index 0da9bba..8d11a8c 100644 --- a/drivers/target/target_core_transport.c +++ b/drivers/target/target_core_transport.c @@ -246,9 +246,13 @@ void transport_uninit_session(struct se_session *se_sess) /** * transport_alloc_session - allocate a session object and initialize it + * @tfo: target core fabric ops * @sup_prot_ops: bitmask that defines which T10-PI modes are supported. 
+ * @private: pointer to fabric session storage in fabric_sess_ptr */ -struct se_session *transport_alloc_session(enum target_prot_op sup_prot_ops) +struct se_session *transport_alloc_session(const struct target_core_fabric_ops *tfo, + enum target_prot_op sup_prot_ops, + void *private) { struct se_session *se_sess; int ret; @@ -259,6 +263,8 @@ struct se_session *transport_alloc_session(enum target_prot_op sup_prot_ops) " se_sess_cache\n"); return ERR_PTR(-ENOMEM); } + se_sess->fabric_sess_ptr = private; + se_sess->tfo = tfo; ret = ida_simple_get(&se_sess_ida, 1, 0, GFP_KERNEL); if (ret < 0) { @@ -270,10 +276,20 @@ struct se_session *transport_alloc_session(enum target_prot_op sup_prot_ops) ret = transport_init_session(se_sess); if (ret < 0) goto free_ida; - se_sess->sup_prot_ops = sup_prot_ops; + /* + * After this call we always (even if it does not get added to cfgfs) + * use the cfg item refcounts to determine when to release the sess. + */ + ret = target_cfgfs_init_session(se_sess); + if (ret) + goto uninit_sess; + + se_sess->sup_prot_ops = sup_prot_ops; return se_sess; +uninit_sess: + transport_uninit_session(se_sess); free_ida: ida_simple_remove(&se_sess_ida, se_sess->sid); free_sess: @@ -317,14 +333,17 @@ int transport_alloc_session_tags(struct se_session *se_sess, /** * transport_init_session_tags - allocate a session and target driver private data + * @tfo: target core fabric ops * @tag_num: Maximum number of in-flight commands between initiator and target. * @tag_size: Size in bytes of the private data a target driver associates with * each command. * @sup_prot_ops: bitmask that defines which T10-PI modes are supported. + * @private: pointer to fabric session storage in fabric_sess_ptr */ static struct se_session * -transport_init_session_tags(unsigned int tag_num, unsigned int tag_size, - enum target_prot_op sup_prot_ops) +transport_init_session_tags(const struct target_core_fabric_ops *tfo, + unsigned int tag_num, unsigned int tag_size, + enum target_prot_op sup_prot_ops, void *private) { struct se_session *se_sess; int rc; @@ -340,7 +359,7 @@ int transport_alloc_session_tags(struct se_session *se_sess, return ERR_PTR(-EINVAL); } - se_sess = transport_alloc_session(sup_prot_ops); + se_sess = transport_alloc_session(tfo, sup_prot_ops, private); if (IS_ERR(se_sess)) return se_sess; @@ -359,15 +378,13 @@ int transport_alloc_session_tags(struct se_session *se_sess, void __transport_register_session( struct se_portal_group *se_tpg, struct se_node_acl *se_nacl, - struct se_session *se_sess, - void *fabric_sess_ptr) + struct se_session *se_sess) { const struct target_core_fabric_ops *tfo = se_tpg->se_tpg_tfo; unsigned char buf[PR_REG_ISID_LEN]; unsigned long flags; se_sess->se_tpg = se_tpg; - se_sess->fabric_sess_ptr = fabric_sess_ptr; /* * Used by struct se_node_acl's under ConfigFS to locate active se_session-t * @@ -415,20 +432,19 @@ void __transport_register_session( list_add_tail(&se_sess->sess_list, &se_tpg->tpg_sess_list); pr_debug("TARGET_CORE[%s]: Registered fabric_sess_ptr: %p\n", - se_tpg->se_tpg_tfo->fabric_name, se_sess->fabric_sess_ptr); + se_sess->tfo->fabric_name, se_sess->fabric_sess_ptr); } EXPORT_SYMBOL(__transport_register_session); void transport_register_session( struct se_portal_group *se_tpg, struct se_node_acl *se_nacl, - struct se_session *se_sess, - void *fabric_sess_ptr) + struct se_session *se_sess) { unsigned long flags; spin_lock_irqsave(&se_tpg->session_lock, flags); - __transport_register_session(se_tpg, se_nacl, se_sess, fabric_sess_ptr); + 
__transport_register_session(se_tpg, se_nacl, se_sess); spin_unlock_irqrestore(&se_tpg->session_lock, flags); } EXPORT_SYMBOL(transport_register_session); @@ -442,15 +458,18 @@ struct se_session * struct se_session *, void *)) { struct se_session *sess; + int rc; /* * If the fabric driver is using percpu-ida based pre allocation * of I/O descriptor tags, go ahead and perform that setup now.. */ if (tag_num != 0) - sess = transport_init_session_tags(tag_num, tag_size, prot_op); + sess = transport_init_session_tags(tpg->se_tpg_tfo, tag_num, + tag_size, prot_op, private); else - sess = transport_alloc_session(prot_op); + sess = transport_alloc_session(tpg->se_tpg_tfo, prot_op, + private); if (IS_ERR(sess)) return sess; @@ -458,23 +477,30 @@ struct se_session * sess->se_node_acl = core_tpg_check_initiator_node_acl(tpg, (unsigned char *)initiatorname); if (!sess->se_node_acl) { - transport_free_session(sess); - return ERR_PTR(-EACCES); + rc = -EACCES; + goto free_session; } /* * Go ahead and perform any remaining fabric setup that is * required before transport_register_session(). */ if (callback != NULL) { - int rc = callback(tpg, sess, private); - if (rc) { - transport_free_session(sess); - return ERR_PTR(rc); - } + rc = callback(tpg, sess, private); + if (rc) + goto free_session; } - transport_register_session(tpg, sess->se_node_acl, sess, private); + transport_register_session(tpg, sess->se_node_acl, sess); return sess; + +free_session: + /* + * Don't call back into the driver's free_session. The setup callback + * was not successfully run, so the fabric didn't perform its setup. + */ + sess->tfo = NULL; + transport_free_session(sess); + return ERR_PTR(rc); } EXPORT_SYMBOL(target_setup_session); @@ -557,6 +583,15 @@ void transport_deregister_session_configfs(struct se_session *se_sess) } EXPORT_SYMBOL(transport_deregister_session_configfs); +void target_release_session(struct se_session *se_sess) +{ + if (se_sess->tfo && se_sess->tfo->free_session) + se_sess->tfo->free_session(se_sess); + + ida_simple_remove(&se_sess_ida, se_sess->sid); + kmem_cache_free(se_sess_cache, se_sess); +} + void transport_free_session(struct se_session *se_sess) { struct se_node_acl *se_nacl = se_sess->se_node_acl; @@ -567,7 +602,6 @@ void transport_free_session(struct se_session *se_sess) */ if (se_nacl) { struct se_portal_group *se_tpg = se_nacl->se_tpg; - const struct target_core_fabric_ops *se_tfo = se_tpg->se_tpg_tfo; unsigned long flags; se_sess->se_node_acl = NULL; @@ -579,7 +613,7 @@ void transport_free_session(struct se_session *se_sess) */ mutex_lock(&se_tpg->acl_node_mutex); if (se_nacl->dynamic_node_acl && - !se_tfo->tpg_check_demo_mode_cache(se_tpg)) { + !se_sess->tfo->tpg_check_demo_mode_cache(se_tpg)) { spin_lock_irqsave(&se_nacl->nacl_sess_lock, flags); if (list_empty(&se_nacl->acl_sess_list)) se_nacl->dynamic_stop = true; @@ -600,8 +634,7 @@ void transport_free_session(struct se_session *se_sess) kvfree(se_sess->sess_cmd_map); } transport_uninit_session(se_sess); - ida_simple_remove(&se_sess_ida, se_sess->sid); - kmem_cache_free(se_sess_cache, se_sess); + config_group_put(&se_sess->group); } EXPORT_SYMBOL(transport_free_session); @@ -627,7 +660,6 @@ void transport_deregister_session(struct se_session *se_sess) spin_lock_irqsave(&se_tpg->session_lock, flags); list_del(&se_sess->sess_list); se_sess->se_tpg = NULL; - se_sess->fabric_sess_ptr = NULL; spin_unlock_irqrestore(&se_tpg->session_lock, flags); /* @@ -637,7 +669,7 @@ void transport_deregister_session(struct se_session *se_sess) 
target_for_each_device(target_release_res, se_sess); pr_debug("TARGET_CORE[%s]: Deregistered fabric_sess\n", - se_tpg->se_tpg_tfo->fabric_name); + se_sess->tfo->fabric_name); /* * If last kref is dropping now for an explicit NodeACL, awake sleeping * ->acl_free_comp caller to wakeup configfs se_node_acl->acl_group diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h index adea3bd..d6aca0c 100644 --- a/include/target/target_core_base.h +++ b/include/target/target_core_base.h @@ -624,6 +624,8 @@ struct se_session { void *sess_cmd_map; struct sbitmap_queue sess_tag_pool; int sid; + struct config_group group; + const struct target_core_fabric_ops *tfo; }; struct se_device; diff --git a/include/target/target_core_fabric.h b/include/target/target_core_fabric.h index 6b05510..838c2f0 100644 --- a/include/target/target_core_fabric.h +++ b/include/target/target_core_fabric.h @@ -65,6 +65,14 @@ struct target_core_fabric_ops { */ int (*check_stop_free)(struct se_cmd *); void (*release_cmd)(struct se_cmd *); + /* + * Optional callout to free internal fabric resources when all + * references to the session have been dropped. For modules that call + * transport_alloc_session this will be called if that function + * is successfully run. For modules that call target_setup_session + * the callout will be called if that function is successfully run. + */ + void (*free_session)(struct se_session *); void (*close_session)(struct se_session *); /* * Used only for SCSI fabrics that contain multi-value TransportIDs @@ -132,13 +140,14 @@ struct se_session *target_setup_session(struct se_portal_group *, void target_remove_session(struct se_session *); int transport_init_session(struct se_session *se_sess); -struct se_session *transport_alloc_session(enum target_prot_op); +struct se_session *transport_alloc_session(const struct target_core_fabric_ops *, + enum target_prot_op, void *); int transport_alloc_session_tags(struct se_session *, unsigned int, unsigned int); void __transport_register_session(struct se_portal_group *, - struct se_node_acl *, struct se_session *, void *); + struct se_node_acl *, struct se_session *); void transport_register_session(struct se_portal_group *, - struct se_node_acl *, struct se_session *, void *); + struct se_node_acl *, struct se_session *); ssize_t target_show_dynamic_sessions(struct se_portal_group *, char *); void transport_free_session(struct se_session *); void target_spc2_release(struct se_node_acl *nacl); From patchwork Sat Jun 27 04:35:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Christie X-Patchwork-Id: 11629163 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 86FDE14E3 for ; Sat, 27 Jun 2020 04:35:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6A97720857 for ; Sat, 27 Jun 2020 04:35:32 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b="f8mM+vIZ" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1725770AbgF0EfZ (ORCPT ); Sat, 27 Jun 2020 00:35:25 -0400 Received: from userp2130.oracle.com ([156.151.31.86]:39056 "EHLO userp2130.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725840AbgF0EfY (ORCPT ); Sat, 27 Jun 2020 00:35:24 -0400 Received: from 
pps.filterd (userp2130.oracle.com [127.0.0.1]) by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 05R4X03n118380; Sat, 27 Jun 2020 04:35:17 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : subject : date : message-id : in-reply-to : references; s=corp-2020-01-29; bh=xfALCTNiUytLnfV1dbweiTUu9AvHamL/fQ+ICFQjfL4=; b=f8mM+vIZeJ1CED2ZN5EJ/wOy3+ZMAfFlzcyTCmGVOiGKy71LJ/+8+z1swqkWnuSasE41 vs7kdWS7CNYM4iq8dswA4PISGZMWOUfPVp5poffztYaqmlcxkav0B/I34Rfplns7aeCA aNCwXu4E7uJeyiz9ORFWWhgKqkpXW2zbcyE/AV7ZdR9KuwkLZTb+ZX9RJ6faluz7VWnw bHG+guy+uKp0YdeaIN0P34Glw4Nk5XwifQpPXlhYd0XvKpyPA7DTaxmYBuDRZaaw0CTI Sy7pxIHxGYsoNjmn4T6yUoad/5cCFGJFPLNo+M0DAyOPFBJiekUwDBSvMKFNdLNXqZvT xw== Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71]) by userp2130.oracle.com with ESMTP id 31wwhr83sm-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL); Sat, 27 Jun 2020 04:35:17 +0000 Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1]) by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 05R4Y94x121992; Sat, 27 Jun 2020 04:35:16 GMT Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75]) by aserp3030.oracle.com with ESMTP id 31wv58v8a5-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Sat, 27 Jun 2020 04:35:16 +0000 Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20]) by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 05R4ZFLg004220; Sat, 27 Jun 2020 04:35:15 GMT Received: from ol2.localdomain (/73.88.28.6) by default (Oracle Beehive Gateway v4.0) with ESMTP ; Sat, 27 Jun 2020 04:35:14 +0000 From: Mike Christie To: hare@suse.de, bvanassche@acm.org, bstroesser@ts.fujitsu.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org Subject: [RFC PATCH 06/10] tcm_loop: fix nexus races Date: Fri, 26 Jun 2020 23:35:05 -0500 Message-Id: <1593232509-13720-7-git-send-email-michael.christie@oracle.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1593232509-13720-1-git-send-email-michael.christie@oracle.com> References: <1593232509-13720-1-git-send-email-michael.christie@oracle.com> X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9664 signatures=668680 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 suspectscore=0 bulkscore=0 mlxscore=0 mlxlogscore=999 malwarescore=0 spamscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000 definitions=main-2006270030 X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9664 signatures=668680 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 malwarescore=0 phishscore=0 priorityscore=1501 clxscore=1015 cotscore=-2147483648 mlxscore=0 adultscore=0 lowpriorityscore=0 impostorscore=0 bulkscore=0 spamscore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000 definitions=main-2006270030 Sender: target-devel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: target-devel@vger.kernel.org We could be freeing the loop nexus while accessing it from other configfs files, and we could have multiple writers to the nexus file. This adds a mutex aroung these operations like is done in other modules that have the nexus configfs interface. 
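[Editorial note] To illustrate the race the new mutex closes, here is a simplified sketch; it is not the patch itself (the hunks below are authoritative) and example_make_nexus() is a hypothetical name. Without tl_nexus_mutex, two concurrent writers to the nexus configfs file can both observe tl_nexus == NULL and both try to create it, and a reader can see a nexus that a concurrent drop is freeing.

	/*
	 * Illustrative only: condensed from the tcm_loop_make_nexus() change
	 * below. The existence check, allocation, and assignment all happen
	 * under tl_nexus_mutex so concurrent configfs writers serialize.
	 */
	static int example_make_nexus(struct tcm_loop_tpg *tl_tpg)
	{
		struct tcm_loop_nexus *tl_nexus;

		mutex_lock(&tl_tpg->tl_nexus_mutex);
		if (tl_tpg->tl_nexus) {
			/* another writer already created the nexus */
			mutex_unlock(&tl_tpg->tl_nexus_mutex);
			return -EEXIST;
		}
		tl_nexus = kzalloc(sizeof(*tl_nexus), GFP_KERNEL);
		if (!tl_nexus) {
			mutex_unlock(&tl_tpg->tl_nexus_mutex);
			return -ENOMEM;
		}
		/* ... set up tl_nexus->se_sess here ... */
		tl_tpg->tl_nexus = tl_nexus;
		mutex_unlock(&tl_tpg->tl_nexus_mutex);
		return 0;
	}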
Signed-off-by: Mike Christie --- drivers/target/loopback/tcm_loop.c | 30 ++++++++++++++++++++++++++---- drivers/target/loopback/tcm_loop.h | 1 + 2 files changed, 27 insertions(+), 4 deletions(-) diff --git a/drivers/target/loopback/tcm_loop.c b/drivers/target/loopback/tcm_loop.c index 70eb87e..570dff2 100644 --- a/drivers/target/loopback/tcm_loop.c +++ b/drivers/target/loopback/tcm_loop.c @@ -720,14 +720,18 @@ static int tcm_loop_make_nexus( struct tcm_loop_nexus *tl_nexus; int ret; + mutex_lock(&tl_tpg->tl_nexus_mutex); if (tl_tpg->tl_nexus) { pr_debug("tl_tpg->tl_nexus already exists\n"); + mutex_unlock(&tl_tpg->tl_nexus_mutex); return -EEXIST; } tl_nexus = kzalloc(sizeof(*tl_nexus), GFP_KERNEL); - if (!tl_nexus) + if (!tl_nexus) { + mutex_unlock(&tl_tpg->tl_nexus_mutex); return -ENOMEM; + } tl_nexus->se_sess = target_setup_session(&tl_tpg->tl_se_tpg, 0, 0, TARGET_PROT_DIN_PASS | TARGET_PROT_DOUT_PASS, @@ -735,8 +739,10 @@ static int tcm_loop_make_nexus( if (IS_ERR(tl_nexus->se_sess)) { ret = PTR_ERR(tl_nexus->se_sess); kfree(tl_nexus); + mutex_unlock(&tl_tpg->tl_nexus_mutex); return ret; } + mutex_unlock(&tl_tpg->tl_nexus_mutex); pr_debug("TCM_Loop_ConfigFS: Established I_T Nexus to emulated %s Initiator Port: %s\n", tcm_loop_dump_proto_id(tl_hba), name); @@ -749,17 +755,23 @@ static int tcm_loop_drop_nexus( struct se_session *se_sess; struct tcm_loop_nexus *tl_nexus; + mutex_lock(&tpg->tl_nexus_mutex); tl_nexus = tpg->tl_nexus; - if (!tl_nexus) + if (!tl_nexus) { + mutex_unlock(&tpg->tl_nexus_mutex); return -ENODEV; + } se_sess = tl_nexus->se_sess; - if (!se_sess) + if (!se_sess) { + mutex_unlock(&tpg->tl_nexus_mutex); return -ENODEV; + } if (atomic_read(&tpg->tl_tpg_port_count)) { pr_err("Unable to remove TCM_Loop I_T Nexus with active TPG port count: %d\n", atomic_read(&tpg->tl_tpg_port_count)); + mutex_unlock(&tpg->tl_nexus_mutex); return -EPERM; } @@ -771,6 +783,8 @@ static int tcm_loop_drop_nexus( */ target_remove_session(se_sess); tpg->tl_nexus = NULL; + mutex_unlock(&tpg->tl_nexus_mutex); + kfree(tl_nexus); return 0; } @@ -785,12 +799,16 @@ static ssize_t tcm_loop_tpg_nexus_show(struct config_item *item, char *page) struct tcm_loop_nexus *tl_nexus; ssize_t ret; + mutex_lock(&tl_tpg->tl_nexus_mutex); tl_nexus = tl_tpg->tl_nexus; - if (!tl_nexus) + if (!tl_nexus) { + mutex_unlock(&tl_tpg->tl_nexus_mutex); return -ENODEV; + } ret = snprintf(page, PAGE_SIZE, "%s\n", tl_nexus->se_sess->se_node_acl->initiatorname); + mutex_unlock(&tl_tpg->tl_nexus_mutex); return ret; } @@ -909,11 +927,14 @@ static ssize_t tcm_loop_tpg_transport_status_store(struct config_item *item, } if (!strncmp(page, "offline", 7)) { tl_tpg->tl_transport_status = TCM_TRANSPORT_OFFLINE; + + mutex_lock(&tl_tpg->tl_nexus_mutex); if (tl_tpg->tl_nexus) { struct se_session *tl_sess = tl_tpg->tl_nexus->se_sess; core_allocate_nexus_loss_ua(tl_sess->se_node_acl); } + mutex_unlock(&tl_tpg->tl_nexus_mutex); return count; } return -EINVAL; @@ -968,6 +989,7 @@ static struct se_portal_group *tcm_loop_make_naa_tpg(struct se_wwn *wwn, tl_tpg = &tl_hba->tl_hba_tpgs[tpgt]; tl_tpg->tl_hba = tl_hba; tl_tpg->tl_tpgt = tpgt; + mutex_init(&tl_tpg->tl_nexus_mutex); /* * Register the tl_tpg as a emulated TCM Target Endpoint */ diff --git a/drivers/target/loopback/tcm_loop.h b/drivers/target/loopback/tcm_loop.h index d311090..88a4eff 100644 --- a/drivers/target/loopback/tcm_loop.h +++ b/drivers/target/loopback/tcm_loop.h @@ -40,6 +40,7 @@ struct tcm_loop_tpg { struct se_portal_group tl_se_tpg; struct tcm_loop_hba *tl_hba; struct 
tcm_loop_nexus *tl_nexus; + struct mutex tl_nexus_mutex; }; struct tcm_loop_hba { From patchwork Sat Jun 27 04:35:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Christie X-Patchwork-Id: 11629141 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7BA85912 for ; Sat, 27 Jun 2020 04:35:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5B3E620857 for ; Sat, 27 Jun 2020 04:35:26 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b="qUG2YDlO" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1725840AbgF0EfZ (ORCPT ); Sat, 27 Jun 2020 00:35:25 -0400 Received: from userp2120.oracle.com ([156.151.31.85]:58916 "EHLO userp2120.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725832AbgF0EfY (ORCPT ); Sat, 27 Jun 2020 00:35:24 -0400 Received: from pps.filterd (userp2120.oracle.com [127.0.0.1]) by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 05R4WxI3078438; Sat, 27 Jun 2020 04:35:18 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : subject : date : message-id : in-reply-to : references; s=corp-2020-01-29; bh=a5+qWx19xiffHAyz0XEGWFulanRlwMIBkYQcqbAE5PA=; b=qUG2YDlOqICAKJqGQUzwWfFuRwBf9VKKLeYVOoFEh/7JrWbIWYx7d+WUSwWpReo+8WG1 XsiZcIm1vGsfi0GZhYankSEIRQim9V3gdatTA2g6bzWK/shsFBCKM9ri+wulSVfEavno 3nbwxws6k9dAy2j70bXr+eIO7NvQWCxchf/4obfCnv/Dms8GIcobOQ533dktlTnzEwPt bCvEldY/2/j+20XeQsuzBrGchgcsrFxPYiS4ofG0Rz4kMGwPAgCKugfOSK0ajPGiOjSr TmoZSG7PRIT1afo2UgEo4APd8a0xCHuqUcNPiy92Yxu/mt1LWguSTQdBi2TONOud5czT yg== Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70]) by userp2120.oracle.com with ESMTP id 31wxrmr0y6-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL); Sat, 27 Jun 2020 04:35:17 +0000 Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1]) by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 05R4YMUk050661; Sat, 27 Jun 2020 04:35:17 GMT Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72]) by aserp3020.oracle.com with ESMTP id 31wwwyvnpw-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Sat, 27 Jun 2020 04:35:16 +0000 Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20]) by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 05R4ZF1O010409; Sat, 27 Jun 2020 04:35:15 GMT Received: from ol2.localdomain (/73.88.28.6) by default (Oracle Beehive Gateway v4.0) with ESMTP ; Sat, 27 Jun 2020 04:35:15 +0000 From: Mike Christie To: hare@suse.de, bvanassche@acm.org, bstroesser@ts.fujitsu.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org Subject: [RFC PATCH 07/10] target: add return value to close_session Date: Fri, 26 Jun 2020 23:35:06 -0500 Message-Id: <1593232509-13720-8-git-send-email-michael.christie@oracle.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1593232509-13720-1-git-send-email-michael.christie@oracle.com> References: <1593232509-13720-1-git-send-email-michael.christie@oracle.com> X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9664 signatures=668680 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 suspectscore=2 adultscore=0 mlxscore=0 spamscore=0 bulkscore=0 
malwarescore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000 definitions=main-2006270030 X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9664 signatures=668680 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 mlxlogscore=999 priorityscore=1501 impostorscore=0 bulkscore=0 clxscore=1015 malwarescore=0 phishscore=0 adultscore=0 cotscore=-2147483648 lowpriorityscore=0 suspectscore=2 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000 definitions=main-2006270030 Sender: target-devel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: target-devel@vger.kernel.org This adds a return value to close_session. In this patch only fcoe returns non-zero and we don't do anything. In the next patches we will be able to remove the session via configfs through the common fabric configfs interface and the fabric specific nexus one, so we will need to handle the case where the interfaces both try to delete the session. Signed-off-by: Mike Christie --- drivers/infiniband/ulp/srpt/ib_srpt.c | 3 ++- drivers/scsi/qla2xxx/tcm_qla2xxx.c | 3 ++- drivers/target/iscsi/iscsi_target_configfs.c | 5 +++-- drivers/target/tcm_fc/tcm_fc.h | 2 +- drivers/target/tcm_fc/tfc_sess.c | 5 +++-- include/target/target_core_fabric.h | 2 +- 6 files changed, 12 insertions(+), 8 deletions(-) diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c index de564d1..f9a5bd8 100644 --- a/drivers/infiniband/ulp/srpt/ib_srpt.c +++ b/drivers/infiniband/ulp/srpt/ib_srpt.c @@ -3337,11 +3337,12 @@ static void srpt_release_cmd(struct se_cmd *se_cmd) * with a node ACL when the user invokes * rmdir /sys/kernel/config/target/$driver/$port/$tpg/acls/$i_port_id */ -static void srpt_close_session(struct se_session *se_sess) +static int srpt_close_session(struct se_session *se_sess) { struct srpt_rdma_ch *ch = se_sess->fabric_sess_ptr; srpt_disconnect_ch_sync(ch); + return 0; } static void srpt_set_default_node_attrs(struct se_node_acl *nacl) diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c index fa861ba..94a26ba 100644 --- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c +++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c @@ -360,7 +360,7 @@ static void tcm_qla2xxx_put_sess(struct fc_port *sess) kref_put(&sess->sess_kref, tcm_qla2xxx_release_session); } -static void tcm_qla2xxx_close_session(struct se_session *se_sess) +static int tcm_qla2xxx_close_session(struct se_session *se_sess) { struct fc_port *sess = se_sess->fabric_sess_ptr; struct scsi_qla_host *vha; @@ -375,6 +375,7 @@ static void tcm_qla2xxx_close_session(struct se_session *se_sess) sess->explicit_logout = 1; tcm_qla2xxx_put_sess(sess); + return 0; } static int tcm_qla2xxx_write_pending(struct se_cmd *se_cmd) diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c index 3e843b0..aa7c4a6 100644 --- a/drivers/target/iscsi/iscsi_target_configfs.c +++ b/drivers/target/iscsi/iscsi_target_configfs.c @@ -1460,7 +1460,7 @@ static int lio_tpg_check_prot_fabric_only( * This function calls iscsit_inc_session_usage_count() on the * struct iscsi_session in question. 
*/ -static void lio_tpg_close_session(struct se_session *se_sess) +static int lio_tpg_close_session(struct se_session *se_sess) { struct iscsi_session *sess = se_sess->fabric_sess_ptr; struct se_portal_group *se_tpg = &sess->tpg->tpg_se_tpg; @@ -1473,7 +1473,7 @@ static void lio_tpg_close_session(struct se_session *se_sess) (sess->time2retain_timer_flags & ISCSI_TF_EXPIRED)) { spin_unlock(&sess->conn_lock); spin_unlock_bh(&se_tpg->session_lock); - return; + return 0; } iscsit_inc_session_usage_count(sess); atomic_set(&sess->session_reinstatement, 1); @@ -1486,6 +1486,7 @@ static void lio_tpg_close_session(struct se_session *se_sess) iscsit_stop_session(sess, 1, 1); iscsit_dec_session_usage_count(sess); + return 0; } static void lio_free_session(struct se_session *se_sess) diff --git a/drivers/target/tcm_fc/tcm_fc.h b/drivers/target/tcm_fc/tcm_fc.h index 2ff716d..4280171 100644 --- a/drivers/target/tcm_fc/tcm_fc.h +++ b/drivers/target/tcm_fc/tcm_fc.h @@ -130,7 +130,7 @@ struct ft_cmd { * Session ops. */ void ft_sess_put(struct ft_sess *); -void ft_sess_close(struct se_session *); +int ft_sess_close(struct se_session *); u32 ft_sess_get_index(struct se_session *); u32 ft_sess_get_port_name(struct se_session *, unsigned char *, u32); diff --git a/drivers/target/tcm_fc/tfc_sess.c b/drivers/target/tcm_fc/tfc_sess.c index 6df570a..12c54e6 100644 --- a/drivers/target/tcm_fc/tfc_sess.c +++ b/drivers/target/tcm_fc/tfc_sess.c @@ -306,7 +306,7 @@ static void ft_sess_delete_all(struct ft_tport *tport) * Remove session and send PRLO. * This is called when the ACL is being deleted or queue depth is changing. */ -void ft_sess_close(struct se_session *se_sess) +int ft_sess_close(struct se_session *se_sess) { struct ft_sess *sess = se_sess->fabric_sess_ptr; u32 port_id; @@ -315,7 +315,7 @@ void ft_sess_close(struct se_session *se_sess) port_id = sess->port_id; if (port_id == -1) { mutex_unlock(&ft_lport_lock); - return; + return -ENODEV; } TFC_SESS_DBG(sess->tport->lport, "port_id %x close session\n", port_id); ft_sess_unhash(sess); @@ -323,6 +323,7 @@ void ft_sess_close(struct se_session *se_sess) ft_close_sess(sess); /* XXX Send LOGO or PRLO */ synchronize_rcu(); /* let transport deregister happen */ + return 0; } u32 ft_sess_get_port_name(struct se_session *se_sess, diff --git a/include/target/target_core_fabric.h b/include/target/target_core_fabric.h index 838c2f0..e200faa 100644 --- a/include/target/target_core_fabric.h +++ b/include/target/target_core_fabric.h @@ -73,7 +73,7 @@ struct target_core_fabric_ops { * the callout will be called if that function is successfully run. */ void (*free_session)(struct se_session *); - void (*close_session)(struct se_session *); + int (*close_session)(struct se_session *); /* * Used only for SCSI fabrics that contain multi-value TransportIDs * (like iSCSI). All other SCSI fabrics should set this to NULL. 
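[Editorial note] A brief sketch of why the int return matters, based on how the later patches in this series consume it (not part of this patch): the common configfs removal path and a fabric's own nexus interface can both try to delete the same session, and a -ENODEV return lets the caller tell "I closed it" apart from "someone else is already tearing it down".

	/* Illustrative caller-side handling of the new return value. */
	ret = se_sess->tfo->close_session(se_sess);
	if (ret == -ENODEV) {
		/*
		 * The fabric's own nexus interface already started the
		 * teardown; wait for that removal to complete instead of
		 * reporting a failure.
		 */
	} else if (ret < 0) {
		/* close failed for another reason; undo our state and
		 * return the error to the caller. */
	}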
From patchwork Sat Jun 27 04:35:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Christie X-Patchwork-Id: 11629169 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3A84314E3 for ; Sat, 27 Jun 2020 04:35:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2284A20CC7 for ; Sat, 27 Jun 2020 04:35:33 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b="aBJI6w58" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1725994AbgF0Efc (ORCPT ); Sat, 27 Jun 2020 00:35:32 -0400 Received: from aserp2120.oracle.com ([141.146.126.78]:42094 "EHLO aserp2120.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725797AbgF0Ef0 (ORCPT ); Sat, 27 Jun 2020 00:35:26 -0400 Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1]) by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 05R4WxLJ045438; Sat, 27 Jun 2020 04:35:17 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : subject : date : message-id : in-reply-to : references; s=corp-2020-01-29; bh=SktdWlZAxVirP2saH2LaktTAxFtxq4YWloWq9Ic/Vl8=; b=aBJI6w58y35Tq+aveT+H7XhC3IfHfUWzw3pXyWHvG8wt0ZBf0fiSTBV1bD7cnr/Ol7Xr RSE7EThibUxkcU4cr7Q9RRJgbbeT14SyM5kQHi83hRd0KRessyrrXMypzFov+R7bCEjo pQkx6QMdZwE08NE47vMcZbrLg/C+Sy+s9reUPYAxTZx3hFvLBA/yrJuIpQm66OypN0A7 QJXytTACvqoXe2eiHpfGI0acijTpg/R38hx+hGu717GliZBXUC4d5TDjIJSEN5KvdqH4 xLSheVLJZ7UbOgyDx+XrbB9v87C9kDIcjNoVLEZTTCivDjlq0uTbK4ZuOW91ujrLYzuY +Q== Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71]) by aserp2120.oracle.com with ESMTP id 31wx2m82g5-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL); Sat, 27 Jun 2020 04:35:16 +0000 Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1]) by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 05R4YJtC122171; Sat, 27 Jun 2020 04:35:16 GMT Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236]) by aserp3030.oracle.com with ESMTP id 31wv58v8ae-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Sat, 27 Jun 2020 04:35:16 +0000 Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20]) by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 05R4ZGAK030605; Sat, 27 Jun 2020 04:35:16 GMT Received: from ol2.localdomain (/73.88.28.6) by default (Oracle Beehive Gateway v4.0) with ESMTP ; Sat, 27 Jun 2020 04:35:15 +0000 From: Mike Christie To: hare@suse.de, bvanassche@acm.org, bstroesser@ts.fujitsu.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org Subject: [RFC PATCH 08/10] ibm,loop,vhost,xenscsi: add close_session callouts Date: Fri, 26 Jun 2020 23:35:07 -0500 Message-Id: <1593232509-13720-9-git-send-email-michael.christie@oracle.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1593232509-13720-1-git-send-email-michael.christie@oracle.com> References: <1593232509-13720-1-git-send-email-michael.christie@oracle.com> X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9664 signatures=668680 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 suspectscore=2 bulkscore=0 mlxscore=0 mlxlogscore=999 malwarescore=0 spamscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx 
scancount=1 engine=8.12.0-2004280000 definitions=main-2006270030 X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9664 signatures=668680 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 mlxscore=0 lowpriorityscore=0 bulkscore=0 mlxlogscore=999 clxscore=1015 adultscore=0 impostorscore=0 cotscore=-2147483648 priorityscore=1501 malwarescore=0 phishscore=0 suspectscore=2 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000 definitions=main-2006270030 Sender: target-devel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: target-devel@vger.kernel.org Call the fabric modules drop nexus functions from the close_session callout. Signed-off-by: Mike Christie --- drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c | 14 +++++++++++--- drivers/target/loopback/tcm_loop.c | 17 ++++++++++++----- drivers/usb/gadget/function/f_tcm.c | 17 ++++++++++++----- drivers/vhost/scsi.c | 16 ++++++++++++---- drivers/xen/xen-scsiback.c | 16 ++++++++++++---- 5 files changed, 59 insertions(+), 21 deletions(-) diff --git a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c index a817524..81f9649 100644 --- a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c +++ b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c @@ -2239,10 +2239,12 @@ static int ibmvscsis_make_nexus(struct ibmvscsis_tport *tport) return rc; } -static int ibmvscsis_drop_nexus(struct ibmvscsis_tport *tport) +static int ibmvscsis_drop_nexus(struct se_portal_group *se_tpg) { - struct se_session *se_sess; + struct ibmvscsis_tport *tport = + container_of(se_tpg, struct ibmvscsis_tport, se_tpg); struct ibmvscsis_nexus *nexus; + struct se_session *se_sess; nexus = tport->ibmv_nexus; if (!nexus) @@ -2262,6 +2264,11 @@ static int ibmvscsis_drop_nexus(struct ibmvscsis_tport *tport) return 0; } +static int ibmvscsis_close_session(struct se_session *se_sess) +{ + return ibmvscsis_drop_nexus(se_sess->se_tpg); +} + /** * ibmvscsis_srp_login() - Process an SRP Login Request * @vscsi: Pointer to our adapter structure @@ -3934,7 +3941,7 @@ static void ibmvscsis_drop_tpg(struct se_portal_group *se_tpg) /* * Release the virtual I_T Nexus for this ibmvscsis TPG */ - ibmvscsis_drop_nexus(tport); + ibmvscsis_drop_nexus(se_tpg); /* * Deregister the se_tpg from TCM.. 
*/ @@ -4036,6 +4043,7 @@ static ssize_t ibmvscsis_tpg_enable_store(struct config_item *item, .queue_status = ibmvscsis_queue_status, .queue_tm_rsp = ibmvscsis_queue_tm_rsp, .aborted_task = ibmvscsis_aborted_task, + .close_session = ibmvscsis_close_session, /* * Setup function pointers for logic in target_core_fabric_configfs.c */ diff --git a/drivers/target/loopback/tcm_loop.c b/drivers/target/loopback/tcm_loop.c index 570dff2..d5e616e 100644 --- a/drivers/target/loopback/tcm_loop.c +++ b/drivers/target/loopback/tcm_loop.c @@ -749,11 +749,12 @@ static int tcm_loop_make_nexus( return 0; } -static int tcm_loop_drop_nexus( - struct tcm_loop_tpg *tpg) +static int tcm_loop_drop_nexus(struct se_portal_group *se_tpg) { - struct se_session *se_sess; + struct tcm_loop_tpg *tpg = container_of(se_tpg, struct tcm_loop_tpg, + tl_se_tpg); struct tcm_loop_nexus *tl_nexus; + struct se_session *se_sess; mutex_lock(&tpg->tl_nexus_mutex); tl_nexus = tpg->tl_nexus; @@ -789,6 +790,11 @@ static int tcm_loop_drop_nexus( return 0; } +static int tcm_loop_close_session(struct se_session *se_sess) +{ + return tcm_loop_drop_nexus(se_sess->se_tpg); +} + /* End items for tcm_loop_nexus_cit */ static ssize_t tcm_loop_tpg_nexus_show(struct config_item *item, char *page) @@ -826,7 +832,7 @@ static ssize_t tcm_loop_tpg_nexus_store(struct config_item *item, * Shutdown the active I_T nexus if 'NULL' is passed.. */ if (!strncmp(page, "NULL", 4)) { - ret = tcm_loop_drop_nexus(tl_tpg); + ret = tcm_loop_drop_nexus(se_tpg); return (!ret) ? count : ret; } /* @@ -1017,7 +1023,7 @@ static void tcm_loop_drop_naa_tpg( /* * Release the I_T Nexus for the Virtual target link if present */ - tcm_loop_drop_nexus(tl_tpg); + tcm_loop_drop_nexus(se_tpg); /* * Deregister the tl_tpg as a emulated TCM Target Endpoint */ @@ -1155,6 +1161,7 @@ static ssize_t tcm_loop_wwn_version_show(struct config_item *item, char *page) .queue_status = tcm_loop_queue_status, .queue_tm_rsp = tcm_loop_queue_tm_rsp, .aborted_task = tcm_loop_aborted_task, + .close_session = tcm_loop_close_session, .fabric_make_wwn = tcm_loop_make_scsi_hba, .fabric_drop_wwn = tcm_loop_drop_scsi_hba, .fabric_make_tpg = tcm_loop_make_naa_tpg, diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c index e66884f..ed449ab 100644 --- a/drivers/usb/gadget/function/f_tcm.c +++ b/drivers/usb/gadget/function/f_tcm.c @@ -1415,7 +1415,7 @@ static struct se_portal_group *usbg_make_tpg(struct se_wwn *wwn, return ERR_PTR(ret); } -static int tcm_usbg_drop_nexus(struct usbg_tpg *); +static int tcm_usbg_drop_nexus(struct se_portal_group *); static void usbg_drop_tpg(struct se_portal_group *se_tpg) { @@ -1424,7 +1424,7 @@ static void usbg_drop_tpg(struct se_portal_group *se_tpg) unsigned i; struct f_tcm_opts *opts; - tcm_usbg_drop_nexus(tpg); + tcm_usbg_drop_nexus(se_tpg); core_tpg_deregister(se_tpg); destroy_workqueue(tpg->workqueue); @@ -1597,10 +1597,11 @@ static int tcm_usbg_make_nexus(struct usbg_tpg *tpg, char *name) return ret; } -static int tcm_usbg_drop_nexus(struct usbg_tpg *tpg) +static int tcm_usbg_drop_nexus(struct se_portal_group *se_tpg) { - struct se_session *se_sess; + struct usbg_tpg *tpg = container_of(se_tpg, struct usbg_tpg, se_tpg); struct tcm_usbg_nexus *tv_nexus; + struct se_session *se_sess; int ret = -ENODEV; mutex_lock(&tpg->tpg_mutex); @@ -1635,6 +1636,11 @@ static int tcm_usbg_drop_nexus(struct usbg_tpg *tpg) return ret; } +static int tcm_usbg_close_session(struct se_session *se_sess) +{ + return tcm_usbg_drop_nexus(se_sess->se_tpg); +} + 
static ssize_t tcm_usbg_tpg_nexus_store(struct config_item *item, const char *page, size_t count) { @@ -1644,7 +1650,7 @@ static ssize_t tcm_usbg_tpg_nexus_store(struct config_item *item, int ret; if (!strncmp(page, "NULL", 4)) { - ret = tcm_usbg_drop_nexus(tpg); + ret = tcm_usbg_drop_nexus(se_tpg); return (!ret) ? count : ret; } if (strlen(page) >= USBG_NAMELEN) { @@ -1723,6 +1729,7 @@ static int usbg_check_stop_free(struct se_cmd *se_cmd) .queue_tm_rsp = usbg_queue_tm_rsp, .aborted_task = usbg_aborted_task, .check_stop_free = usbg_check_stop_free, + .close_session = tcm_usbg_close_session, .fabric_make_wwn = usbg_make_tport, .fabric_drop_wwn = usbg_drop_tport, diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index 7896e69..d30b8da 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -1972,10 +1972,12 @@ static int vhost_scsi_make_nexus(struct vhost_scsi_tpg *tpg, return 0; } -static int vhost_scsi_drop_nexus(struct vhost_scsi_tpg *tpg) +static int vhost_scsi_drop_nexus(struct se_portal_group *se_tpg) { - struct se_session *se_sess; + struct vhost_scsi_tpg *tpg = container_of(se_tpg, struct vhost_scsi_tpg, + se_tpg); struct vhost_scsi_nexus *tv_nexus; + struct se_session *se_sess; mutex_lock(&tpg->tv_tpg_mutex); tv_nexus = tpg->tpg_nexus; @@ -2022,6 +2024,11 @@ static int vhost_scsi_drop_nexus(struct vhost_scsi_tpg *tpg) return 0; } +static int vhost_scsi_close_session(struct se_session *se_sess) +{ + return vhost_scsi_drop_nexus(se_sess->se_tpg); +} + static ssize_t vhost_scsi_tpg_nexus_show(struct config_item *item, char *page) { struct se_portal_group *se_tpg = to_tpg(item); @@ -2056,7 +2063,7 @@ static ssize_t vhost_scsi_tpg_nexus_store(struct config_item *item, * Shutdown the active I_T nexus if 'NULL' is passed.. */ if (!strncmp(page, "NULL", 4)) { - ret = vhost_scsi_drop_nexus(tpg); + ret = vhost_scsi_drop_nexus(se_tpg); return (!ret) ? count : ret; } /* @@ -2176,7 +2183,7 @@ static void vhost_scsi_drop_tpg(struct se_portal_group *se_tpg) /* * Release the virtual I_T Nexus for this vhost TPG */ - vhost_scsi_drop_nexus(tpg); + vhost_scsi_drop_nexus(se_tpg); /* * Deregister the se_tpg from TCM.. 
*/ @@ -2294,6 +2301,7 @@ static void vhost_scsi_drop_tport(struct se_wwn *wwn) .queue_status = vhost_scsi_queue_status, .queue_tm_rsp = vhost_scsi_queue_tm_rsp, .aborted_task = vhost_scsi_aborted_task, + .close_session = vhost_scsi_close_session, /* * Setup callers for generic logic in target_core_fabric_configfs.c */ diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c index f0a8fc7..ffd1a7a 100644 --- a/drivers/xen/xen-scsiback.c +++ b/drivers/xen/xen-scsiback.c @@ -1538,10 +1538,12 @@ static int scsiback_make_nexus(struct scsiback_tpg *tpg, return ret; } -static int scsiback_drop_nexus(struct scsiback_tpg *tpg) +static int scsiback_drop_nexus(struct se_portal_group *se_tpg) { - struct se_session *se_sess; + struct scsiback_tpg *tpg = container_of(se_tpg, struct scsiback_tpg, + se_tpg); struct scsiback_nexus *tv_nexus; + struct se_session *se_sess; mutex_lock(&tpg->tv_tpg_mutex); tv_nexus = tpg->tpg_nexus; @@ -1585,6 +1587,11 @@ static int scsiback_drop_nexus(struct scsiback_tpg *tpg) return 0; } +static int scsiback_close_session(struct se_session *se_sess) +{ + return scsiback_drop_nexus(se_sess->se_tpg); +} + static ssize_t scsiback_tpg_nexus_show(struct config_item *item, char *page) { struct se_portal_group *se_tpg = to_tpg(item); @@ -1619,7 +1626,7 @@ static ssize_t scsiback_tpg_nexus_store(struct config_item *item, * Shutdown the active I_T nexus if 'NULL' is passed. */ if (!strncmp(page, "NULL", 4)) { - ret = scsiback_drop_nexus(tpg); + ret = scsiback_drop_nexus(se_tpg); return (!ret) ? count : ret; } /* @@ -1776,7 +1783,7 @@ static void scsiback_drop_tpg(struct se_portal_group *se_tpg) /* * Release the virtual I_T Nexus for this xen-pvscsi TPG */ - scsiback_drop_nexus(tpg); + scsiback_drop_nexus(se_tpg); /* * Deregister the se_tpg from TCM. 
*/ @@ -1814,6 +1821,7 @@ static int scsiback_check_false(struct se_portal_group *se_tpg) .queue_status = scsiback_queue_status, .queue_tm_rsp = scsiback_queue_tm_rsp, .aborted_task = scsiback_aborted_task, + .close_session = scsiback_close_session, /* * Setup callers for generic logic in target_core_fabric_configfs.c */ From patchwork Sat Jun 27 04:35:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Christie X-Patchwork-Id: 11629159 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E1E5D1731 for ; Sat, 27 Jun 2020 04:35:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C1A302089D for ; Sat, 27 Jun 2020 04:35:31 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b="HyK49OAf" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1725963AbgF0Ef2 (ORCPT ); Sat, 27 Jun 2020 00:35:28 -0400 Received: from userp2130.oracle.com ([156.151.31.86]:39078 "EHLO userp2130.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725936AbgF0Ef1 (ORCPT ); Sat, 27 Jun 2020 00:35:27 -0400 Received: from pps.filterd (userp2130.oracle.com [127.0.0.1]) by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 05R4XVWm118765; Sat, 27 Jun 2020 04:35:18 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : subject : date : message-id : in-reply-to : references; s=corp-2020-01-29; bh=UqVh+Qq45m1qyMQYABqYNg5keIO3mjWJqXi6V/qtmRI=; b=HyK49OAfQsQiClDk15t9l8k5s/TepXLFf87Azz7K7GjRSSDiIhdxDvQq+Kc+rNA1mWnf 6Uws4Gof8alkX1PyIOZejJM4A0e4plZlpFfvoQuClBifb97JGAhUw0Waxgsi6MBt2vpz 0is0zpd44r4p+72Oj11X4172CqAbMlhVMtZWwQ0YK7rBPPYbT+DoAyhbx7VlzgWQok42 FIE4DCdo0YLka/QWbKEcceuJthGv7WoivvmVO2zHmPaYSmmjlu0iiYBTsgxJgHeOgUeO nj4brM4vnzqXsG9CqT+2gBGvX2X9n+OnIyEkQjaHcWjugjDhr+lxvpMeaLtYAWT3sRC9 dA== Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71]) by userp2130.oracle.com with ESMTP id 31wwhr83sn-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL); Sat, 27 Jun 2020 04:35:18 +0000 Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1]) by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 05R4Y9M9121984; Sat, 27 Jun 2020 04:35:17 GMT Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72]) by aserp3030.oracle.com with ESMTP id 31wv58v8as-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Sat, 27 Jun 2020 04:35:17 +0000 Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20]) by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 05R4ZGlw010413; Sat, 27 Jun 2020 04:35:16 GMT Received: from ol2.localdomain (/73.88.28.6) by default (Oracle Beehive Gateway v4.0) with ESMTP ; Sat, 27 Jun 2020 04:35:16 +0000 From: Mike Christie To: hare@suse.de, bvanassche@acm.org, bstroesser@ts.fujitsu.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org Subject: [RFC PATCH 09/10] target: add helper to close session synchronously Date: Fri, 26 Jun 2020 23:35:08 -0500 Message-Id: <1593232509-13720-10-git-send-email-michael.christie@oracle.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1593232509-13720-1-git-send-email-michael.christie@oracle.com> References: 
<1593232509-13720-1-git-send-email-michael.christie@oracle.com> X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9664 signatures=668680 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 suspectscore=2 bulkscore=0 mlxscore=0 mlxlogscore=999 malwarescore=0 spamscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000 definitions=main-2006270030 X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9664 signatures=668680 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 malwarescore=0 phishscore=0 priorityscore=1501 clxscore=1015 cotscore=-2147483648 mlxscore=0 adultscore=0 lowpriorityscore=0 impostorscore=0 bulkscore=0 spamscore=0 suspectscore=2 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000 definitions=main-2006270030 Sender: target-devel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: target-devel@vger.kernel.org We need to be able to delete sessions from userspace like is done for the ACL path or is done for other objects like tpg, lun, etc. This patch adds a helper function that calls the fabric module close_session callback and then waits for the removal to complete. It will be used in the next patch so userspace can remove sessions before deleting TPGs/ACLs. Signed-off-by: Mike Christie --- drivers/target/target_core_internal.h | 1 + drivers/target/target_core_transport.c | 90 ++++++++++++++++++++++++++++++++++ include/target/target_core_base.h | 2 + 3 files changed, 93 insertions(+) diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h index e92dcf2..0af3844 100644 --- a/drivers/target/target_core_internal.h +++ b/drivers/target/target_core_internal.h @@ -155,6 +155,7 @@ void transport_dump_dev_info(struct se_device *, struct se_lun *, bool target_check_fua(struct se_device *dev); void __target_execute_cmd(struct se_cmd *, bool); void target_release_session(struct se_session *); +int target_close_session_sync(struct se_portal_group *, int); /* target_core_stat.c */ void target_stat_setup_dev_default_groups(struct se_device *); diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c index 8d11a8c..942b0c5 100644 --- a/drivers/target/target_core_transport.c +++ b/drivers/target/target_core_transport.c @@ -650,6 +650,7 @@ static int target_release_res(struct se_device *dev, void *data) void transport_deregister_session(struct se_session *se_sess) { struct se_portal_group *se_tpg = se_sess->se_tpg; + struct completion *removal_comp; unsigned long flags; if (!se_tpg) { @@ -660,6 +661,8 @@ void transport_deregister_session(struct se_session *se_sess) spin_lock_irqsave(&se_tpg->session_lock, flags); list_del(&se_sess->sess_list); se_sess->se_tpg = NULL; + removal_comp = se_sess->removal_comp; + se_sess->removal_comp = NULL; spin_unlock_irqrestore(&se_tpg->session_lock, flags); /* @@ -680,11 +683,98 @@ void transport_deregister_session(struct se_session *se_sess) */ transport_free_session(se_sess); + + if (removal_comp) + complete(removal_comp); } EXPORT_SYMBOL(transport_deregister_session); +/** + * target_close_session_sync - Request fabric remove session and wait removal + * @se_tpg: se_portal_group that is the parent of the sess to remove + * @sid: session id + */ +int target_close_session_sync(struct se_portal_group *se_tpg, int sid) +{ + DECLARE_COMPLETION_ONSTACK(removal_comp); + struct se_session *se_sess; + unsigned long flags; + int ret; + +retry: + 
spin_lock_irqsave(&se_tpg->session_lock, flags); + list_for_each_entry(se_sess, &se_tpg->tpg_sess_list, sess_list) { + if (se_sess->sid == sid) { + config_group_get(&se_sess->group); + goto found; + } + } + spin_unlock_irqrestore(&se_tpg->session_lock, flags); + return -ENODEV; + +found: + if (!se_sess->tfo->close_session) { + pr_err("Session %d does not support configfs session removal.", + se_sess->sid); + return -EINVAL; + } + + if (se_sess->sess_tearing_down || se_sess->sess_remove_running || + se_sess->removal_comp) { + spin_unlock_irqrestore(&se_tpg->session_lock, flags); + config_group_put(&se_sess->group); + /* + * Either the transport started a removal already or another + * caller of this function did. Wait for it to be torn down, + * so caller knows it's safe to proceed with operations like + * parent removals when this returns. + */ + msleep(250); + goto retry; + } + + se_sess->removal_comp = &removal_comp; + pr_debug("Closing session-%d\n", se_sess->sid); + spin_unlock_irqrestore(&se_tpg->session_lock, flags); + + ret = se_sess->tfo->close_session(se_sess); + if (ret < 0) { + pr_debug("Close for session-%d failed %d\n", se_sess->sid, ret); + if (ret != -ENODEV) { + spin_lock_irqsave(&se_tpg->session_lock, flags); + se_sess->removal_comp = NULL; + spin_unlock_irqrestore(&se_tpg->session_lock, flags); + goto put_sess; + } + /* + * Raced with fabric specific nexus interface, but we set our + * compeltion before they called target_remove_session, so we + * can just wait below for them to call complete. + */ + } + + wait_for_completion(&removal_comp); + +put_sess: + config_group_put(&se_sess->group); + return ret; +} + void target_remove_session(struct se_session *se_sess) { + struct se_portal_group *se_tpg = se_sess->se_tpg; + unsigned long flags; + + pr_debug("Removing session-%d\n", se_sess->sid); + + spin_lock_irqsave(&se_tpg->session_lock, flags); + if (se_sess->sess_remove_running) { + spin_unlock_irqrestore(&se_tpg->session_lock, flags); + return; + } + se_sess->sess_remove_running = 1; + spin_unlock_irqrestore(&se_tpg->session_lock, flags); + transport_deregister_session_configfs(se_sess); transport_deregister_session(se_sess); } diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h index d6aca0c..690fff2 100644 --- a/include/target/target_core_base.h +++ b/include/target/target_core_base.h @@ -609,12 +609,14 @@ static inline struct se_node_acl *fabric_stat_to_nacl(struct config_item *item) struct se_session { unsigned sess_tearing_down:1; + unsigned sess_remove_running:1; u64 sess_bin_isid; enum target_prot_op sup_prot_ops; enum target_prot_type sess_prot_type; struct se_node_acl *se_node_acl; struct se_portal_group *se_tpg; void *fabric_sess_ptr; + struct completion *removal_comp; struct percpu_ref cmd_count; struct list_head sess_list; struct list_head sess_acl_list; From patchwork Sat Jun 27 04:35:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Mike Christie X-Patchwork-Id: 11629151 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1B86A912 for ; Sat, 27 Jun 2020 04:35:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F16C420857 for ; Sat, 27 Jun 2020 04:35:29 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com 
header.b="Hb44LLRh" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1725936AbgF0Ef3 (ORCPT ); Sat, 27 Jun 2020 00:35:29 -0400 Received: from aserp2120.oracle.com ([141.146.126.78]:42108 "EHLO aserp2120.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725900AbgF0Ef1 (ORCPT ); Sat, 27 Jun 2020 00:35:27 -0400 Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1]) by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 05R4W2Lq044988; Sat, 27 Jun 2020 04:35:18 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-type : content-transfer-encoding; s=corp-2020-01-29; bh=iZJvKGQ3yyFMZyhm/t9A/kwXuZeEZcFLExLOfYS6SHk=; b=Hb44LLRhHzVFlX76oEzWwbwMFKCnd/VrG3OXH5siFOcTv/LWJDCHbmldzjTlUF9FTpHp b3+P9NV/ECG0XoK5kpet60fbhdfOFosWrVP/L0EEY4adeU/+/tPg1AeZZyNpEI3KlFQa 6/tvxSe0vlrvl8cA6UlZnp+O343Kabm28maDMVNzGmSscNu5Ra6xTBsEPsA7/piw15Ng Om7qIfznlJaLVRDeTge3gQzhmET1vtUWd59jqcIKtPBlggmu2KA+ZFHkO756W83qdyJ7 mCWbLEF7jiSujpjL++BJW9pioWDqwybKccSiCao3xFitSA8AwocYkBTFJJcM060N40mM MQ== Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70]) by aserp2120.oracle.com with ESMTP id 31wx2m82g7-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL); Sat, 27 Jun 2020 04:35:17 +0000 Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1]) by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 05R4YMdk050735; Sat, 27 Jun 2020 04:35:17 GMT Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236]) by aserp3020.oracle.com with ESMTP id 31wwwyvnqp-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Sat, 27 Jun 2020 04:35:17 +0000 Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20]) by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 05R4ZHYU030608; Sat, 27 Jun 2020 04:35:17 GMT Received: from ol2.localdomain (/73.88.28.6) by default (Oracle Beehive Gateway v4.0) with ESMTP ; Sat, 27 Jun 2020 04:35:16 +0000 From: Mike Christie To: hare@suse.de, bvanassche@acm.org, bstroesser@ts.fujitsu.com, martin.petersen@oracle.com, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org Subject: [RFC PATCH 10/10] target: export sessions via configfs Date: Fri, 26 Jun 2020 23:35:09 -0500 Message-Id: <1593232509-13720-11-git-send-email-michael.christie@oracle.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1593232509-13720-1-git-send-email-michael.christie@oracle.com> References: <1593232509-13720-1-git-send-email-michael.christie@oracle.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9664 signatures=668680 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 suspectscore=0 adultscore=0 mlxscore=0 spamscore=0 bulkscore=0 malwarescore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000 definitions=main-2006270030 X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9664 signatures=668680 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 mlxscore=0 lowpriorityscore=0 bulkscore=0 mlxlogscore=999 clxscore=1015 adultscore=0 impostorscore=0 cotscore=-2147483648 priorityscore=1501 malwarescore=0 phishscore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000 definitions=main-2006270030 Sender: target-devel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: target-devel@vger.kernel.org 
This patch exports LIO sessions via configfs. If userspace makes a "sessions" dir under the ACL or TPG dir, to indicate to the kernel that it supports the new interface on that TPG, then the kernel will make a dir per session under tpg/sessions or tpg/acls/$acl/sessions. It works similarly to how some targets export their session info today: if it is a dynamic session it goes under the tpg dir, and if there is an ACL it goes under the ACL's sessions dir. The name of the dir is "session-$sid".

qla2xxx example. For ACL based sessions:

├── 21:00:00:24:ff:46:b8:88
│   ├── fabric_statistics
│   └── tpgt_1
│   ├── acls
│   │   └── 21:00:00:24:ff:46:b8:8a
│   │   └── sessions
│   │   └── session-1

or for a dynamic session it would be in the tpg dir:

.....
│   ├── param
│   └── sessions
│   └── session-1

There is currently nothing in the session-$sid dir.

To make the RFC easier to read I did not post the transport ID patches or the iscsi conversion one, but I will include them in the final send.

Note/Warning: the interface has 2 quirks:

1. It works similarly to the loop/vhost/usb/xen nexus file interface: instead of a rmdir to delete the session, you write to a special file. For this new interface that file is:

/fabric/target/tpgt/sessions/remove_session

2. Because the kernel creates the session, there is no mkdir/rmdir support for each session like there is for other objects such as LUN, tpg, target, np, etc. But before we remove the parent tpg, we still have to remove the child sessions. This gives configfs the behavior it expects, where parents cannot be removed before children, and we will not hit the issues we hit before. To signal that this new requirement is supported, userspace must do mkdir "sessions" on the tpg/acl to create the root sessions dir that will contain the individual sessions. See this rtslib patch:

https://github.com/mikechristie/rtslib-fb/commit/4af906d2955b739c0585d81b4b1a0d498cc2f663

If userspace does not do a mkdir "sessions" on the tpg, then the old behavior is kept (we just don't register the session in configfs) for that tpg.
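[Editorial note] As a hypothetical walk-through of the flow described above, using the qla2xxx paths from the example (the exact configfs paths and session id are illustrative):

	# Opt in to the new interface for this TPG and/or per ACL:
	mkdir /sys/kernel/config/target/qla2xxx/21:00:00:24:ff:46:b8:88/tpgt_1/sessions
	mkdir /sys/kernel/config/target/qla2xxx/21:00:00:24:ff:46:b8:88/tpgt_1/acls/21:00:00:24:ff:46:b8:8a/sessions

	# The kernel now creates session-$sid dirs as initiators log in:
	ls /sys/kernel/config/target/qla2xxx/21:00:00:24:ff:46:b8:88/tpgt_1/acls/21:00:00:24:ff:46:b8:8a/sessions
	session-1

	# Before removing the ACL or TPG, tear down the child session by
	# writing its sid to the TPG's remove_session file:
	echo 1 > /sys/kernel/config/target/qla2xxx/21:00:00:24:ff:46:b8:88/tpgt_1/sessions/remove_session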
Signed-off-by: Mike Christie
---
 drivers/target/target_core_fabric_configfs.c | 187 +++++++++++++++++++++++++--
 drivers/target/target_core_transport.c       |   5 +
 include/target/target_core_base.h            |   4 +
 include/target/target_core_fabric.h          |   4 +-
 4 files changed, 189 insertions(+), 11 deletions(-)

diff --git a/drivers/target/target_core_fabric_configfs.c b/drivers/target/target_core_fabric_configfs.c
index 2b70bdf..3c1288b 100644
--- a/drivers/target/target_core_fabric_configfs.c
+++ b/drivers/target/target_core_fabric_configfs.c
@@ -322,15 +322,45 @@ static struct config_group *target_fabric_make_mappedlun(
         return ERR_PTR(ret);
 }
 
-static void target_fabric_drop_mappedlun(
-        struct config_group *group,
-        struct config_item *item)
+static struct config_item_type target_nacl_sess_type = {
+        .ct_owner = THIS_MODULE,
+};
+
+static struct config_group *
+target_make_nacl_sess_group(struct config_group *group)
 {
-        struct se_lun_acl *lacl = container_of(to_config_group(item),
-                        struct se_lun_acl, se_lun_group);
+        struct se_node_acl *se_nacl = container_of(group, struct se_node_acl,
+                                                   acl_group);
+        struct se_portal_group *se_tpg = se_nacl->se_tpg;
+
+        config_group_init_type_name(&se_nacl->acl_sess_group, "sessions",
+                                    &target_nacl_sess_type);
+        se_tpg->cfgfs_sess_supp = true;
+
+        return &se_nacl->acl_sess_group;
+}
+
+static struct config_group *target_make_nacl_group(struct config_group *group,
+                                                   const char *name)
+{
+        if (!strcmp(name, "sessions")) {
+                return target_make_nacl_sess_group(group);
+        } else {
+                return target_fabric_make_mappedlun(group, name);
+        }
+}
 
-        configfs_remove_default_groups(&lacl->ml_stat_grps.stat_group);
-        configfs_remove_default_groups(&lacl->se_lun_group);
+static void target_drop_nacl_group(struct config_group *group,
+                                   struct config_item *item)
+{
+        struct se_lun_acl *lacl;
+
+        if (strstr(config_item_name(item), "lun_")) {
+                lacl = container_of(to_config_group(item), struct se_lun_acl,
+                                    se_lun_group);
+                configfs_remove_default_groups(&lacl->ml_stat_grps.stat_group);
+                configfs_remove_default_groups(&lacl->se_lun_group);
+        }
 
         config_item_put(item);
 }
@@ -349,8 +379,8 @@ static void target_fabric_nacl_base_release(struct config_item *item)
 };
 
 static struct configfs_group_operations target_fabric_nacl_base_group_ops = {
-        .make_group = target_fabric_make_mappedlun,
-        .drop_item = target_fabric_drop_mappedlun,
+        .make_group = target_make_nacl_group,
+        .drop_item = target_drop_nacl_group,
 };
 
 TF_CIT_SETUP_DRV(tpg_nacl_base, &target_fabric_nacl_base_item_ops,
@@ -799,6 +829,8 @@ static void target_fabric_drop_lun(
 TF_CIT_SETUP_DRV(tpg_auth, NULL, NULL);
 TF_CIT_SETUP_DRV(tpg_param, NULL, NULL);
 
+/* Start of tfc_tpg_session_cit */
+
 static void target_cfgfs_sess_release(struct config_item *item)
 {
         struct se_session *se_sess = container_of(to_config_group(item),
@@ -832,6 +864,82 @@ int target_cfgfs_init_session(struct se_session *se_sess)
         return 0;
 }
 
+int target_cfgfs_register_session(struct se_portal_group *se_tpg,
+                                  struct se_session *se_sess)
+{
+        struct se_node_acl *se_nacl;
+        int ret;
+
+        /*
+         * If the fabric doesn't support close_session, there's no way for
+         * userspace to clean up the session during nacl/tpg deletion.
+         */
+        if (!se_tpg->cfgfs_sess_supp || !se_tpg->se_tpg_tfo->close_session)
+                return 0;
+
+        se_nacl = se_sess->se_node_acl;
+        if (se_nacl->dynamic_node_acl) {
+                ret = configfs_register_group(&se_tpg->tpg_sess_group,
+                                              &se_sess->group);
+        } else {
+                ret = configfs_register_group(&se_nacl->acl_sess_group,
+                                              &se_sess->group);
+        }
+        if (ret)
+                goto fail;
+
+        /*
+         * The session is not created via a mkdir like other objects. A
+         * transport event like a login or userspace used the nexus file to
+         * initiate creation. However, we want the same behavior as other
+         * objects where we must remove the children before removing the
+         * parent dir, so do a depend on the parent that is released when the
+         * session is removed.
+         */
+        if (se_nacl->dynamic_node_acl) {
+                ret = target_depend_item(&se_tpg->tpg_sess_group.cg_item);
+        } else {
+                ret = target_depend_item(&se_nacl->acl_sess_group.cg_item);
+        }
+        if (ret)
+                goto unreg_cfgfs;
+
+        se_sess->added_to_cfgfs = true;
+        return 0;
+
+unreg_cfgfs:
+        configfs_unregister_group(&se_sess->group);
+fail:
+        pr_err("Could not register session dir %d. Error %d.\n", se_sess->sid,
+               ret);
+        return ret;
+}
+EXPORT_SYMBOL_GPL(target_cfgfs_register_session);
+
+void target_cfgfs_unregister_session(struct se_session *se_sess)
+{
+        struct se_node_acl *se_nacl;
+
+        /*
+         * The session attr interface may not be enabled and discovery
+         * sessions are not registered.
+         */
+        if (!se_sess->added_to_cfgfs)
+                return;
+
+        configfs_unregister_group(&se_sess->group);
+
+        se_nacl = se_sess->se_node_acl;
+        if (se_nacl->dynamic_node_acl) {
+                target_undepend_item(&se_sess->se_tpg->tpg_sess_group.cg_item);
+        } else {
+                target_undepend_item(&se_nacl->acl_sess_group.cg_item);
+        }
+}
+EXPORT_SYMBOL_GPL(target_cfgfs_unregister_session);
+
+/* End of tfc_tpg_session_cit */
+
 /* Start of tfc_tpg_base_cit */
 
 static void target_fabric_tpg_release(struct config_item *item)
@@ -848,7 +956,66 @@ static void target_fabric_tpg_release(struct config_item *item)
         .release = target_fabric_tpg_release,
 };
 
-TF_CIT_SETUP_DRV(tpg_base, &target_fabric_tpg_base_item_ops, NULL);
+static ssize_t target_tpg_remove_session_store(struct config_item *item,
+                const char *page, size_t count)
+{
+        struct se_portal_group *se_tpg = container_of(to_config_group(item),
+                                                      struct se_portal_group,
+                                                      tpg_sess_group);
+        int ret, sid;
+
+        ret = kstrtoint(page, 10, &sid);
+        if (ret < 0)
+                return ret;
+
+        ret = target_close_session_sync(se_tpg, sid);
+        if (ret < 0)
+                return ret;
+
+        return count;
+}
+CONFIGFS_ATTR_WO(target_tpg_, remove_session);
+
+static struct configfs_attribute *target_tpg_sess_attrs[] = {
+        &target_tpg_attr_remove_session,
+        NULL,
+};
+
+static struct config_item_type target_tpg_sess_type = {
+        .ct_owner = THIS_MODULE,
+        .ct_attrs = target_tpg_sess_attrs,
+};
+
+static struct config_group *
+target_make_tpg_sess_group(struct config_group *group, const char *name)
+{
+        struct se_portal_group *se_tpg = container_of(group,
+                                                      struct se_portal_group,
+                                                      tpg_group);
+
+        if (strcmp(name, "sessions"))
+                return ERR_PTR(-EINVAL);
+
+        config_group_init_type_name(&se_tpg->tpg_sess_group, name,
+                                    &target_tpg_sess_type);
+        se_tpg->cfgfs_sess_supp = true;
+
+        return &se_tpg->tpg_sess_group;
+}
+
+static void target_drop_tpg_sess_group(struct config_group *group,
+                                       struct config_item *item)
+{
+        config_item_put(item);
+}
+
+static struct configfs_group_operations target_tpg_sess_group_ops = {
+        .make_group = target_make_tpg_sess_group,
+        .drop_item = target_drop_tpg_sess_group,
+};
+
+TF_CIT_SETUP_DRV(tpg_base, &target_fabric_tpg_base_item_ops,
+                 &target_tpg_sess_group_ops);
 
 /* End of tfc_tpg_base_cit */
 
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 942b0c5..87aac76 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -480,6 +480,10 @@ struct se_session *
                 rc = -EACCES;
                 goto free_session;
         }
+
+        rc = target_cfgfs_register_session(tpg, sess);
+        if (rc)
+                goto free_session;
         /*
          * Go ahead and perform any remaining fabric setup that is
          * required before transport_register_session().
@@ -775,6 +779,7 @@ void target_remove_session(struct se_session *se_sess)
         se_sess->sess_remove_running = 1;
         spin_unlock_irqrestore(&se_tpg->session_lock, flags);
 
+        target_cfgfs_unregister_session(se_sess);
         transport_deregister_session_configfs(se_sess);
         transport_deregister_session(se_sess);
 }
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index 690fff2..f78c1f4 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -571,6 +571,7 @@ struct se_node_acl {
         struct config_group acl_auth_group;
         struct config_group acl_param_group;
         struct config_group acl_fabric_stat_group;
+        struct config_group acl_sess_group;
         struct list_head acl_list;
         struct list_head acl_sess_list;
         struct completion acl_free_comp;
@@ -626,6 +627,7 @@ struct se_session {
         void *sess_cmd_map;
         struct sbitmap_queue sess_tag_pool;
         int sid;
+        bool added_to_cfgfs;
         struct config_group group;
         const struct target_core_fabric_ops *tfo;
 };
@@ -887,6 +889,7 @@ struct se_portal_group {
         /* Spinlock for adding/removing sessions */
         spinlock_t session_lock;
         struct mutex tpg_lun_mutex;
+        bool cfgfs_sess_supp;
         /* linked list for initiator ACL list */
         struct list_head acl_node_list;
         struct hlist_head tpg_lun_hlist;
@@ -903,6 +906,7 @@ struct se_portal_group {
         struct config_group tpg_attrib_group;
         struct config_group tpg_auth_group;
         struct config_group tpg_param_group;
+        struct config_group tpg_sess_group;
 };
 
 static inline struct se_portal_group *to_tpg(struct config_item *item)
diff --git a/include/target/target_core_fabric.h b/include/target/target_core_fabric.h
index e200faa..1582455 100644
--- a/include/target/target_core_fabric.h
+++ b/include/target/target_core_fabric.h
@@ -154,7 +154,9 @@ void transport_register_session(struct se_portal_group *,
 void target_put_nacl(struct se_node_acl *);
 void transport_deregister_session_configfs(struct se_session *);
 void transport_deregister_session(struct se_session *);
-
+int target_cfgfs_register_session(struct se_portal_group *,
+                                  struct se_session *);
+void target_cfgfs_unregister_session(struct se_session *);
 void transport_init_se_cmd(struct se_cmd *,
                 const struct target_core_fabric_ops *,