From patchwork Tue Sep 12 04:26:27 2017
X-Patchwork-Submitter: Anish M Jhaveri <anish.jhaveri@paviliondata.com>
X-Patchwork-Id: 9948365
Date: Mon, 11 Sep 2017 21:26:27 -0700
From: Anish M Jhaveri <anish.jhaveri@paviliondata.com>
To: sagi@grimberg.me, hch@lst.de, keith.busch@intel.com, axboe@kernel.dk
Cc: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: [PATCH 06/10] Init multipath head namespace.
Message-ID: <20170912042627.2ut7onndfmeqxd5r@haynes>

Initialize the multipath head namespace. In the same way that a generic
namespace is related to a generic controller, a multipath head namespace is
related to a multipath head controller. It is initialized as part of
enumerating a single shared namespace. The bio list and congestion wait queue
are initialized for the given multipath head namespace; this queue is used
during failover to requeue any I/O returned from the active path for
resubmission.
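
As an illustration only (not part of this patch), a requeue helper built on
that congestion list could look roughly like the sketch below. It assumes the
fq_cong and fq_full fields initialized further down in this patch, the
driver's existing includes, and a hypothetical lock protecting the bio list:

/*
 * Illustrative sketch, not part of this patch: park a bio returned from a
 * failing path on the head namespace's congestion bio list so the multipath
 * thread can resubmit it.  "mpath_requeue_lock" is hypothetical; fq_cong and
 * fq_full are the fields set up in nvme_alloc_mpath_ns() below.
 */
static DEFINE_SPINLOCK(mpath_requeue_lock);	/* hypothetical lock */

static void nvme_mpath_requeue_bio(struct nvme_ns *mpath_ns, struct bio *bio)
{
	unsigned long flags;

	spin_lock_irqsave(&mpath_requeue_lock, flags);
	bio_list_add(&mpath_ns->fq_cong, bio);	/* queue for resubmission */
	spin_unlock_irqrestore(&mpath_requeue_lock, flags);

	wake_up(&mpath_ns->fq_full);		/* kick the multipath thread */
}
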
nvme_mpath_flush_io_work is implemented to handle the case where both
interfaces are down and I/O is queued up on the head multipath device's
congestion queue. Functionality is added to add and delete a namespace to and
from the head multipath namespace list; this logic is similar to the namespace
list we keep per controller. On deletion of a namespace, check whether it was
the last namespace belonging to the multipath head device and, if so, remove
the head multipath namespace.

Signed-off-by: Anish M Jhaveri <anish.jhaveri@paviliondata.com>
---
 drivers/nvme/host/core.c | 233 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 233 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2987e0a..cefa506 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2436,6 +2436,239 @@ static int nvme_setup_streams_ns(struct nvme_ctrl *ctrl, struct nvme_ns *ns)
 	return 0;
 }
 
+static struct nvme_ns *nvme_find_get_mpath_ns(struct nvme_ctrl *ctrl)
+{
+	struct nvme_ns *ns = NULL;
+	mutex_lock(&ctrl->namespaces_mutex);
+	list_for_each_entry(ns, &ctrl->mpath_namespace, list) {
+		if (ns)
+			break;
+	}
+	mutex_unlock(&ctrl->namespaces_mutex);
+	return ns;
+}
+
+/* Adding namespace to multipath list under multipath controller */
+static void nvme_add_ns_mpath_ctrl(struct nvme_ns *ns)
+{
+	struct nvme_ns *mpath_ns = NULL;
+	mpath_ns = nvme_find_get_mpath_ns(ns->mpath_ctrl);
+	mutex_lock(&ns->mpath_ctrl->namespaces_mutex);
+	list_add_tail(&ns->mpathlist, &ns->mpath_ctrl->namespaces);
+	test_and_set_bit(NVME_CTRL_MPATH_CHILD, &ns->ctrl->flags);
+	test_and_set_bit(NVME_NS_MULTIPATH, &ns->flags);
+	mutex_unlock(&ns->mpath_ctrl->namespaces_mutex);
+	kref_get(&mpath_ns->kref);
+}
+
+/* Deleting namespace from multipath list under multipath controller */
+static int nvme_del_ns_mpath_ctrl(struct nvme_ns *ns)
+{
+	struct nvme_ns *mpath_ns = NULL, *nsa = NULL, *next;
+
+	if (!ns->mpath_ctrl)
+		return NVME_NO_MPATH_NS_AVAIL;
+	mpath_ns = nvme_find_get_mpath_ns(ns->mpath_ctrl);
+	mutex_lock(&mpath_ns->ctrl->namespaces_mutex);
+	test_and_clear_bit(NVME_NS_MULTIPATH, &ns->flags);
+	list_del_init(&ns->mpathlist);
+	list_for_each_entry_safe(nsa, next, &mpath_ns->ctrl->namespaces, mpathlist) {
+		if (nsa == ns) {
+			list_del_init(&ns->mpathlist);
+			continue;
+		}
+	}
+	mutex_unlock(&mpath_ns->ctrl->namespaces_mutex);
+
+	/*
+	 * Check if we were the last device to a given head or parent device.
+	 * If last device then remove head device also.
+	 */
+	if (mpath_ns == nvme_get_ns_for_mpath_ns(mpath_ns)) {
+		nvme_put_ns(mpath_ns);
+		nvme_mpath_ns_remove(mpath_ns);
+		/* cancel delayed work as we are the last device */
+		cancel_delayed_work_sync(&ns->mpath_ctrl->cu_work);
+		return NVME_NO_MPATH_NS_AVAIL;
+	} else {
+		blk_mq_freeze_queue(ns->disk->queue);
+		set_capacity(ns->disk, 0);
+		blk_mq_unfreeze_queue(ns->disk->queue);
+		revalidate_disk(ns->disk);
+		nvme_put_ns(mpath_ns);
+		return NVME_MPATH_NS_AVAIL;
+	}
+}
+
+static struct nvme_ns *nvme_alloc_mpath_ns(struct nvme_ns *nsa)
+{
+	struct gendisk *disk;
+	struct nvme_id_ns *id;
+	char disk_name[DISK_NAME_LEN];
+	char devpath[DISK_NAME_LEN+4];
+	struct nvme_ctrl *ctrl = NULL;
+	struct nvme_ns *ns = NULL;
+	int node;
+
+	ctrl = nvme_init_mpath_ctrl(nsa->ctrl);
+	if (!ctrl)
+		return NULL;
+
+	node = dev_to_node(ctrl->dev);
+	ns = kzalloc_node(sizeof(*ns), GFP_KERNEL, node);
+	if (!ns)
+		goto out_free_ctrl;
+	ns->ctrl = ctrl;
+	ns->instance = ida_simple_get(&ns->ctrl->ns_ida, 1, 0, GFP_KERNEL);
+	if (ns->instance < 0)
+		goto out_free_ns;
+
+	ns->queue = blk_alloc_queue(GFP_KERNEL);
+	if (IS_ERR(ns->queue))
+		goto out_release_instance;
+
+	blk_queue_make_request(ns->queue, nvme_mpath_make_request);
+
+	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, ns->queue);
+	ns->queue->queuedata = ns;
+	kref_init(&ns->kref);
+	ns->ns_id = nsa->ns_id;
+	ns->lba_shift = 9; /* set to a default value for 512 until disk is validated */
+
+	test_and_set_bit(NVME_NS_ROOT, &ns->flags);
+	blk_queue_logical_block_size(ns->queue, 1 << ns->lba_shift);
+	nvme_set_queue_limits(ctrl, ns->queue);
+	blk_queue_rq_timeout(ns->queue, mpath_io_timeout * HZ);
+	sprintf(disk_name, "mpnvme%dn%d", ctrl->instance, ns->instance);
+	sprintf(devpath, "/dev/mpnvme%dn%d", ctrl->instance, ns->instance);
+	if (nvme_revalidate_ns(nsa, &id))
+		goto out_free_queue;
+
+	disk = alloc_disk_node(0, node);
+	if (!disk)
+		goto out_free_id;
+
+	disk->fops = &nvme_fops;
+	disk->private_data = ns;
+	disk->queue = ns->queue;
+	disk->flags = GENHD_FL_EXT_DEVT;
+	memcpy(disk->disk_name, disk_name, DISK_NAME_LEN);
+	ns->disk = disk;
+	__nvme_revalidate_disk(disk, id);
+	init_waitqueue_head(&ns->fq_full);
+	init_waitqueue_entry(&ns->fq_cong_wait, nvme_mpath_thread);
+	bio_list_init(&ns->fq_cong);
+	nsa->mpath_ctrl = ns->ctrl;
+	nsa->ctrl->mpath_ctrl = (void *)ns->ctrl;
+	mutex_lock(&ctrl->namespaces_mutex);
+	list_add_tail(&ns->list, &ctrl->mpath_namespace);
+	mutex_unlock(&ctrl->namespaces_mutex);
+	nvme_add_ns_mpath_ctrl(nsa);
+
+	memcpy(&ns->mpath_nguid, &nsa->mpath_nguid, NVME_NIDT_NGUID_LEN);
+	kref_get(&ns->ctrl->kref);
+
+	device_add_disk(ctrl->device, ns->disk);
+
+	if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
+				&nvme_ns_attr_group)) {
+		pr_warn("%s: failed to create sysfs group for identification\n",
+			ns->disk->disk_name);
+		goto out_del_gendisk;
+	}
+
+	ns->bdev = blkdev_get_by_path(devpath,
+				FMODE_READ | FMODE_WRITE, NULL);
+	if (IS_ERR(ns->bdev)) {
+		pr_warn("%s: failed to get block device\n",
+			ns->disk->disk_name);
+		goto out_sysfs_remove_group;
+	}
+
+	kfree(id);
+
+	if (nvme_set_ns_active(nsa, ns, NVME_FAILOVER_RETRIES)) {
+		pr_info("%s:%d Failed to set active Namespace nvme%dn%d\n",
+			__FUNCTION__, __LINE__, nsa->ctrl->instance, nsa->instance);
+	}
+
+	/* init delayed work for IO cleanup when both iface are down */
+	INIT_DELAYED_WORK(&ctrl->cu_work, nvme_mpath_flush_io_work);
+	return ns;
+
+ out_sysfs_remove_group:
+	sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
+			&nvme_ns_attr_group);
+ out_del_gendisk:
+	del_gendisk(ns->disk);
+	mutex_lock(&ctrl->namespaces_mutex);
+	test_and_clear_bit(NVME_NS_MULTIPATH, &nsa->flags);
+	list_del_init(&nsa->mpathlist);
+	mutex_unlock(&ctrl->namespaces_mutex);
+	nsa->mpath_ctrl = NULL;
+	nsa->ctrl->mpath_ctrl = NULL;
+ out_free_id:
+	kfree(id);
+ out_free_queue:
+	blk_cleanup_queue(ns->queue);
+ out_release_instance:
+	ida_simple_remove(&ctrl->ns_ida, ns->instance);
+ out_free_ns:
+	kfree(ns);
+ out_free_ctrl:
+	device_destroy(nvme_class, MKDEV(nvme_char_major, ctrl->instance));
+	spin_lock(&dev_list_lock);
+	list_del(&ctrl->node);
+	spin_unlock(&dev_list_lock);
+	nvme_put_ctrl(ctrl);
+	return NULL;
+}
+
+static void nvme_shared_ns(struct nvme_ns *shared_ns)
+{
+	struct nvme_ctrl *ctrl = NULL;
+	struct nvme_ns *ns, *ret = NULL;
+
+	/*
+	 * Check if the namespace is shared and another namespace with the
+	 * same serial number exists among the controllers.
+	 */
+	spin_lock(&dev_list_lock);
+	list_for_each_entry(ctrl, &nvme_ctrl_list, node) {
+		list_for_each_entry(ns, &ctrl->namespaces, list) {
+			if (ns == shared_ns)
+				continue;
+			/*
+			 * Revalidating a dead namespace sets capacity to 0. This will
+			 * end buffered writers dirtying pages that can't be synced.
+			 */
+			if (!ns->disk || test_bit(NVME_NS_DEAD, &ns->flags))
+				continue;
+
+			if (!strncmp(ns->nguid, shared_ns->nguid, NVME_NIDT_NGUID_LEN)) {
+				if (test_bit(NVME_NS_MULTIPATH, &ns->flags)) {
+					shared_ns->mpath_ctrl = ns->mpath_ctrl;
+					shared_ns->ctrl->mpath_ctrl = (void *)ns->mpath_ctrl;
+					ret = shared_ns;
+				} else {
+					ret = ns;
+				}
+				goto found_ns;
+			}
+		}
+	}
+	spin_unlock(&dev_list_lock);
+
+	if (shared_ns->nmic & 0x1) {
+		shared_ns->active = 1;
+		nvme_alloc_mpath_ns(shared_ns);
+	}
+	return;
+ found_ns:
+	spin_unlock(&dev_list_lock);
+	if (ret == shared_ns)
+		nvme_add_ns_mpath_ctrl(shared_ns);
+}
+
 static struct nvme_ns *nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 {
 	struct nvme_ns *ns;
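
Not part of the hunk above: the commit message refers to nvme_mpath_flush_io_work(),
which this patch only wires up via INIT_DELAYED_WORK(&ctrl->cu_work, ...). A minimal
sketch of what such a handler could look like, assuming it simply drains the head
namespace's fq_cong list when no path comes back and that any locking around the
list is handled elsewhere:

/*
 * Illustrative sketch, not part of this patch: delayed work handler for the
 * cu_work item initialized above.  If both interfaces stay down, bios parked
 * on the head namespace's fq_cong list cannot be resubmitted, so complete
 * them with an error.  nvme_find_get_mpath_ns() and fq_cong come from this
 * patch; the absence of explicit locking here is an assumption.
 */
static void nvme_mpath_flush_io_work(struct work_struct *work)
{
	struct nvme_ctrl *ctrl =
		container_of(to_delayed_work(work), struct nvme_ctrl, cu_work);
	struct nvme_ns *mpath_ns = nvme_find_get_mpath_ns(ctrl);
	struct bio *bio;

	if (!mpath_ns)
		return;

	/* Drain every parked bio and fail it back to the caller. */
	while ((bio = bio_list_pop(&mpath_ns->fq_cong)))
		bio_io_error(bio);
}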