From patchwork Tue Feb 22 07:01:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhu, Lingshan" X-Patchwork-Id: 12754592 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9318CC433EF for ; Tue, 22 Feb 2022 07:08:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230383AbiBVHJJ (ORCPT ); Tue, 22 Feb 2022 02:09:09 -0500 Received: from gmail-smtp-in.l.google.com ([23.128.96.19]:59382 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230262AbiBVHJF (ORCPT ); Tue, 22 Feb 2022 02:09:05 -0500 Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2DCAAB151C for ; Mon, 21 Feb 2022 23:08:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1645513720; x=1677049720; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=YCxJIQOwRYPK4xuIjQZtMjD1ZTaAQgwz8r39Lsxd1FQ=; b=LUlPA3bzUm/trZZOtjzn685+ZdTwAAMYoZJwu32cmBAXXGAqwpDbaboJ V7f0Y9lU4QSd2PqS0F+BBNWv0gCXo7PEcYXAOdbGjUtJxSCeDugjqI4cq kHPJof1btrDRpZ+ft6EDFqVbbz4srWr5aYagkANrnDnsnhO8n6vqFXhD+ kFYg+Ys+0SOnliok+AzOS3Mtau6KrQxIwQP4D6IbWv6YUtdVfGw955EVa vCJXK8ClafVS+FMVYGNI4HDmVIW2k5CsoUwTZV/s8sWPjn66N/hP6Ujjv 4nP3lTwizFtERSGsXxzJDkDtrMVKHjQI/X7rrL9z3lM4RA5OgkauzJWKV A==; X-IronPort-AV: E=McAfee;i="6200,9189,10265"; a="249207293" X-IronPort-AV: E=Sophos;i="5.88,387,1635231600"; d="scan'208";a="249207293" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 23:08:40 -0800 X-IronPort-AV: E=Sophos;i="5.88,387,1635231600"; d="scan'208";a="776207129" Received: from unknown (HELO cra01infra01.deacluster.intel.com) ([10.240.193.73]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 23:08:38 -0800 From: Zhu Lingshan To: mst@redhat.com, jasowang@redhat.com Cc: netdev@vger.kernel.org, virtualization@lists.linux-foundation.org, Zhu Lingshan Subject: [PATCH V5 1/5] vDPA/ifcvf: make use of virtio pci modern IO helpers in ifcvf Date: Tue, 22 Feb 2022 15:01:05 +0800 Message-Id: <20220222070109.931260-2-lingshan.zhu@intel.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20220222070109.931260-1-lingshan.zhu@intel.com> References: <20220222070109.931260-1-lingshan.zhu@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org This commit discards ifcvf_ioreadX()/writeX(), use virtio pci modern IO helpers instead Signed-off-by: Zhu Lingshan --- drivers/vdpa/ifcvf/ifcvf_base.c | 104 +++++++++++--------------------- drivers/vdpa/ifcvf/ifcvf_base.h | 1 + drivers/vdpa/ifcvf/ifcvf_main.c | 2 +- 3 files changed, 36 insertions(+), 71 deletions(-) diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c b/drivers/vdpa/ifcvf/ifcvf_base.c index 7d41dfe48ade..b9fdc5258611 100644 --- a/drivers/vdpa/ifcvf/ifcvf_base.c +++ b/drivers/vdpa/ifcvf/ifcvf_base.c @@ -10,42 +10,6 @@ #include "ifcvf_base.h" -static inline u8 ifc_ioread8(u8 __iomem *addr) -{ - return ioread8(addr); -} -static inline u16 ifc_ioread16 (__le16 __iomem *addr) -{ - return ioread16(addr); -} - -static inline u32 ifc_ioread32(__le32 __iomem 
*addr) -{ - return ioread32(addr); -} - -static inline void ifc_iowrite8(u8 value, u8 __iomem *addr) -{ - iowrite8(value, addr); -} - -static inline void ifc_iowrite16(u16 value, __le16 __iomem *addr) -{ - iowrite16(value, addr); -} - -static inline void ifc_iowrite32(u32 value, __le32 __iomem *addr) -{ - iowrite32(value, addr); -} - -static void ifc_iowrite64_twopart(u64 val, - __le32 __iomem *lo, __le32 __iomem *hi) -{ - ifc_iowrite32((u32)val, lo); - ifc_iowrite32(val >> 32, hi); -} - struct ifcvf_adapter *vf_to_adapter(struct ifcvf_hw *hw) { return container_of(hw, struct ifcvf_adapter, vf); @@ -158,11 +122,11 @@ int ifcvf_init_hw(struct ifcvf_hw *hw, struct pci_dev *pdev) return -EIO; } - hw->nr_vring = ifc_ioread16(&hw->common_cfg->num_queues); + hw->nr_vring = vp_ioread16(&hw->common_cfg->num_queues); for (i = 0; i < hw->nr_vring; i++) { - ifc_iowrite16(i, &hw->common_cfg->queue_select); - notify_off = ifc_ioread16(&hw->common_cfg->queue_notify_off); + vp_iowrite16(i, &hw->common_cfg->queue_select); + notify_off = vp_ioread16(&hw->common_cfg->queue_notify_off); hw->vring[i].notify_addr = hw->notify_base + notify_off * hw->notify_off_multiplier; hw->vring[i].notify_pa = hw->notify_base_pa + @@ -181,12 +145,12 @@ int ifcvf_init_hw(struct ifcvf_hw *hw, struct pci_dev *pdev) u8 ifcvf_get_status(struct ifcvf_hw *hw) { - return ifc_ioread8(&hw->common_cfg->device_status); + return vp_ioread8(&hw->common_cfg->device_status); } void ifcvf_set_status(struct ifcvf_hw *hw, u8 status) { - ifc_iowrite8(status, &hw->common_cfg->device_status); + vp_iowrite8(status, &hw->common_cfg->device_status); } void ifcvf_reset(struct ifcvf_hw *hw) @@ -214,11 +178,11 @@ u64 ifcvf_get_hw_features(struct ifcvf_hw *hw) u32 features_lo, features_hi; u64 features; - ifc_iowrite32(0, &cfg->device_feature_select); - features_lo = ifc_ioread32(&cfg->device_feature); + vp_iowrite32(0, &cfg->device_feature_select); + features_lo = vp_ioread32(&cfg->device_feature); - ifc_iowrite32(1, &cfg->device_feature_select); - features_hi = ifc_ioread32(&cfg->device_feature); + vp_iowrite32(1, &cfg->device_feature_select); + features_hi = vp_ioread32(&cfg->device_feature); features = ((u64)features_hi << 32) | features_lo; @@ -271,12 +235,12 @@ void ifcvf_read_dev_config(struct ifcvf_hw *hw, u64 offset, WARN_ON(offset + length > hw->config_size); do { - old_gen = ifc_ioread8(&hw->common_cfg->config_generation); + old_gen = vp_ioread8(&hw->common_cfg->config_generation); p = dst; for (i = 0; i < length; i++) - *p++ = ifc_ioread8(hw->dev_cfg + offset + i); + *p++ = vp_ioread8(hw->dev_cfg + offset + i); - new_gen = ifc_ioread8(&hw->common_cfg->config_generation); + new_gen = vp_ioread8(&hw->common_cfg->config_generation); } while (old_gen != new_gen); } @@ -289,18 +253,18 @@ void ifcvf_write_dev_config(struct ifcvf_hw *hw, u64 offset, p = src; WARN_ON(offset + length > hw->config_size); for (i = 0; i < length; i++) - ifc_iowrite8(*p++, hw->dev_cfg + offset + i); + vp_iowrite8(*p++, hw->dev_cfg + offset + i); } static void ifcvf_set_features(struct ifcvf_hw *hw, u64 features) { struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg; - ifc_iowrite32(0, &cfg->guest_feature_select); - ifc_iowrite32((u32)features, &cfg->guest_feature); + vp_iowrite32(0, &cfg->guest_feature_select); + vp_iowrite32((u32)features, &cfg->guest_feature); - ifc_iowrite32(1, &cfg->guest_feature_select); - ifc_iowrite32(features >> 32, &cfg->guest_feature); + vp_iowrite32(1, &cfg->guest_feature_select); + vp_iowrite32(features >> 32, &cfg->guest_feature); } 
static int ifcvf_config_features(struct ifcvf_hw *hw) @@ -329,7 +293,7 @@ u16 ifcvf_get_vq_state(struct ifcvf_hw *hw, u16 qid) ifcvf_lm = (struct ifcvf_lm_cfg __iomem *)hw->lm_cfg; q_pair_id = qid / hw->nr_vring; avail_idx_addr = &ifcvf_lm->vring_lm_cfg[q_pair_id].idx_addr[qid % 2]; - last_avail_idx = ifc_ioread16(avail_idx_addr); + last_avail_idx = vp_ioread16(avail_idx_addr); return last_avail_idx; } @@ -344,7 +308,7 @@ int ifcvf_set_vq_state(struct ifcvf_hw *hw, u16 qid, u16 num) q_pair_id = qid / hw->nr_vring; avail_idx_addr = &ifcvf_lm->vring_lm_cfg[q_pair_id].idx_addr[qid % 2]; hw->vring[qid].last_avail_idx = num; - ifc_iowrite16(num, avail_idx_addr); + vp_iowrite16(num, avail_idx_addr); return 0; } @@ -357,9 +321,9 @@ static int ifcvf_hw_enable(struct ifcvf_hw *hw) ifcvf = vf_to_adapter(hw); cfg = hw->common_cfg; - ifc_iowrite16(IFCVF_MSI_CONFIG_OFF, &cfg->msix_config); + vp_iowrite16(IFCVF_MSI_CONFIG_OFF, &cfg->msix_config); - if (ifc_ioread16(&cfg->msix_config) == VIRTIO_MSI_NO_VECTOR) { + if (vp_ioread16(&cfg->msix_config) == VIRTIO_MSI_NO_VECTOR) { IFCVF_ERR(ifcvf->pdev, "No msix vector for device config\n"); return -EINVAL; } @@ -368,17 +332,17 @@ static int ifcvf_hw_enable(struct ifcvf_hw *hw) if (!hw->vring[i].ready) break; - ifc_iowrite16(i, &cfg->queue_select); - ifc_iowrite64_twopart(hw->vring[i].desc, &cfg->queue_desc_lo, + vp_iowrite16(i, &cfg->queue_select); + vp_iowrite64_twopart(hw->vring[i].desc, &cfg->queue_desc_lo, &cfg->queue_desc_hi); - ifc_iowrite64_twopart(hw->vring[i].avail, &cfg->queue_avail_lo, + vp_iowrite64_twopart(hw->vring[i].avail, &cfg->queue_avail_lo, &cfg->queue_avail_hi); - ifc_iowrite64_twopart(hw->vring[i].used, &cfg->queue_used_lo, + vp_iowrite64_twopart(hw->vring[i].used, &cfg->queue_used_lo, &cfg->queue_used_hi); - ifc_iowrite16(hw->vring[i].size, &cfg->queue_size); - ifc_iowrite16(i + IFCVF_MSI_QUEUE_OFF, &cfg->queue_msix_vector); + vp_iowrite16(hw->vring[i].size, &cfg->queue_size); + vp_iowrite16(i + IFCVF_MSI_QUEUE_OFF, &cfg->queue_msix_vector); - if (ifc_ioread16(&cfg->queue_msix_vector) == + if (vp_ioread16(&cfg->queue_msix_vector) == VIRTIO_MSI_NO_VECTOR) { IFCVF_ERR(ifcvf->pdev, "No msix vector for queue %u\n", i); @@ -386,7 +350,7 @@ static int ifcvf_hw_enable(struct ifcvf_hw *hw) } ifcvf_set_vq_state(hw, i, hw->vring[i].last_avail_idx); - ifc_iowrite16(1, &cfg->queue_enable); + vp_iowrite16(1, &cfg->queue_enable); } return 0; @@ -398,14 +362,14 @@ static void ifcvf_hw_disable(struct ifcvf_hw *hw) u32 i; cfg = hw->common_cfg; - ifc_iowrite16(VIRTIO_MSI_NO_VECTOR, &cfg->msix_config); + vp_iowrite16(VIRTIO_MSI_NO_VECTOR, &cfg->msix_config); for (i = 0; i < hw->nr_vring; i++) { - ifc_iowrite16(i, &cfg->queue_select); - ifc_iowrite16(VIRTIO_MSI_NO_VECTOR, &cfg->queue_msix_vector); + vp_iowrite16(i, &cfg->queue_select); + vp_iowrite16(VIRTIO_MSI_NO_VECTOR, &cfg->queue_msix_vector); } - ifc_ioread16(&cfg->queue_msix_vector); + vp_ioread16(&cfg->queue_msix_vector); } int ifcvf_start_hw(struct ifcvf_hw *hw) @@ -433,5 +397,5 @@ void ifcvf_stop_hw(struct ifcvf_hw *hw) void ifcvf_notify_queue(struct ifcvf_hw *hw, u16 qid) { - ifc_iowrite16(qid, hw->vring[qid].notify_addr); + vp_iowrite16(qid, hw->vring[qid].notify_addr); } diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h b/drivers/vdpa/ifcvf/ifcvf_base.h index c486873f370a..25c591a3eae2 100644 --- a/drivers/vdpa/ifcvf/ifcvf_base.h +++ b/drivers/vdpa/ifcvf/ifcvf_base.h @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c 
b/drivers/vdpa/ifcvf/ifcvf_main.c index d1a6b5ab543c..43b7180256c6 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -348,7 +348,7 @@ static u32 ifcvf_vdpa_get_generation(struct vdpa_device *vdpa_dev)
 {
 	struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev);
 
-	return ioread8(&vf->common_cfg->config_generation);
+	return vp_ioread8(&vf->common_cfg->config_generation);
 }
 
 static u32 ifcvf_vdpa_get_device_id(struct vdpa_device *vdpa_dev)

From patchwork Tue Feb 22 07:01:06 2022
X-Patchwork-Submitter: "Zhu, Lingshan"
X-Patchwork-Id: 12754593
X-Patchwork-Delegate: kuba@kernel.org
From: Zhu Lingshan
To: mst@redhat.com, jasowang@redhat.com
Cc: netdev@vger.kernel.org, virtualization@lists.linux-foundation.org, Zhu Lingshan
Subject: [PATCH V5 2/5] vhost_vdpa: don't setup irq offloading when irq_num < 0
Date: Tue, 22 Feb 2022 15:01:06 +0800
Message-Id: <20220222070109.931260-3-lingshan.zhu@intel.com>
In-Reply-To: <20220222070109.931260-1-lingshan.zhu@intel.com>
References: <20220222070109.931260-1-lingshan.zhu@intel.com>

When the irq number is negative (e.g., -EINVAL), the virtqueue may be
disabled, or the virtqueues may be sharing a device irq. In such a case,
we should not set up irq offloading for that virtqueue.
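For context, the early return above matters when the parent vDPA driver's get_vq_irq() callback reports that a virtqueue has no dedicated IRQ. Below is a minimal sketch of such a callback; it mirrors what patch 4 of this series does for ifcvf (vqs_reused_irq and vring[].irq come from that patch), and the function name example_get_vq_irq is illustrative, not the literal driver code.

/* Sketch: a vDPA parent driver returns a negative value from its
 * get_vq_irq() op when the vq has no dedicated IRQ (e.g. all vqs share
 * one vector); vhost_vdpa then skips irq-bypass offloading for that vq.
 */
static int example_get_vq_irq(struct vdpa_device *vdpa_dev, u16 qid)
{
	struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev);

	if (vf->vqs_reused_irq >= 0)	/* the vqs share one IRQ */
		return -EINVAL;		/* no per-vq IRQ to offload */

	return vf->vring[qid].irq;	/* dedicated per-vq IRQ */
}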
Signed-off-by: Zhu Lingshan
---
 drivers/vhost/vdpa.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 851539807bc9..8f53a8478c28 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -97,8 +97,11 @@ static void vhost_vdpa_setup_vq_irq(struct vhost_vdpa *v, u16 qid)
 		return;
 
 	irq = ops->get_vq_irq(vdpa, qid);
+	if (irq < 0)
+		return;
+
 	irq_bypass_unregister_producer(&vq->call_ctx.producer);
-	if (!vq->call_ctx.ctx || irq < 0)
+	if (!vq->call_ctx.ctx)
 		return;
 
 	vq->call_ctx.producer.token = vq->call_ctx.ctx;

From patchwork Tue Feb 22 07:01:07 2022
X-Patchwork-Submitter: "Zhu, Lingshan"
X-Patchwork-Id: 12754594
X-Patchwork-Delegate: kuba@kernel.org
From: Zhu Lingshan
To: mst@redhat.com, jasowang@redhat.com
Cc: netdev@vger.kernel.org, virtualization@lists.linux-foundation.org, Zhu Lingshan
Subject: [PATCH V5 3/5] vDPA/ifcvf: implement device MSIX vector allocator
Date: Tue, 22 Feb 2022 15:01:07 +0800
Message-Id: <20220222070109.931260-4-lingshan.zhu@intel.com>
In-Reply-To: <20220222070109.931260-1-lingshan.zhu@intel.com>
References: <20220222070109.931260-1-lingshan.zhu@intel.com>

This commit implements an MSIX vector allocation helper for the
virtqueues and the config interrupt.
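The allocator added below builds on the standard PCI MSI-X API: request the ideal number of vectors but accept as few as one, then map each vector index to a Linux IRQ number. A condensed, annotated sketch of that pattern follows; the helper name example_setup_msix is an assumption for illustration, not the literal driver code.

static int example_setup_msix(struct pci_dev *pdev, u16 nr_vring)
{
	int max_intr = nr_vring + 1;	/* one vector per vq + one for config */
	int nvectors, v, irq;

	/* min_vecs == 1: accept fewer vectors than ideal instead of failing */
	nvectors = pci_alloc_irq_vectors(pdev, 1, max_intr,
					 PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	if (nvectors < 0)
		return nvectors;	/* negative errno, no MSI-X at all */

	for (v = 0; v < nvectors; v++) {
		irq = pci_irq_vector(pdev, v);	/* Linux IRQ for vector v */
		/* devm_request_irq() against 'irq' would go here */
	}

	return nvectors;	/* caller decides how to share the vectors */
}

On teardown, pci_free_irq_vectors(pdev) releases the vectors, which is what the existing ifcvf_free_irq_vectors() helper already does.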
Signed-off-by: Zhu Lingshan
---
 drivers/vdpa/ifcvf/ifcvf_main.c | 31 ++++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
index 43b7180256c6..964f7ac142ba 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -58,23 +58,44 @@ static void ifcvf_free_irq(struct ifcvf_adapter *adapter, int queues)
 	ifcvf_free_irq_vectors(pdev);
 }
 
-static int ifcvf_request_irq(struct ifcvf_adapter *adapter)
+/* ifcvf MSIX vectors allocator, this helper tries to allocate
+ * vectors for all virtqueues and the config interrupt.
+ * It returns the number of allocated vectors, negative
+ * return value when fails.
+ */
+static int ifcvf_alloc_vectors(struct ifcvf_adapter *adapter)
 {
 	struct pci_dev *pdev = adapter->pdev;
 	struct ifcvf_hw *vf = &adapter->vf;
-	int vector, i, ret, irq;
-	u16 max_intr;
+	int max_intr, ret;
 
 	/* all queues and config interrupt */
 	max_intr = vf->nr_vring + 1;
+	ret = pci_alloc_irq_vectors(pdev, 1, max_intr, PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
 
-	ret = pci_alloc_irq_vectors(pdev, max_intr,
-				    max_intr, PCI_IRQ_MSIX);
 	if (ret < 0) {
 		IFCVF_ERR(pdev, "Failed to alloc IRQ vectors\n");
 		return ret;
 	}
 
+	if (ret < max_intr)
+		IFCVF_INFO(pdev,
+			   "Requested %u vectors, however only %u allocated, lower performance\n",
+			   max_intr, ret);
+
+	return ret;
+}
+
+static int ifcvf_request_irq(struct ifcvf_adapter *adapter)
+{
+	struct pci_dev *pdev = adapter->pdev;
+	struct ifcvf_hw *vf = &adapter->vf;
+	int vector, nvectors, i, ret, irq;
+
+	nvectors = ifcvf_alloc_vectors(adapter);
+	if (nvectors <= 0)
+		return -EFAULT;
+
 	snprintf(vf->config_msix_name, 256, "ifcvf[%s]-config\n",
 		 pci_name(pdev));
 	vector = 0;

From patchwork Tue Feb 22 07:01:08 2022
X-Patchwork-Submitter: "Zhu, Lingshan"
X-Patchwork-Id: 12754596
X-Patchwork-Delegate: kuba@kernel.org
From: Zhu Lingshan
To: mst@redhat.com, jasowang@redhat.com
Cc: netdev@vger.kernel.org, virtualization@lists.linux-foundation.org, Zhu Lingshan
Subject: [PATCH V5 4/5] vDPA/ifcvf: implement shared IRQ feature
Date: Tue, 22 Feb 2022 15:01:08 +0800
Message-Id: <20220222070109.931260-5-lingshan.zhu@intel.com>
In-Reply-To: <20220222070109.931260-1-lingshan.zhu@intel.com>
References: <20220222070109.931260-1-lingshan.zhu@intel.com>

On some platforms/devices, there may not be enough MSI vectors allocated
for the virtqueues and config changes. In such a case, the interrupt
sources (virtqueues, config changes) must share an IRQ/vector, to avoid
initialization failures and keep the device functional.

This commit handles three cases:
(1) The number of allocated vectors equals the number of virtqueues + 1
    (config changes): every virtqueue and the config interrupt gets a
    separate vector/IRQ. This is the best and most likely case.
(2) The number of allocated vectors is less than the best case but
    greater than 1: all virtqueues share one vector/IRQ, and the config
    interrupt gets a separate vector/IRQ.
(3) Only one vector is allocated: the virtqueues and the config
    interrupt share a single vector/IRQ. This is the worst and least
    likely case.
Otherwise, initialization fails.

This commit introduces helper functions: ifcvf_set_vq_vector() and
ifcvf_set_config_vector() set the virtqueue vector and the config vector
in the device config space, so that the device can send interrupt DMA.
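The three cases above reduce to a small decision on the number of vectors actually allocated. A self-contained sketch of that decision is shown below; the enum mirrors the MSIX_VECTOR_* constants this patch adds to ifcvf_base.h, and the function name pick_vector_layout is an illustrative assumption rather than part of the patch.

enum msix_vector_layout {
	MSIX_VECTOR_PER_VQ_AND_CONFIG,		/* case (1): one vector each */
	MSIX_VECTOR_SHARED_VQ_AND_CONFIG,	/* case (2): vqs share, config separate */
	MSIX_VECTOR_DEV_SHARED,			/* case (3): everything on one vector */
};

/*
 * nvectors == nr_vring + 1 : vq i -> vector i, config -> vector nr_vring
 * 1 < nvectors <= nr_vring : all vqs -> vector 0, config -> vector 1
 * nvectors == 1            : vqs and config -> vector 0
 */
static enum msix_vector_layout pick_vector_layout(int nvectors, int nr_vring)
{
	if (nvectors == 1)
		return MSIX_VECTOR_DEV_SHARED;
	if (nvectors < nr_vring + 1)
		return MSIX_VECTOR_SHARED_VQ_AND_CONFIG;
	return MSIX_VECTOR_PER_VQ_AND_CONFIG;
}

A condensed sketch of the fully shared dispatch path (case (3)) follows the last patch in this series.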
Signed-off-by: Zhu Lingshan --- drivers/vdpa/ifcvf/ifcvf_base.c | 48 +++--- drivers/vdpa/ifcvf/ifcvf_base.h | 15 +- drivers/vdpa/ifcvf/ifcvf_main.c | 294 ++++++++++++++++++++++++++++---- 3 files changed, 300 insertions(+), 57 deletions(-) diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c b/drivers/vdpa/ifcvf/ifcvf_base.c index b9fdc5258611..ba4866f871dd 100644 --- a/drivers/vdpa/ifcvf/ifcvf_base.c +++ b/drivers/vdpa/ifcvf/ifcvf_base.c @@ -15,6 +15,26 @@ struct ifcvf_adapter *vf_to_adapter(struct ifcvf_hw *hw) return container_of(hw, struct ifcvf_adapter, vf); } +u16 ifcvf_set_vq_vector(struct ifcvf_hw *hw, u16 qid, int vector) +{ + struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg; + + vp_iowrite16(qid, &cfg->queue_select); + vp_iowrite16(vector, &cfg->queue_msix_vector); + + return vp_ioread16(&cfg->queue_msix_vector); +} + +u16 ifcvf_set_config_vector(struct ifcvf_hw *hw, int vector) +{ + struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg; + + cfg = hw->common_cfg; + vp_iowrite16(vector, &cfg->msix_config); + + return vp_ioread16(&cfg->msix_config); +} + static void __iomem *get_cap_addr(struct ifcvf_hw *hw, struct virtio_pci_cap *cap) { @@ -131,6 +151,7 @@ int ifcvf_init_hw(struct ifcvf_hw *hw, struct pci_dev *pdev) notify_off * hw->notify_off_multiplier; hw->vring[i].notify_pa = hw->notify_base_pa + notify_off * hw->notify_off_multiplier; + hw->vring[i].irq = -EINVAL; } hw->lm_cfg = hw->base[IFCVF_LM_BAR]; @@ -140,6 +161,9 @@ int ifcvf_init_hw(struct ifcvf_hw *hw, struct pci_dev *pdev) hw->common_cfg, hw->notify_base, hw->isr, hw->dev_cfg, hw->notify_off_multiplier); + hw->vqs_reused_irq = -EINVAL; + hw->config_irq = -EINVAL; + return 0; } @@ -321,13 +345,6 @@ static int ifcvf_hw_enable(struct ifcvf_hw *hw) ifcvf = vf_to_adapter(hw); cfg = hw->common_cfg; - vp_iowrite16(IFCVF_MSI_CONFIG_OFF, &cfg->msix_config); - - if (vp_ioread16(&cfg->msix_config) == VIRTIO_MSI_NO_VECTOR) { - IFCVF_ERR(ifcvf->pdev, "No msix vector for device config\n"); - return -EINVAL; - } - for (i = 0; i < hw->nr_vring; i++) { if (!hw->vring[i].ready) break; @@ -340,15 +357,6 @@ static int ifcvf_hw_enable(struct ifcvf_hw *hw) vp_iowrite64_twopart(hw->vring[i].used, &cfg->queue_used_lo, &cfg->queue_used_hi); vp_iowrite16(hw->vring[i].size, &cfg->queue_size); - vp_iowrite16(i + IFCVF_MSI_QUEUE_OFF, &cfg->queue_msix_vector); - - if (vp_ioread16(&cfg->queue_msix_vector) == - VIRTIO_MSI_NO_VECTOR) { - IFCVF_ERR(ifcvf->pdev, - "No msix vector for queue %u\n", i); - return -EINVAL; - } - ifcvf_set_vq_state(hw, i, hw->vring[i].last_avail_idx); vp_iowrite16(1, &cfg->queue_enable); } @@ -362,14 +370,10 @@ static void ifcvf_hw_disable(struct ifcvf_hw *hw) u32 i; cfg = hw->common_cfg; - vp_iowrite16(VIRTIO_MSI_NO_VECTOR, &cfg->msix_config); - + ifcvf_set_config_vector(hw, VIRTIO_MSI_NO_VECTOR); for (i = 0; i < hw->nr_vring; i++) { - vp_iowrite16(i, &cfg->queue_select); - vp_iowrite16(VIRTIO_MSI_NO_VECTOR, &cfg->queue_msix_vector); + ifcvf_set_vq_vector(hw, i, VIRTIO_MSI_NO_VECTOR) } - - vp_ioread16(&cfg->queue_msix_vector); } int ifcvf_start_hw(struct ifcvf_hw *hw) diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h b/drivers/vdpa/ifcvf/ifcvf_base.h index 25c591a3eae2..dcd31accfce5 100644 --- a/drivers/vdpa/ifcvf/ifcvf_base.h +++ b/drivers/vdpa/ifcvf/ifcvf_base.h @@ -28,8 +28,6 @@ #define IFCVF_QUEUE_ALIGNMENT PAGE_SIZE #define IFCVF_QUEUE_MAX 32768 -#define IFCVF_MSI_CONFIG_OFF 0 -#define IFCVF_MSI_QUEUE_OFF 1 #define IFCVF_PCI_MAX_RESOURCE 6 #define IFCVF_LM_CFG_SIZE 0x40 @@ -43,6 +41,13 @@ #define 
ifcvf_private_to_vf(adapter) \ (&((struct ifcvf_adapter *)adapter)->vf) +/* all vqs and config interrupt has its own vector */ +#define MSIX_VECTOR_PER_VQ_AND_CONFIG 1 +/* all vqs share a vector, and config interrupt has a separate vector */ +#define MSIX_VECTOR_SHARED_VQ_AND_CONFIG 2 +/* all vqs and config interrupt share a vector */ +#define MSIX_VECTOR_DEV_SHARED 3 + struct vring_info { u64 desc; u64 avail; @@ -77,9 +82,11 @@ struct ifcvf_hw { void __iomem * const *base; char config_msix_name[256]; struct vdpa_callback config_cb; - unsigned int config_irq; + int config_irq; + int vqs_reused_irq; /* virtio-net or virtio-blk device config size */ u32 config_size; + u8 msix_vector_status; }; struct ifcvf_adapter { @@ -124,4 +131,6 @@ int ifcvf_set_vq_state(struct ifcvf_hw *hw, u16 qid, u16 num); struct ifcvf_adapter *vf_to_adapter(struct ifcvf_hw *hw); int ifcvf_probed_virtio_net(struct ifcvf_hw *hw); u32 ifcvf_get_config_size(struct ifcvf_hw *hw); +u16 ifcvf_set_vq_vector(struct ifcvf_hw *hw, u16 qid, int vector); +u16 ifcvf_set_config_vector(struct ifcvf_hw *hw, int vector); #endif /* _IFCVF_H_ */ diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c index 964f7ac142ba..3b48e717e89f 100644 --- a/drivers/vdpa/ifcvf/ifcvf_main.c +++ b/drivers/vdpa/ifcvf/ifcvf_main.c @@ -27,7 +27,7 @@ static irqreturn_t ifcvf_config_changed(int irq, void *arg) return IRQ_HANDLED; } -static irqreturn_t ifcvf_intr_handler(int irq, void *arg) +static irqreturn_t ifcvf_vq_intr_handler(int irq, void *arg) { struct vring_info *vring = arg; @@ -37,24 +37,98 @@ static irqreturn_t ifcvf_intr_handler(int irq, void *arg) return IRQ_HANDLED; } +static irqreturn_t ifcvf_vqs_reused_intr_handler(int irq, void *arg) +{ + struct ifcvf_hw *vf = arg; + struct vring_info *vring; + int i; + + for (i = 0; i < vf->nr_vring; i++) { + vring = &vf->vring[i]; + if (vring->cb.callback) + vf->vring->cb.callback(vring->cb.private); + } + + return IRQ_HANDLED; +} + +static irqreturn_t ifcvf_dev_intr_handler(int irq, void *arg) +{ + struct ifcvf_hw *vf = arg; + u8 isr; + + isr = vp_ioread8(vf->isr); + if (isr & VIRTIO_PCI_ISR_CONFIG) + ifcvf_config_changed(irq, arg); + + return ifcvf_vqs_reused_intr_handler(irq, arg); +} + static void ifcvf_free_irq_vectors(void *data) { pci_free_irq_vectors(data); } -static void ifcvf_free_irq(struct ifcvf_adapter *adapter, int queues) +static void ifcvf_free_per_vq_irq(struct ifcvf_adapter *adapter) { struct pci_dev *pdev = adapter->pdev; struct ifcvf_hw *vf = &adapter->vf; int i; + for (i = 0; i < vf->nr_vring; i++) { + if (vf->vring[i].irq != -EINVAL) { + devm_free_irq(&pdev->dev, vf->vring[i].irq, &vf->vring[i]); + vf->vring[i].irq = -EINVAL; + } + } +} - for (i = 0; i < queues; i++) { - devm_free_irq(&pdev->dev, vf->vring[i].irq, &vf->vring[i]); - vf->vring[i].irq = -EINVAL; +static void ifcvf_free_vqs_reused_irq(struct ifcvf_adapter *adapter) +{ + struct pci_dev *pdev = adapter->pdev; + struct ifcvf_hw *vf = &adapter->vf; + + if (vf->vqs_reused_irq != -EINVAL) { + devm_free_irq(&pdev->dev, vf->vqs_reused_irq, vf); + vf->vqs_reused_irq = -EINVAL; } - devm_free_irq(&pdev->dev, vf->config_irq, vf); +} + +static void ifcvf_free_vq_irq(struct ifcvf_adapter *adapter) +{ + struct ifcvf_hw *vf = &adapter->vf; + + if (vf->msix_vector_status == MSIX_VECTOR_PER_VQ_AND_CONFIG) + ifcvf_free_per_vq_irq(adapter); + else + ifcvf_free_vqs_reused_irq(adapter); +} + +static void ifcvf_free_config_irq(struct ifcvf_adapter *adapter) +{ + struct pci_dev *pdev = adapter->pdev; + struct ifcvf_hw 
*vf = &adapter->vf; + + if (vf->config_irq == -EINVAL) + return; + + /* If the irq is shared by all vqs and the config interrupt, + * it is already freed in ifcvf_free_vq_irq, so here only + * need to free config irq when msix_vector_status != MSIX_VECTOR_DEV_SHARED + */ + if (vf->msix_vector_status != MSIX_VECTOR_DEV_SHARED) { + devm_free_irq(&pdev->dev, vf->config_irq, vf); + vf->config_irq = -EINVAL; + } +} + +static void ifcvf_free_irq(struct ifcvf_adapter *adapter) +{ + struct pci_dev *pdev = adapter->pdev; + + ifcvf_free_vq_irq(adapter); + ifcvf_free_config_irq(adapter); ifcvf_free_irq_vectors(pdev); } @@ -86,48 +160,201 @@ static int ifcvf_alloc_vectors(struct ifcvf_adapter *adapter) return ret; } -static int ifcvf_request_irq(struct ifcvf_adapter *adapter) +static int ifcvf_request_per_vq_irq(struct ifcvf_adapter *adapter) { struct pci_dev *pdev = adapter->pdev; struct ifcvf_hw *vf = &adapter->vf; - int vector, nvectors, i, ret, irq; + int i, vector, ret, irq; - nvectors = ifcvf_alloc_vectors(adapter); - if (nvectors <= 0) - return -EFAULT; + vf->vqs_reused_irq = -EINVAL; + for (i = 0; i < vf->nr_vring; i++) { + snprintf(vf->vring[i].msix_name, 256, "ifcvf[%s]-%d\n", pci_name(pdev), i); + vector = i; + irq = pci_irq_vector(pdev, vector); + ret = devm_request_irq(&pdev->dev, irq, + ifcvf_vq_intr_handler, 0, + vf->vring[i].msix_name, + &vf->vring[i]); + if (ret) { + IFCVF_ERR(pdev, "Failed to request irq for vq %d\n", i); + goto err; + } + + vf->vring[i].irq = irq; + ret = ifcvf_set_vq_vector(vf, i, vector); + if (ret == VIRTIO_MSI_NO_VECTOR) { + IFCVF_ERR(pdev, "No msix vector for vq %u\n", i); + goto err; + } + } + + return 0; +err: + ifcvf_free_irq(adapter); + + return -EFAULT; +} + +static int ifcvf_request_vqs_reused_irq(struct ifcvf_adapter *adapter) +{ + struct pci_dev *pdev = adapter->pdev; + struct ifcvf_hw *vf = &adapter->vf; + int i, vector, ret, irq; + + vector = 0; + snprintf(vf->vring[0].msix_name, 256, "ifcvf[%s]-vqs-reused-irq\n", pci_name(pdev)); + irq = pci_irq_vector(pdev, vector); + ret = devm_request_irq(&pdev->dev, irq, + ifcvf_vqs_reused_intr_handler, 0, + vf->vring[0].msix_name, vf); + if (ret) { + IFCVF_ERR(pdev, "Failed to request reused irq for the device\n"); + goto err; + } + + vf->vqs_reused_irq = irq; + for (i = 0; i < vf->nr_vring; i++) { + vf->vring[i].irq = -EINVAL; + ret = ifcvf_set_vq_vector(vf, i, vector); + if (ret == VIRTIO_MSI_NO_VECTOR) { + IFCVF_ERR(pdev, "No msix vector for vq %u\n", i); + goto err; + } + } + + return 0; +err: + ifcvf_free_irq(adapter); + + return -EFAULT; +} + +static int ifcvf_request_dev_irq(struct ifcvf_adapter *adapter) +{ + struct pci_dev *pdev = adapter->pdev; + struct ifcvf_hw *vf = &adapter->vf; + int i, vector, ret, irq; + + vector = 0; + snprintf(vf->vring[0].msix_name, 256, "ifcvf[%s]-dev-irq\n", pci_name(pdev)); + irq = pci_irq_vector(pdev, vector); + ret = devm_request_irq(&pdev->dev, irq, + ifcvf_dev_intr_handler, 0, + vf->vring[0].msix_name, vf); + if (ret) { + IFCVF_ERR(pdev, "Failed to request irq for the device\n"); + goto err; + } + + vf->vqs_reused_irq = irq; + for (i = 0; i < vf->nr_vring; i++) { + vf->vring[i].irq = -EINVAL; + ret = ifcvf_set_vq_vector(vf, i, vector); + if (ret == VIRTIO_MSI_NO_VECTOR) { + IFCVF_ERR(pdev, "No msix vector for vq %u\n", i); + goto err; + } + } + + vf->config_irq = irq; + ret = ifcvf_set_config_vector(vf, vector); + if (ret == VIRTIO_MSI_NO_VECTOR) { + IFCVF_ERR(pdev, "No msix vector for device config\n"); + goto err; + } + + return 0; +err: + ifcvf_free_irq(adapter); + + 
return -EFAULT; + +} + +static int ifcvf_request_vq_irq(struct ifcvf_adapter *adapter) +{ + struct ifcvf_hw *vf = &adapter->vf; + int ret; + + if (vf->msix_vector_status == MSIX_VECTOR_PER_VQ_AND_CONFIG) + ret = ifcvf_request_per_vq_irq(adapter); + else + ret = ifcvf_request_vqs_reused_irq(adapter); + + return ret; +} + +static int ifcvf_request_config_irq(struct ifcvf_adapter *adapter) +{ + struct pci_dev *pdev = adapter->pdev; + struct ifcvf_hw *vf = &adapter->vf; + int config_vector, ret; + + if (vf->msix_vector_status == MSIX_VECTOR_DEV_SHARED) + return 0; + + if (vf->msix_vector_status == MSIX_VECTOR_PER_VQ_AND_CONFIG) + /* vector 0 ~ vf->nr_vring for vqs, num vf->nr_vring vector for config interrupt */ + config_vector = vf->nr_vring; + + if (vf->msix_vector_status == MSIX_VECTOR_SHARED_VQ_AND_CONFIG) + /* vector 0 for vqs and 1 for config interrupt */ + config_vector = 1; snprintf(vf->config_msix_name, 256, "ifcvf[%s]-config\n", pci_name(pdev)); - vector = 0; - vf->config_irq = pci_irq_vector(pdev, vector); + vf->config_irq = pci_irq_vector(pdev, config_vector); ret = devm_request_irq(&pdev->dev, vf->config_irq, ifcvf_config_changed, 0, vf->config_msix_name, vf); if (ret) { IFCVF_ERR(pdev, "Failed to request config irq\n"); - return ret; + goto err; } - for (i = 0; i < vf->nr_vring; i++) { - snprintf(vf->vring[i].msix_name, 256, "ifcvf[%s]-%d\n", - pci_name(pdev), i); - vector = i + IFCVF_MSI_QUEUE_OFF; - irq = pci_irq_vector(pdev, vector); - ret = devm_request_irq(&pdev->dev, irq, - ifcvf_intr_handler, 0, - vf->vring[i].msix_name, - &vf->vring[i]); - if (ret) { - IFCVF_ERR(pdev, - "Failed to request irq for vq %d\n", i); - ifcvf_free_irq(adapter, i); + ret = ifcvf_set_config_vector(vf, config_vector); + if (ret == VIRTIO_MSI_NO_VECTOR) { + IFCVF_ERR(pdev, "No msix vector for device config\n"); + goto err; + } - return ret; - } + return 0; +err: + ifcvf_free_irq(adapter); - vf->vring[i].irq = irq; + return -EFAULT; +} + +static int ifcvf_request_irq(struct ifcvf_adapter *adapter) +{ + struct ifcvf_hw *vf = &adapter->vf; + int nvectors, ret, max_intr; + + nvectors = ifcvf_alloc_vectors(adapter); + if (nvectors <= 0) + return -EFAULT; + + vf->msix_vector_status = MSIX_VECTOR_PER_VQ_AND_CONFIG; + max_intr = vf->nr_vring + 1; + if (nvectors < max_intr) + vf->msix_vector_status = MSIX_VECTOR_SHARED_VQ_AND_CONFIG; + + if (nvectors == 1) { + vf->msix_vector_status = MSIX_VECTOR_DEV_SHARED; + ret = ifcvf_request_dev_irq(adapter); + + return ret; } + ret = ifcvf_request_vq_irq(adapter); + if (ret) + return ret; + + ret = ifcvf_request_config_irq(adapter); + + if (ret) + return ret; + return 0; } @@ -284,7 +511,7 @@ static int ifcvf_vdpa_reset(struct vdpa_device *vdpa_dev) if (status_old & VIRTIO_CONFIG_S_DRIVER_OK) { ifcvf_stop_datapath(adapter); - ifcvf_free_irq(adapter, vf->nr_vring); + ifcvf_free_irq(adapter); } ifcvf_reset_vring(adapter); @@ -431,7 +658,10 @@ static int ifcvf_vdpa_get_vq_irq(struct vdpa_device *vdpa_dev, { struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev); - return vf->vring[qid].irq; + if (vf->vqs_reused_irq < 0) + return vf->vring[qid].irq; + else + return -EINVAL; } static struct vdpa_notification_area ifcvf_get_vq_notification(struct vdpa_device *vdpa_dev, From patchwork Tue Feb 22 07:01:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhu, Lingshan" X-Patchwork-Id: 12754595 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
From: Zhu Lingshan
To: mst@redhat.com, jasowang@redhat.com
Cc: netdev@vger.kernel.org, virtualization@lists.linux-foundation.org, Zhu Lingshan
Subject: [PATCH V5 5/5] vDPA/ifcvf: cacheline alignment for ifcvf_hw
Date: Tue, 22 Feb 2022 15:01:09 +0800
Message-Id: <20220222070109.931260-6-lingshan.zhu@intel.com>
In-Reply-To: <20220222070109.931260-1-lingshan.zhu@intel.com>
References: <20220222070109.931260-1-lingshan.zhu@intel.com>

This commit introduces a new cacheline aligned layout for ifcvf_hw.
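The diff below only reorders the members of struct ifcvf_hw; the intent of such a change is to keep frequently accessed fields in the same (or fewer) cachelines and to shrink padding holes, which can be inspected with a tool such as pahole. A generic, hypothetical sketch of the technique (all field names here are made up; the real ifcvf_hw layout is the diff that follows):

/* Hypothetical example of cacheline-conscious struct layout: hot,
 * frequently-read fields are grouped near the start so they span as
 * few cachelines as possible; cold, setup-only fields go last.
 */
struct example_hw {
	/* hot: read on every doorbell kick / interrupt */
	void __iomem *doorbell_base;
	u32 doorbell_stride;
	u16 ring_count;

	/* cold: used only at probe/teardown time */
	char irq_name[256];
	int config_irq;
};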
Signed-off-by: Zhu Lingshan
---
 drivers/vdpa/ifcvf/ifcvf_base.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h b/drivers/vdpa/ifcvf/ifcvf_base.h
index dcd31accfce5..115b61f4924b 100644
--- a/drivers/vdpa/ifcvf/ifcvf_base.h
+++ b/drivers/vdpa/ifcvf/ifcvf_base.h
@@ -66,16 +66,18 @@ struct ifcvf_hw {
 	u8 __iomem *isr;
 	/* Live migration */
 	u8 __iomem *lm_cfg;
-	u16 nr_vring;
 	/* Notification bar number */
 	u8 notify_bar;
+	u8 msix_vector_status;
+	/* virtio-net or virtio-blk device config size */
+	u32 config_size;
 	/* Notificaiton bar address */
 	void __iomem *notify_base;
 	phys_addr_t notify_base_pa;
 	u32 notify_off_multiplier;
+	u32 dev_type;
 	u64 req_features;
 	u64 hw_features;
-	u32 dev_type;
 	struct virtio_pci_common_cfg __iomem *common_cfg;
 	void __iomem *dev_cfg;
 	struct vring_info vring[IFCVF_MAX_QUEUES];
@@ -84,9 +86,7 @@ struct ifcvf_hw {
 	struct vdpa_callback config_cb;
 	int config_irq;
 	int vqs_reused_irq;
-	/* virtio-net or virtio-blk device config size */
-	u32 config_size;
-	u8 msix_vector_status;
+	u16 nr_vring;
 };
 
 struct ifcvf_adapter {
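As a closing note on the shared-IRQ path added in patch 4 above: when MSIX_VECTOR_DEV_SHARED is in effect, a single handler must service both config changes and every virtqueue. The sketch below condenses ifcvf_dev_intr_handler()/ifcvf_vqs_reused_intr_handler() from that patch; the handler name example_dev_intr_handler is illustrative, not the literal driver code.

static irqreturn_t example_dev_intr_handler(int irq, void *arg)
{
	struct ifcvf_hw *vf = arg;
	u16 i;
	u8 isr;

	/* the ISR status byte says whether a config change is pending */
	isr = vp_ioread8(vf->isr);
	if ((isr & VIRTIO_PCI_ISR_CONFIG) && vf->config_cb.callback)
		vf->config_cb.callback(vf->config_cb.private);

	/* with one shared vector, invoke each vq's own callback */
	for (i = 0; i < vf->nr_vring; i++)
		if (vf->vring[i].cb.callback)
			vf->vring[i].cb.callback(vf->vring[i].cb.private);

	return IRQ_HANDLED;
}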