
[for-next] IB/usnic: Expose flows via debugfs

Message ID 6F4F8D2C-F10F-4697-ABD5-552D2E696361@cisco.com (mailing list archive)
State Rejected

Commit Message

Upinder Malhi (umalhi) Dec. 20, 2013, 9:37 p.m. UTC
This patch depends on http://www.spinics.net/lists/linux-rdma/msg18193.html.

Signed-off-by: Upinder Malhi <umalhi@cisco.com>
---
drivers/infiniband/hw/usnic/usnic_debugfs.c   | 91 ++++++++++++++++++++++++++-
drivers/infiniband/hw/usnic/usnic_debugfs.h   |  4 ++
drivers/infiniband/hw/usnic/usnic_ib.h        |  7 +++
drivers/infiniband/hw/usnic/usnic_ib_qp_grp.c |  8 ++-
drivers/infiniband/hw/usnic/usnic_ib_qp_grp.h |  5 ++
drivers/infiniband/hw/usnic/usnic_ib_sysfs.c  |  6 --
6 files changed, 111 insertions(+), 10 deletions(-)

--
1.8.1


Comments

Or Gerlitz Dec. 22, 2013, 10:23 a.m. UTC | #1
On 20/12/2013 23:37, Upinder Malhi (umalhi) wrote:
> This patch depends on http://www.spinics.net/lists/linux-rdma/msg18193.html

Why use proprietary debugfs code to display flows? You can (and should)
use the rdma subsystem netlink infrastructure for that; see these
two commits:


753f618 RDMA/cma: Add support for netlink statistics export
b2cbae2 RDMA: Add netlink infrastructure
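
For context, the pattern those two commits establish looks roughly like the
sketch below: a client registers a dump callback with the shared RDMA netlink
code and emits one message per object.  This is only a hypothetical adaptation
to usnic -- RDMA_NL_USNIC, RDMA_NL_USNIC_FLOW_STATS, struct usnic_flow_stats
and usnic_nl_dump_flows are made-up names; only the ibnl_* helpers come from
b2cbae2, and their exact signatures may differ between kernel versions.

#include <linux/kernel.h>
#include <linux/netlink.h>
#include <rdma/rdma_netlink.h>

/* Hypothetical payload a usnic dump op could emit for each flow. */
struct usnic_flow_stats {
	__u32	qp_grp_id;
	__u32	flow_id;
	__u64	rx_bytes;
};

static int usnic_nl_dump_flows(struct sk_buff *skb,
			       struct netlink_callback *cb)
{
	struct usnic_flow_stats *stats;
	struct nlmsghdr *nlh;

	/* A real implementation would walk the driver's flow list and emit
	 * one message per flow; a single placeholder message is shown here. */
	stats = ibnl_put_msg(skb, &nlh, cb->nlh->nlmsg_seq, sizeof(*stats),
			     RDMA_NL_USNIC, RDMA_NL_USNIC_FLOW_STATS);
	if (!stats)
		return -ENOMEM;

	stats->qp_grp_id = 0;	/* would come from struct usnic_ib_qp_grp */
	stats->flow_id = 0;	/* would come from the forwarding flow */
	stats->rx_bytes = 0;

	return skb->len;
}

static const struct ibnl_client_cbs usnic_nl_cb_table[] = {
	[RDMA_NL_USNIC_FLOW_STATS] = { .dump = usnic_nl_dump_flows },
};

/* Registration, typically called from the driver's module init path. */
static int usnic_nl_register(void)
{
	return ibnl_add_client(RDMA_NL_USNIC, ARRAY_SIZE(usnic_nl_cb_table),
			       usnic_nl_cb_table);
}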


Upinder Malhi (umalhi) Jan. 7, 2014, 10:29 p.m. UTC | #2
Or,
	The flows contain Cisco VIC-specific stuff - e.g. the hardware flow id - and they will contain more Cisco-specific things.  Hence, they are exported via debugfs.

Upinder
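
For anyone who wants to poke at what this patch exposes: the per-flow files
land under the driver's debugfs directory, one file per flow id.  A minimal
userspace reader is sketched below; the "usnic_verbs" directory name is an
assumption based on the driver's DRV_NAME and the usual debugfs mount point,
so adjust the path if it differs on your system.

#include <dirent.h>
#include <stdio.h>

#define FLOWS_DIR "/sys/kernel/debug/usnic_verbs/flows"

int main(void)
{
	char path[512], line[512];
	struct dirent *de;
	DIR *dir = opendir(FLOWS_DIR);

	if (!dir) {
		perror(FLOWS_DIR);
		return 1;
	}

	while ((de = readdir(dir)) != NULL) {
		FILE *fp;

		if (de->d_name[0] == '.')	/* skip "." and ".." */
			continue;

		snprintf(path, sizeof(path), FLOWS_DIR "/%s", de->d_name);
		fp = fopen(path, "r");
		if (!fp)
			continue;

		/* Each file holds a single newline-terminated line, e.g.
		 * "QP Grp ID: 3 Transport: ... Port_Num:..." */
		if (fgets(line, sizeof(line), fp))
			printf("flow %s: %s", de->d_name, line);
		fclose(fp);
	}

	closedir(dir);
	return 0;
}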

On Dec 22, 2013, at 2:23 AM, Or Gerlitz <ogerlitz@mellanox.com> wrote:

> On 20/12/2013 23:37, Upinder Malhi (umalhi) wrote:
>> This patch depends on http://www.spinics.net/lists/linux-rdma/msg18193.html
> 
> Why use proprietary debugfs code  to display flows? you can (and should) use the rdma subsystem netlink infrastructure for that, see these
> two commits
> 
> 
> 753f618 RDMA/cma: Add support for netlink statistics export
> b2cbae2 RDMA: Add netlink infrastructure
> 
> 

Or Gerlitz Jan. 8, 2014, 9:13 a.m. UTC | #3
On 08/01/2014 00:29, Upinder Malhi (umalhi) wrote:
> Or, The flows contain Cisco VIC specific stuff - Ex. the hardware flow id; and they will contain more cisco specific things.  Hence, they are exported via debugfs.

You should be able to enhance the rdma netlink infrastructure to allow
for exporting HW-dependent attributes to user space -- did you look into
that?

Also, you should make sure to expose the non-HW-specific attributes of
the sessions through the standard infrastructure.

Or.
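
If HW-dependent attributes were to ride along as suggested above, they could
be appended as optional netlink attributes on the same dump message, so that
generic tools can skip what they do not understand.  Again a hypothetical
sketch only -- RDMA_NL_USNIC_ATTR_HW_FLOW_ID is a made-up attribute id, and
ibnl_put_attr() is the helper added by b2cbae2.

#include <linux/netlink.h>
#include <rdma/rdma_netlink.h>

static int usnic_nl_put_hw_flow_id(struct sk_buff *skb, struct nlmsghdr *nlh,
				   u32 hw_flow_id)
{
	/* Readers that ignore unknown attribute types simply skip this. */
	if (ibnl_put_attr(skb, nlh, sizeof(hw_flow_id), &hw_flow_id,
			  RDMA_NL_USNIC_ATTR_HW_FLOW_ID))
		return -EMSGSIZE;
	return 0;
}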




>
> Upinder
>
> On Dec 22, 2013, at 2:23 AM, Or Gerlitz <ogerlitz@mellanox.com> wrote:
>
>> On 20/12/2013 23:37, Upinder Malhi (umalhi) wrote:
>>> This patch depends on http://www.spinics.net/lists/linux-rdma/msg18193.html
>> Why use proprietary debugfs code  to display flows? you can (and should) use the rdma subsystem netlink infrastructure for that, see these
>> two commits
>>
>>
>> 753f618 RDMA/cma: Add support for netlink statistics export
>> b2cbae2 RDMA: Add netlink infrastructure
>>
>>

Upinder Malhi (umalhi) Jan. 8, 2014, 6:57 p.m. UTC | #4
Or,
	Yeah, I did think about extending the existing infrastructure to export HW-specific stats and exposing some stats via the standard infrastructure.  Besides the below, there are a few other drawbacks to exposing statistics via netlink:
1) Having a two-part solution, user space and kernel, will make changing the code more difficult.  Any time another attribute is exposed, code needs to be added in the kernel to handle backwards compatibility with userspace (as I said, we are going to add more stuff incrementally).
2) The Cisco VIC series cards, that is, our NIC, cannot do flow stats well.  Specifically, they only report the Rx byte count for a flow and don't report any statistics on the Tx side.  Hence, exposing these to a user via a standard interface is going to be confusing and misleading.

Hence, at least for Cisco VIC, we want to keep these flow stats in debugfs, where they can be easily extended and where extra effort is required to get to them.

Upinder

On Jan 8, 2014, at 1:13 AM, Or Gerlitz <ogerlitz@mellanox.com> wrote:

> On 08/01/2014 00:29, Upinder Malhi (umalhi) wrote:
>> Or, The flows contain Cisco VIC specific stuff - Ex. the hardware flow id; and they will contain more cisco specific things.  Hence, they are exported via debugfs.
> 
> You should be able to enhance the rdma netlink infrastructure to allow for exporting HW dependent attributes to user space -- did you look on that?
> 
> Also, you should make sure to expose the non HW specific attributes of the sessions through the standard infrastructure.
> 
> Or.
> 
> 
> 
> 
>> 
>> Upinder
>> 
>> On Dec 22, 2013, at 2:23 AM, Or Gerlitz <ogerlitz@mellanox.com> wrote:
>> 
>>> On 20/12/2013 23:37, Upinder Malhi (umalhi) wrote:
>>>> This patch depends on http://www.spinics.net/lists/linux-rdma/msg18193.html
>>> Why use proprietary debugfs code  to display flows? you can (and should) use the rdma subsystem netlink infrastructure for that, see these
>>> two commits
>>> 
>>> 
>>> 753f618 RDMA/cma: Add support for netlink statistics export
>>> b2cbae2 RDMA: Add netlink infrastructure
>>> 
>>> 
> 

Or Gerlitz Jan. 12, 2014, 9:32 a.m. UTC | #5
On 08/01/2014 20:57, Upinder Malhi (umalhi) wrote:
> Or,
> 	Yeah, I did think about extending the existing infrastructure to export HW specific stats and exposing some stats via standard infrastructure.  Besides the below, there are few other drawbacks with exposing statistics via netlink:
> 1) Having a two part solution, users space and kernel, will make changing the code more difficult.  Anytime another attributed is exposed, code in the kernel needs to be added to handle backwards compatibility with userspace (as I said we are going to add more stuff incrementally).

There are thousands if not millions of LOCs across the kernel and user space
tools which use netlink. Indeed, when it takes two to tango you
sometimes change one side, sometimes the other, and sometimes both.  A
claim that "it's easier to maintain things when all the code resides in
the kernel" can't really be taken seriously.  Netlink is
used all over the place, so is everyone wrong?



> 2)  The Cisco VIC series cards, that is our NIC, cannot do flow stats well.  Specially, it only reports Rx byte count for a flow and doesn't report any statistics on the Tx side.  Hence, exposing these via  a standard interface to a user is going to be confusing and misleading.

First, use the standard/existing interface to report the open sessions, and
later we'll take it from there re the byte counts.


>
> Hence, at least for Cisco VIC, we want to keep these flow stats in debugfs where they can be easily extended and extra effort is required to get to them.
>
> Upinder
>
> On Jan 8, 2014, at 1:13 AM, Or Gerlitz <ogerlitz@mellanox.com> wrote:
>
>> On 08/01/2014 00:29, Upinder Malhi (umalhi) wrote:
>>> Or, The flows contain Cisco VIC specific stuff - Ex. the hardware flow id; and they will contain more cisco specific things.  Hence, they are exported via debugfs.
>> You should be able to enhance the rdma netlink infrastructure to allow for exporting HW dependent attributes to user space -- did you look on that?
>>
>> Also, you should make sure to expose the non HW specific attributes of the sessions through the standard infrastructure.
>>
>> Or.
>>
>>
>>
>>
>>> Upinder
>>>
>>> On Dec 22, 2013, at 2:23 AM, Or Gerlitz <ogerlitz@mellanox.com> wrote:
>>>
>>>> On 20/12/2013 23:37, Upinder Malhi (umalhi) wrote:
>>>>> This patch depends on http://www.spinics.net/lists/linux-rdma/msg18193.html
>>>> Why use proprietary debugfs code  to display flows? you can (and should) use the rdma subsystem netlink infrastructure for that, see these
>>>> two commits
>>>>
>>>>
>>>> 753f618 RDMA/cma: Add support for netlink statistics export
>>>> b2cbae2 RDMA: Add netlink infrastructure
>>>>
>>>>

Upinder Malhi (umalhi) Jan. 13, 2014, 10:14 p.m. UTC | #6
We are happy to use netlink if it makes sense.  What you are saying is: netlink is used all over the kernel.  Because it is used all over the kernel, it must be the right answer for all userspace/kernel interactions.  Therefore, if something doesn't use netlink, it must be wrong (e.g. procfs, sysfs, debugfs).

We want to put the rx byte/packet count in debugfs because it is intended for developers.  We do not want to put these in a common infrastructure because, as we have found internally, lacking tx statistics confuses users.  We are being asked to write the ib core statistics infrastructure which, if it were available today, we would not use.  So unless someone else jumps in here, we are going to keep these in debugfs.

Upinder

On Jan 12, 2014, at 1:32 AM, Or Gerlitz <ogerlitz@mellanox.com> wrote:

> On 08/01/2014 20:57, Upinder Malhi (umalhi) wrote:
>> Or,
>> 	Yeah, I did think about extending the existing infrastructure to export HW specific stats and exposing some stats via standard infrastructure.  Besides the below, there are few other drawbacks with exposing statistics via netlink:
>> 1) Having a two part solution, users space and kernel, will make changing the code more difficult.  Anytime another attributed is exposed, code in the kernel needs to be added to handle backwards compatibility with userspace (as I said we are going to add more stuff incrementally).
> 
> There are thousands if not millions LOCs over the kernel and user space tools which use netlink. Indeed when you have two for tango you sometimes change one side, sometimes the other and sometimes both.  A claim that "its easier to maintain  things when all the code resides in the kernel" can't be really taken seriously into account.  Netlink is used all over the place, so everyone's wrong?

> 
> 
> 
>> 2)  The Cisco VIC series cards, that is our NIC, cannot do flow stats well.  Specially, it only reports Rx byte count for a flow and doesn't report any statistics on the Tx side.  Hence, exposing these via  a standard interface to a user is going to be confusing and misleading.
> 
> 1st use the standard/existing interface to report the open sessions and later we'll take it from there re the byte counts.
> 
> 
>> 
>> Hence, at least for Cisco VIC, we want to keep these flow stats in debugfs where they can be easily extended and extra effort is required to get to them.
>> 
>> Upinder
>> 
>> On Jan 8, 2014, at 1:13 AM, Or Gerlitz <ogerlitz@mellanox.com> wrote:
>> 
>>> On 08/01/2014 00:29, Upinder Malhi (umalhi) wrote:
>>>> Or, The flows contain Cisco VIC specific stuff - Ex. the hardware flow id; and they will contain more cisco specific things.  Hence, they are exported via debugfs.
>>> You should be able to enhance the rdma netlink infrastructure to allow for exporting HW dependent attributes to user space -- did you look on that?
>>> 
>>> Also, you should make sure to expose the non HW specific attributes of the sessions through the standard infrastructure.
>>> 
>>> Or.
>>> 
>>> 
>>> 
>>> 
>>>> Upinder
>>>> 
>>>> On Dec 22, 2013, at 2:23 AM, Or Gerlitz <ogerlitz@mellanox.com> wrote:
>>>> 
>>>>> On 20/12/2013 23:37, Upinder Malhi (umalhi) wrote:
>>>>>> This patch depends on http://www.spinics.net/lists/linux-rdma/msg18193.html
>>>>> Why use proprietary debugfs code  to display flows? you can (and should) use the rdma subsystem netlink infrastructure for that, see these
>>>>> two commits
>>>>> 
>>>>> 
>>>>> 753f618 RDMA/cma: Add support for netlink statistics export
>>>>> b2cbae2 RDMA: Add netlink infrastructure
>>>>> 
>>>>> 
> 

Or Gerlitz Jan. 13, 2014, 11:39 p.m. UTC | #7
On Tue, Jan 14, 2014 at 12:14 AM, Upinder Malhi (umalhi)
<umalhi@cisco.com> wrote:
> We are happy to use netlink if it makes sense.  What you are saying is: Netlink is used all over the kernel.  Bc it is used all over the kernel, it must be the right answer for all userspace/kernel interactions.  Therefore, if something doesn't use netlink, it must be wrong (ex. procfs, sysfs, debugfs).
>
> We want to put the rx byte/packet count in debug fs because it is intended for developers.  We do not want to put these in a common infrastructure bc as we have found internally, lacking tx statistics confuses users.  We are being asked to write the ib core statistics infrastructure, that if it was available today, we would not use.  So unless someone else jumps in here, we are going to keep these in the debug fs.

What's wrong with exposing the flow tuples via netlink and then seeing
what's missing that justifies debugfs?
Upinder Malhi (umalhi) Jan. 14, 2014, 12:13 a.m. UTC | #8
Understanding what's missing is important.

It doesn't require prototyping netlink statistics in the ib subsystem.  The latest Cisco VIC adapters only report rx byte/pkt counts on flows.  Users expect networking statistics to report both tx and rx counts; if a tx or rx counter is 0 or missing altogether, the user *assumes* that the counter really is 0.  Because Cisco VIC adapters only report rx byte/pkt counts, these statistics would confuse/mislead the user if exposed through a standard interface.  Hence, we do not wish to report them through any tools easily accessible to the user.  The statistics being exposed are for developers, who understand the caveats, for debugging, and the most apt interface for that purpose is debugfs.

Upinder

On Jan 13, 2014, at 3:39 PM, Or Gerlitz <or.gerlitz@gmail.com> wrote:

> On Tue, Jan 14, 2014 at 12:14 AM, Upinder Malhi (umalhi)
> <umalhi@cisco.com> wrote:
>> We are happy to use netlink if it makes sense.  What you are saying is: Netlink is used all over the kernel.  Bc it is used all over the kernel, it must be the right answer for all userspace/kernel interactions.  Therefore, if something doesn't use netlink, it must be wrong (ex. procfs, sysfs, debugfs).
>> 
>> We want to put the rx byte/packet count in debug fs because it is intended for developers.  We do not want to put these in a common infrastructure bc as we have found internally, lacking tx statistics confuses users.  We are being asked to write the ib core statistics infrastructure, that if it was available today, we would not use.  So unless someone else jumps in here, we are going to keep these in the debug fs.
> 
> What wrong with exposing the the flow tuples from netlink and then see
> what's missing that justified debugfs?


Patch

diff --git a/drivers/infiniband/hw/usnic/usnic_debugfs.c b/drivers/infiniband/hw/usnic/usnic_debugfs.c
index 91386df..6cb2e7c 100644
--- a/drivers/infiniband/hw/usnic/usnic_debugfs.c
+++ b/drivers/infiniband/hw/usnic/usnic_debugfs.c
@@ -22,8 +22,11 @@ 
#include "usnic.h"
#include "usnic_log.h"
#include "usnic_debugfs.h"
+#include "usnic_ib_qp_grp.h"
+#include "usnic_transport.h"

static struct dentry *debugfs_root;
+static struct dentry *flows_dentry;

static ssize_t usnic_debugfs_buildinfo_read(struct file *f, char __user *data,
						size_t count, loff_t *ppos)
@@ -48,17 +51,73 @@  static const struct file_operations usnic_debugfs_buildinfo_ops = {
	.read = usnic_debugfs_buildinfo_read
};

+static ssize_t flowinfo_read(struct file *f, char __user *data,
+				size_t count, loff_t *ppos)
+{
+	struct usnic_ib_qp_grp_flow *qp_flow;
+	int n;
+	int left;
+	char *ptr;
+	char buf[512];
+
+	qp_flow = f->private_data;
+	ptr = buf;
+	left = count;
+
+	if (*ppos > 0)
+		return 0;
+
+	spin_lock(&qp_flow->qp_grp->lock);
+	n = scnprintf(ptr, left,
+			"QP Grp ID: %d Transport: %s ",
+			qp_flow->qp_grp->grp_id,
+			usnic_transport_to_str(qp_flow->trans_type));
+	UPDATE_PTR_LEFT(n, ptr, left);
+	if (qp_flow->trans_type == USNIC_TRANSPORT_ROCE_CUSTOM) {
+		n = scnprintf(ptr, left, "Port_Num:%hu\n",
+					qp_flow->usnic_roce.port_num);
+		UPDATE_PTR_LEFT(n, ptr, left);
+	} else if (qp_flow->trans_type == USNIC_TRANSPORT_IPV4_UDP) {
+		n = usnic_transport_sock_to_str(ptr, left,
+				qp_flow->udp.sock);
+		UPDATE_PTR_LEFT(n, ptr, left);
+		n = scnprintf(ptr, left, "\n");
+		UPDATE_PTR_LEFT(n, ptr, left);
+	}
+	spin_unlock(&qp_flow->qp_grp->lock);
+
+	return simple_read_from_buffer(data, count, ppos, buf, ptr - buf);
+}
+
+static const struct file_operations flowinfo_ops = {
+	.owner = THIS_MODULE,
+	.open = simple_open,
+	.read = flowinfo_read,
+};
+
void usnic_debugfs_init(void)
{
	debugfs_root = debugfs_create_dir(DRV_NAME, NULL);
	if (IS_ERR(debugfs_root)) {
		usnic_err("Failed to create debugfs root dir, check if debugfs is enabled in kernel configuration\n");
-		debugfs_root = NULL;
-		return;
+		goto out_clear_root;
+	}
+
+	flows_dentry = debugfs_create_dir("flows", debugfs_root);
+	if (IS_ERR_OR_NULL(flows_dentry)) {
+		usnic_err("Failed to create debugfs flow dir with err %ld\n",
+				PTR_ERR(flows_dentry));
+		goto out_free_root;
	}

	debugfs_create_file("build-info", S_IRUGO, debugfs_root,
				NULL, &usnic_debugfs_buildinfo_ops);
+	return;
+
+out_free_root:
+	debugfs_remove_recursive(debugfs_root);
+out_clear_root:
+	debugfs_root = NULL;
}

void usnic_debugfs_exit(void)
@@ -69,3 +128,31 @@  void usnic_debugfs_exit(void)
	debugfs_remove_recursive(debugfs_root);
	debugfs_root = NULL;
}
+
+void usnic_debugfs_flow_add(struct usnic_ib_qp_grp_flow *qp_flow)
+{
+	struct usnic_ib_qp_grp *qp_grp;
+
+	if (IS_ERR_OR_NULL(flows_dentry))
+		return;
+
+	qp_grp = qp_flow->qp_grp;
+
+	scnprintf(qp_flow->dentry_name, sizeof(qp_flow->dentry_name),
+			"%u", qp_flow->flow->flow_id);
+	qp_flow->dbgfs_dentry = debugfs_create_file(qp_flow->dentry_name,
+							S_IRUGO,
+							flows_dentry,
+							qp_flow,
+							&flowinfo_ops);
+	if (IS_ERR_OR_NULL(qp_flow->dbgfs_dentry)) {
+		usnic_err("Failed to create dbg fs entry for flow %u\n",
+				qp_flow->flow->flow_id);
+	}
+}
+
+void usnic_debugfs_flow_remove(struct usnic_ib_qp_grp_flow *qp_flow)
+{
+	if (!IS_ERR_OR_NULL(qp_flow->dbgfs_dentry))
+		debugfs_remove(qp_flow->dbgfs_dentry);
+}
diff --git a/drivers/infiniband/hw/usnic/usnic_debugfs.h b/drivers/infiniband/hw/usnic/usnic_debugfs.h
index 914a330..4087d24 100644
--- a/drivers/infiniband/hw/usnic/usnic_debugfs.h
+++ b/drivers/infiniband/hw/usnic/usnic_debugfs.h
@@ -18,8 +18,12 @@ 
#ifndef USNIC_DEBUGFS_H_
#define USNIC_DEBUGFS_H_

+#include "usnic_ib_qp_grp.h"
+
void usnic_debugfs_init(void);

void usnic_debugfs_exit(void);
+void usnic_debugfs_flow_add(struct usnic_ib_qp_grp_flow *qp_flow);
+void usnic_debugfs_flow_remove(struct usnic_ib_qp_grp_flow *qp_flow);

#endif /*!USNIC_DEBUGFS_H_ */
diff --git a/drivers/infiniband/hw/usnic/usnic_ib.h b/drivers/infiniband/hw/usnic/usnic_ib.h
index 92d9d9a..111a86e 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib.h
+++ b/drivers/infiniband/hw/usnic/usnic_ib.h
@@ -109,4 +109,11 @@  struct usnic_ib_mr *to_umr(struct ib_mr *ibmr)
	return container_of(ibmr, struct usnic_ib_mr, ibmr);
}
void usnic_ib_log_vf(struct usnic_ib_vf *vf);
+
+#define UPDATE_PTR_LEFT(N, P, L)			\
+do {							\
+	L -= (N);					\
+	P += (N);					\
+} while (0)
+
#endif /* USNIC_IB_H_ */
diff --git a/drivers/infiniband/hw/usnic/usnic_ib_qp_grp.c b/drivers/infiniband/hw/usnic/usnic_ib_qp_grp.c
index 71efb8d..6d746d3 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib_qp_grp.c
+++ b/drivers/infiniband/hw/usnic/usnic_ib_qp_grp.c
@@ -24,6 +24,7 @@ 
#include "usnic_vnic.h"
#include "usnic_fwd.h"
#include "usnic_uiom.h"
+#include "usnic_debugfs.h"
#include "usnic_ib_qp_grp.h"
#include "usnic_ib_sysfs.h"
#include "usnic_transport.h"
@@ -340,8 +341,10 @@  create_and_add_flow(struct usnic_ib_qp_grp *qp_grp,
		return ERR_PTR(-EINVAL);
	}

-	if (!IS_ERR_OR_NULL(qp_flow))
+	if (!IS_ERR_OR_NULL(qp_flow)) {
		list_add_tail(&qp_flow->link, &qp_grp->flows_lst);
+		usnic_debugfs_flow_add(qp_flow);
+	}


	return qp_flow;
@@ -349,6 +352,7 @@  create_and_add_flow(struct usnic_ib_qp_grp *qp_grp,

static void release_and_remove_flow(struct usnic_ib_qp_grp_flow *qp_flow)
{
+	usnic_debugfs_flow_remove(qp_flow);
	list_del(&qp_flow->link);

	switch (qp_flow->trans_type) {
@@ -728,9 +732,9 @@  void usnic_ib_qp_grp_destroy(struct usnic_ib_qp_grp *qp_grp)
	WARN_ON(qp_grp->state != IB_QPS_RESET);
	WARN_ON(!spin_is_locked(&qp_grp->vf->lock));

+	release_and_remove_all_flows(qp_grp);
	usnic_ib_sysfs_qpn_remove(qp_grp);
	qp_grp_and_vf_unbind(qp_grp);
-	release_and_remove_all_flows(qp_grp);
	free_qp_grp_res(qp_grp->res_chunk_list);
	kfree(qp_grp);
}
diff --git a/drivers/infiniband/hw/usnic/usnic_ib_qp_grp.h b/drivers/infiniband/hw/usnic/usnic_ib_qp_grp.h
index a8ba1b9..b0aafe8 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib_qp_grp.h
+++ b/drivers/infiniband/hw/usnic/usnic_ib_qp_grp.h
@@ -19,6 +19,7 @@ 
#ifndef USNIC_IB_QP_GRP_H_
#define USNIC_IB_QP_GRP_H_

+#include <linux/debugfs.h>
#include <rdma/ib_verbs.h>

#include "usnic_ib.h"
@@ -62,6 +63,10 @@  struct usnic_ib_qp_grp_flow {
	};
	struct usnic_ib_qp_grp		*qp_grp;
	struct list_head		link;
+
+	/* Debug FS */
+	struct dentry			*dbgfs_dentry;
+	char				dentry_name[32];
};

static const struct
diff --git a/drivers/infiniband/hw/usnic/usnic_ib_sysfs.c b/drivers/infiniband/hw/usnic/usnic_ib_sysfs.c
index 3e58842..27dc67c 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib_sysfs.c
+++ b/drivers/infiniband/hw/usnic/usnic_ib_sysfs.c
@@ -30,12 +30,6 @@ 
#include "usnic_ib_verbs.h"
#include "usnic_log.h"

-#define UPDATE_PTR_LEFT(N, P, L)			\
-do {							\
-	L -= (N);					\
-	P += (N);					\
-} while (0)
-
static ssize_t usnic_ib_show_fw_ver(struct device *device,
					struct device_attribute *attr,
					char *buf)