Message ID | 20200603101738.159637-1-kamalheib1@gmail.com (mailing list archive)
---|---
State | Changes Requested
Series | [for-rc] RDMA/rxe: Fix QP cleanup flow
-----Original Message-----
From: Kamal Heib <kamalheib1@gmail.com>
Sent: Wednesday, June 3, 2020 6:18 PM
To: linux-rdma@vger.kernel.org
Cc: Doug Ledford <dledford@redhat.com>; Jason Gunthorpe <jgg@ziepe.ca>; Yanjun Zhu <yanjunz@mellanox.com>; Kamal Heib <kamalheib1@gmail.com>
Subject: [PATCH for-rc] RDMA/rxe: Fix QP cleanup flow

Releasing the socket associated with each QP can sleep, so avoid doing
it in rxe_qp_cleanup() and move it to rxe_destroy_qp() instead. After
this change there is no need for the execute_work that was used to
avoid calling rxe_qp_cleanup() directly. Also check that the socket is
valid in rxe_skb_tx_dtor() to avoid a use-after-free.

Fixes: 8700e3e7c485 ("Soft RoCE driver")
Fixes: bb3ffb7ad48a ("RDMA/rxe: Fix rxe_qp_cleanup()")
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_net.c   | 14 ++++++++++++--
 drivers/infiniband/sw/rxe/rxe_qp.c    | 22 ++++++----------------
 drivers/infiniband/sw/rxe/rxe_verbs.h |  3 ---
 3 files changed, 18 insertions(+), 21 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index 312c2fc961c0..298ccd3fd3e2 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -411,8 +411,18 @@ int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb, u32 *crc)
 static void rxe_skb_tx_dtor(struct sk_buff *skb)
 {
 	struct sock *sk = skb->sk;
-	struct rxe_qp *qp = sk->sk_user_data;
-	int skb_out = atomic_dec_return(&qp->skb_out);
+	struct rxe_qp *qp;
+	int skb_out;
+
+	if (!sk)
+		return;
+
+	qp = sk->sk_user_data;
+
+	if (!qp)
+		return;
+
+	skb_out = atomic_dec_return(&qp->skb_out);
 
 	if (unlikely(qp->need_req_skb &&
 		     skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW))
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 6c11c3aeeca6..89dac6c1111c 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -787,6 +787,7 @@ void rxe_qp_destroy(struct rxe_qp *qp)
 	if (qp_type(qp) == IB_QPT_RC) {
 		del_timer_sync(&qp->retrans_timer);
 		del_timer_sync(&qp->rnr_nak_timer);
+		sk_dst_reset(qp->sk->sk);
 	}
 
 	rxe_cleanup_task(&qp->req.task);
@@ -798,12 +799,15 @@ void rxe_qp_destroy(struct rxe_qp *qp)
 		__rxe_do_task(&qp->comp.task);
 		__rxe_do_task(&qp->req.task);
 	}
+
+	kernel_sock_shutdown(qp->sk, SHUT_RDWR);
+	sock_release(qp->sk);
 }
 
 /* called when the last reference to the qp is dropped */
-static void rxe_qp_do_cleanup(struct work_struct *work)
+void rxe_qp_cleanup(struct rxe_pool_entry *arg)
 {
-	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
+	struct rxe_qp *qp = container_of(arg, typeof(*qp), pelem);
 
 	rxe_drop_all_mcast_groups(qp);
 
@@ -828,19 +832,5 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
 		qp->resp.mr = NULL;
 	}
 
-	if (qp_type(qp) == IB_QPT_RC)
-		sk_dst_reset(qp->sk->sk);
-
 	free_rd_atomic_resources(qp);
-
-	kernel_sock_shutdown(qp->sk, SHUT_RDWR);
-	sock_release(qp->sk);
-}
-
-/* called when the last reference to the qp is dropped */
-void rxe_qp_cleanup(struct rxe_pool_entry *arg)
-{
-	struct rxe_qp *qp = container_of(arg, typeof(*qp), pelem);
-
-	execute_in_process_context(rxe_qp_do_cleanup, &qp->cleanup_work);
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 92de39c4a7c1..339debaf095f 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -35,7 +35,6 @@
 #define RXE_VERBS_H
 
 #include <linux/interrupt.h>
-#include <linux/workqueue.h>
 #include <rdma/rdma_user_rxe.h>
 #include "rxe_pool.h"
 #include "rxe_task.h"
@@ -285,8 +284,6 @@ struct rxe_qp {
 	struct timer_list	rnr_nak_timer;
 
 	spinlock_t		state_lock; /* guard requester and completer */
-
-	struct execute_work	cleanup_work;
 };
 
 enum rxe_mem_state {
-- 
2.25.4
On Fri, Jun 12, 2020 at 4:31 PM Yanjun Zhu <yanjunz@mellanox.com> wrote:
>
> -----Original Message-----
> From: Kamal Heib <kamalheib1@gmail.com>
> Sent: Wednesday, June 3, 2020 6:18 PM
> To: linux-rdma@vger.kernel.org
> Cc: Doug Ledford <dledford@redhat.com>; Jason Gunthorpe <jgg@ziepe.ca>; Yanjun Zhu <yanjunz@mellanox.com>; Kamal Heib <kamalheib1@gmail.com>
> Subject: [PATCH for-rc] RDMA/rxe: Fix QP cleanup flow
>
> Releasing the socket associated with each QP can sleep, so avoid doing
> it in rxe_qp_cleanup() and move it to rxe_destroy_qp() instead. After
> this change there is no need for the execute_work that was used to
> avoid calling rxe_qp_cleanup() directly. Also check that the socket is
> valid in rxe_skb_tx_dtor() to avoid a use-after-free.
>
> Fixes: 8700e3e7c485 ("Soft RoCE driver")
> Fixes: bb3ffb7ad48a ("RDMA/rxe: Fix rxe_qp_cleanup()")
> Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
> ---
> diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
> index 312c2fc961c0..298ccd3fd3e2 100644
> --- a/drivers/infiniband/sw/rxe/rxe_net.c
> +++ b/drivers/infiniband/sw/rxe/rxe_net.c
> @@ -411,8 +411,18 @@ int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb, u32 *crc)
>  static void rxe_skb_tx_dtor(struct sk_buff *skb)
>  {
>  	struct sock *sk = skb->sk;
> -	struct rxe_qp *qp = sk->sk_user_data;
> -	int skb_out = atomic_dec_return(&qp->skb_out);
> +	struct rxe_qp *qp;
> +	int skb_out;
> +
> +	if (!sk)

When does sk become NULL?

> +		return;
> +
> +	qp = sk->sk_user_data;
> +
> +	if (!qp)

When does qp become NULL?

> +		return;
> +
> +	skb_out = atomic_dec_return(&qp->skb_out);
>
>  	if (unlikely(qp->need_req_skb &&
>  		     skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW))
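For readers wondering the same thing: the common path that does clear skb->sk, skb_orphan(), calls the destructor before clearing the pointer, so rxe_skb_tx_dtor() would still see a non-NULL sk there. Paraphrased from include/linux/skbuff.h of that era, shown for context only and not part of the patch:

/* Paraphrased from include/linux/skbuff.h (circa v5.7); context only. */
static inline void skb_orphan(struct sk_buff *skb)
{
	if (skb->destructor) {
		skb->destructor(skb);	/* rxe_skb_tx_dtor() runs here, skb->sk still set */
		skb->destructor = NULL;
		skb->sk		= NULL;	/* cleared only after the destructor has returned */
	} else {
		BUG_ON(skb->sk);
	}
}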
On Wed, Jun 03, 2020 at 01:17:38PM +0300, Kamal Heib wrote:
> Releasing the socket associated with each QP can sleep, so avoid doing
> it in rxe_qp_cleanup() and move it to rxe_destroy_qp() instead. After
> this change there is no need for the execute_work that was used to
> avoid calling rxe_qp_cleanup() directly. Also check that the socket is
> valid in rxe_skb_tx_dtor() to avoid a use-after-free.
>
> Fixes: 8700e3e7c485 ("Soft RoCE driver")
> Fixes: bb3ffb7ad48a ("RDMA/rxe: Fix rxe_qp_cleanup()")
> Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
> ---

This will require more work; please drop this patch.

Nacked-by: Kamal Heib <kamalheib1@gmail.com>
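The rework hinted at here would have to keep the objects that rxe_skb_tx_dtor() touches alive for as long as an skb is in flight. A minimal sketch of that direction, assuming the transmit path can take a reference when it tags the skb (rxe_tag_tx_skb() is an invented helper name, and this is not the fix that was eventually merged):

/*
 * Illustrative sketch only: pin the sock (and, in a complete fix, the QP
 * behind sk_user_data as well) for every in-flight skb, so the destructor
 * can never run against freed memory.
 */
static void rxe_skb_tx_dtor(struct sk_buff *skb)
{
	struct sock *sk = skb->sk;		/* kept alive by the reference taken below */
	struct rxe_qp *qp = sk->sk_user_data;	/* would need its own reference, too */

	atomic_dec(&qp->skb_out);
	/* existing need_req_skb / RXE_INFLIGHT_SKBS_PER_QP_LOW wakeup elided */

	sock_put(sk);				/* drop the per-skb reference; may free the sock */
}

static void rxe_tag_tx_skb(struct sk_buff *skb, struct rxe_qp *qp)
{
	sock_hold(qp->sk->sk);			/* one reference per in-flight skb */
	skb->sk = qp->sk->sk;
	skb->destructor = rxe_skb_tx_dtor;
	atomic_inc(&qp->skb_out);
}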
On Fri, Jun 12, 2020 at 04:32:51PM +0800, Zhu Yanjun wrote:
> On Fri, Jun 12, 2020 at 4:31 PM Yanjun Zhu <yanjunz@mellanox.com> wrote:
> > @@ -411,8 +411,18 @@ int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb, u32 *crc)
> >  static void rxe_skb_tx_dtor(struct sk_buff *skb)
> >  {
> >  	struct sock *sk = skb->sk;
> > -	struct rxe_qp *qp = sk->sk_user_data;
> > -	int skb_out = atomic_dec_return(&qp->skb_out);
> > +	struct rxe_qp *qp;
> > +	int skb_out;
> > +
> > +	if (!sk)
>
> When does sk become NULL?

Looks like the sk isn't set to NULL when it gets released... This
change will require more work; please drop this patch.

Thanks,
Kamal
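Kamal's point can be demonstrated outside the kernel: freeing an object does not change the value of pointers that still refer to it, so a NULL check in the destructor never fires. A self-contained user-space model in plain C (fake_sock, fake_skb and tx_dtor_model are invented stand-ins for struct sock, struct sk_buff and rxe_skb_tx_dtor, not driver code):

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins for struct sock and struct sk_buff; not kernel types. */
struct fake_sock {
	void *sk_user_data;			/* would point at the rxe QP */
};

struct fake_skb {
	struct fake_sock *sk;			/* set at transmit time, no reference taken */
	void (*destructor)(struct fake_skb *skb);
};

/* Models rxe_skb_tx_dtor(): runs whenever the skb is finally freed. */
static void tx_dtor_model(struct fake_skb *skb)
{
	if (!skb->sk) {				/* the check added by the patch */
		printf("destructor: sk is NULL, nothing to do\n");
		return;
	}
	/*
	 * If the socket was already freed, this read is the use-after-free:
	 * skb->sk still holds the stale pointer, it was never set to NULL.
	 */
	printf("destructor: sk_user_data = %p\n", skb->sk->sk_user_data);
}

int main(void)
{
	struct fake_sock *sock = calloc(1, sizeof(*sock));
	struct fake_skb skb = { .sk = sock, .destructor = tx_dtor_model };

	free(sock);			/* models sock_release() during QP teardown */
	skb.destructor(&skb);		/* models kfree_skb() running the destructor later */
	return 0;
}

Running this always reaches the second printf: the NULL-sk branch is dead code for this scenario, which is why the added checks cannot prevent the use-after-free.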