From patchwork Mon Jan 11 17:56:47 2016
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 8007841
From: ira.weiny@intel.com
To: dledford@redhat.com, linux-rdma@vger.kernel.org
Cc: gregkh@linuxfoundation.org, devel@driverdev.osuosl.org, Mitko Haralanov
Subject: [RESUBMIT PATCH v2 11/14] staging/rdma/hfi1: Add MMU notifier callback function
Date: Mon, 11 Jan 2016 12:56:47 -0500
Message-Id: <1452535010-14087-12-git-send-email-ira.weiny@intel.com>
In-Reply-To: <1452535010-14087-1-git-send-email-ira.weiny@intel.com>
References: <1452535010-14087-1-git-send-email-ira.weiny@intel.com>

From: Mitko Haralanov

TID caching will rely on the MMU notifier to be told when memory is
being invalidated. When the callback is called, the driver will find
all RcvArray entries that span the invalidated buffer and "schedule"
them to be freed by the PSM library.

This function is currently unused and is being added in preparation
for the TID caching feature.
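[Editor's note: the callback body added below only runs once the notifier is
registered against the process's mm; that wiring lands elsewhere in this
series. A minimal sketch of the expected shape, in kernel C; the
MMU_INVALIDATE_* enum members, the mn_opts table, and the registration site
are assumptions for illustration, not part of this patch.]

#include <linux/mmu_notifier.h>

/*
 * Funnel both the per-page and the ranged notifier hooks into the one
 * handler this patch fills in; enum mmu_call_types tells it which path
 * fired (member names assumed).
 */
static void mmu_notifier_page(struct mmu_notifier *mn, struct mm_struct *mm,
			      unsigned long address)
{
	mmu_notifier_mem_invalidate(mn, address, address + PAGE_SIZE,
				    MMU_INVALIDATE_PAGE);
}

static void mmu_notifier_range_start(struct mmu_notifier *mn,
				     struct mm_struct *mm,
				     unsigned long start, unsigned long end)
{
	mmu_notifier_mem_invalidate(mn, start, end, MMU_INVALIDATE_RANGE);
}

static const struct mmu_notifier_ops mn_opts = {
	.invalidate_page	= mmu_notifier_page,
	.invalidate_range_start	= mmu_notifier_range_start,
};

/*
 * At context open (assumed site): fd->mn.ops = &mn_opts;
 * then mmu_notifier_register(&fd->mn, current->mm);
 */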
Signed-off-by: Mitko Haralanov
Reviewed-by: Ira Weiny
---
 drivers/staging/rdma/hfi1/user_exp_rcv.c | 67 +++++++++++++++++++++++++++++++-
 1 file changed, 65 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/rdma/hfi1/user_exp_rcv.c b/drivers/staging/rdma/hfi1/user_exp_rcv.c
index 843023e2e2c7..1787c55d21d6 100644
--- a/drivers/staging/rdma/hfi1/user_exp_rcv.c
+++ b/drivers/staging/rdma/hfi1/user_exp_rcv.c
@@ -104,7 +104,7 @@ static int set_rcvarray_entry(struct file *, unsigned long, u32,
 static inline int mmu_addr_cmp(struct mmu_rb_node *, unsigned long,
			       unsigned long);
 static struct mmu_rb_node *mmu_rb_search_by_addr(struct rb_root *,
-						 unsigned long) __maybe_unused;
+						 unsigned long);
 static inline struct mmu_rb_node *mmu_rb_search_by_entry(struct rb_root *,
							  u32);
 static int mmu_rb_insert_by_addr(struct rb_root *, struct mmu_rb_node *);
@@ -683,7 +683,70 @@ static void mmu_notifier_mem_invalidate(struct mmu_notifier *mn,
				       unsigned long start, unsigned long end,
				       enum mmu_call_types type)
 {
-	/* Stub for now */
+	struct hfi1_filedata *fd = container_of(mn, struct hfi1_filedata, mn);
+	struct hfi1_ctxtdata *uctxt = fd->uctxt;
+	struct rb_root *root = &fd->tid_rb_root;
+	struct mmu_rb_node *node;
+	unsigned long addr = start;
+
+	spin_lock(&fd->rb_lock);
+	while (addr < end) {
+		node = mmu_rb_search_by_addr(root, addr);
+
+		if (!node) {
+			/*
+			 * Didn't find a node at this address. However, the
+			 * range could be bigger than what we have registered
+			 * so we have to keep looking.
+			 */
+			addr += PAGE_SIZE;
+			continue;
+		}
+
+		/*
+		 * The next address to be looked up is computed based
+		 * on the node's starting address. This is due to the
+		 * fact that the range where we start might be in the
+		 * middle of the node's buffer so simply incrementing
+		 * the address by the node's size would result in a
+		 * bad address.
+		 */
+		addr = node->virt + (node->npages * PAGE_SIZE);
+		if (node->freed)
+			continue;
+
+		node->freed = true;
+
+		spin_lock(&fd->invalid_lock);
+		if (fd->invalid_tid_idx < uctxt->expected_count) {
+			fd->invalid_tids[fd->invalid_tid_idx] =
+				rcventry2tidinfo(node->rcventry -
+						 uctxt->expected_base);
+			fd->invalid_tids[fd->invalid_tid_idx] |=
+				EXP_TID_SET(LEN, node->npages);
+			if (!fd->invalid_tid_idx) {
+				unsigned long *ev;
+
+				/*
+				 * hfi1_set_uevent_bits() sets a user event flag
+				 * for all processes. Because calling into the
+				 * driver to process TID cache invalidations is
+				 * expensive and TID cache invalidations are
+				 * handled on a per-process basis, we can
+				 * optimize this to set the flag only for the
+				 * process in question.
+				 */
+				ev = uctxt->dd->events +
+					(((uctxt->ctxt -
+					   uctxt->dd->first_user_ctxt) *
+					  HFI1_MAX_SHARED_CTXTS) + fd->subctxt);
+				set_bit(_HFI1_EVENT_TID_MMU_NOTIFY_BIT, ev);
+			}
+			fd->invalid_tid_idx++;
+		}
+		spin_unlock(&fd->invalid_lock);
+	}
+	spin_unlock(&fd->rb_lock);
 }
 
 static inline int mmu_addr_cmp(struct mmu_rb_node *node, unsigned long addr,
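[Editor's note: the scan in the second hunk is worth a closer look. When no
node covers the current address the walk advances one PAGE_SIZE at a time,
but on a hit it jumps to the end of the node's buffer, computed from the
node's start, because the invalidation range may begin in the middle of a
registered buffer. Below is a small userspace model of just that walk; it
is not driver code: struct model_node, find_node() and the sample ranges
are illustrative stand-ins for the driver's RB tree.]

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

struct model_node {
	unsigned long virt;	/* start of the registered buffer */
	unsigned long npages;	/* buffer length in pages */
	bool freed;
};

static struct model_node nodes[] = {
	{ 0x1000, 4, false },	/* covers 0x1000 - 0x4fff */
	{ 0x8000, 2, false },	/* covers 0x8000 - 0x9fff */
};

/* Linear stand-in for mmu_rb_search_by_addr(): node containing addr, or NULL. */
static struct model_node *find_node(unsigned long addr)
{
	for (size_t i = 0; i < sizeof(nodes) / sizeof(nodes[0]); i++) {
		unsigned long end = nodes[i].virt + nodes[i].npages * PAGE_SIZE;

		if (addr >= nodes[i].virt && addr < end)
			return &nodes[i];
	}
	return NULL;
}

static void invalidate(unsigned long start, unsigned long end)
{
	unsigned long addr = start;

	while (addr < end) {
		struct model_node *node = find_node(addr);

		if (!node) {
			/* Nothing registered here, but a later node may
			 * still overlap the range: keep scanning. */
			addr += PAGE_SIZE;
			continue;
		}
		/* Jump from the node's start, not from addr: start may
		 * fall mid-buffer. */
		addr = node->virt + node->npages * PAGE_SIZE;
		if (!node->freed) {
			node->freed = true;
			printf("invalidated buffer at 0x%lx\n", node->virt);
		}
	}
}

int main(void)
{
	/* Invalidate a range starting mid-way through the first buffer. */
	invalidate(0x2000, 0xa000);
	return 0;
}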