From patchwork Thu Mar 14 00:11:50 2019
X-Patchwork-Submitter: NeilBrown
X-Patchwork-Id: 10851979
From: NeilBrown
To: Andreas Dilger, James Simmons, Oleg Drokin
Date: Thu, 14 Mar 2019 11:11:50
 +1100
Message-ID: <155252231013.26912.8330785791899165016.stgit@noble.brown>
In-Reply-To: <155252182126.26912.1842463462595601611.stgit@noble.brown>
References: <155252182126.26912.1842463462595601611.stgit@noble.brown>
Subject: [lustre-devel] [PATCH 13/32] lustre: lnet: always put a page list into struct lnet_libmd.
Cc: Lustre Development List

'struct lnet_libmd' is only created in lnet_md_build().  It can be
given a list of pages or a virtual address.  In the latter case, the
memory will eventually be split into a list of pages.  It is cleaner
to split it into a list of pages early so that all lower levels only
need to handle one type: a page list.
Signed-off-by: NeilBrown
---
 drivers/staging/lustre/lnet/lnet/lib-md.c | 43 ++++++++++++++++++++---------
 1 file changed, 29 insertions(+), 14 deletions(-)

diff --git a/drivers/staging/lustre/lnet/lnet/lib-md.c b/drivers/staging/lustre/lnet/lnet/lib-md.c
index 26c560a1e8b9..970db903552d 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-md.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-md.c
@@ -180,14 +180,13 @@ lnet_md_build(struct lnet_md *umd, int unlink)
 	struct lnet_libmd *lmd;
 	unsigned int size;
 
-	if (umd->options & LNET_MD_KIOV) {
+	if (umd->options & LNET_MD_KIOV)
 		niov = umd->length;
-		size = offsetof(struct lnet_libmd, md_iov.kiov[niov]);
-	} else {
-		niov = 1;
-		size = offsetof(struct lnet_libmd, md_iov.iov[niov]);
-	}
+	else
+		niov = DIV_ROUND_UP(offset_in_page(umd->start) + umd->length,
+				    PAGE_SIZE);
+	size = offsetof(struct lnet_libmd, md_iov.kiov[niov]);
 
 	lmd = kzalloc(size, GFP_NOFS);
 	if (!lmd)
 		return ERR_PTR(-ENOMEM);
@@ -208,8 +207,6 @@ lnet_md_build(struct lnet_md *umd, int unlink)
 	lmd->md_bulk_handle = umd->bulk_handle;
 
 	if (umd->options & LNET_MD_KIOV) {
-		niov = umd->length;
-		lmd->md_niov = umd->length;
 		memcpy(lmd->md_iov.kiov, umd->start,
 		       niov * sizeof(lmd->md_iov.kiov[0]));
@@ -232,12 +229,29 @@ lnet_md_build(struct lnet_md *umd, int unlink)
 			kfree(lmd);
 			return ERR_PTR(-EINVAL);
 		}
-	} else { /* contiguous */
-		lmd->md_length = umd->length;
-		niov = 1;
-		lmd->md_niov = 1;
-		lmd->md_iov.iov[0].iov_base = umd->start;
-		lmd->md_iov.iov[0].iov_len = umd->length;
+	} else { /* contiguous - split into pages */
+		void *pa = umd->start;
+		int len = umd->length;
+
+		lmd->md_length = len;
+		i = 0;
+		while (len) {
+			struct page *p;
+			int plen;
+
+			if (is_vmalloc_addr(pa))
+				p = vmalloc_to_page(pa);
+			else
+				p = virt_to_page(pa);
+			plen = min_t(int, len, PAGE_SIZE - offset_in_page(pa));
+
+			lmd->md_iov.kiov[i].bv_page = p;
+			lmd->md_iov.kiov[i].bv_offset = offset_in_page(pa);
+			lmd->md_iov.kiov[i].bv_len = plen;
+			len -= plen;
+			pa += plen;
+			i += 1;
+		}
 		if ((umd->options & LNET_MD_MAX_SIZE) &&	/* max size used */
 		    (umd->max_size < 0 ||
@@ -245,6 +259,7 @@
 			kfree(lmd);
 			return ERR_PTR(-EINVAL);
 		}
+		lmd->md_options |= LNET_MD_KIOV;
 	}
 
 	return lmd;