From patchwork Fri Oct 19 16:04:41 2018
X-Patchwork-Submitter: Jerome Glisse
X-Patchwork-Id: 10649725
From: jglisse@redhat.com
To: linux-mm@kvack.org
Cc: Andrew Morton, linux-kernel@vger.kernel.org, Jérôme Glisse, Ralph Campbell, John Hubbard
Subject: [PATCH 5/6] mm/hmm: use a structure for update callback parameters v2
Date: Fri, 19 Oct 2018 12:04:41 -0400
Message-Id: <20181019160442.18723-6-jglisse@redhat.com>
In-Reply-To: <20181019160442.18723-1-jglisse@redhat.com>
References: <20181019160442.18723-1-jglisse@redhat.com>

From: Jérôme Glisse

Use a structure to gather all the parameters for the update callback.
This makes it easier to add new parameters, as it avoids having to
update every callback's function signature.

The hmm_update structure is always associated with an mmu_notifier
callback, so we are not planning on grouping multiple updates together.
Nor do we care about page size for the range, as the range will always
fully cover the pages being invalidated (this is an mmu_notifier
property).
Changed since v1:
    - support for blockable mmu_notifier flags
    - improved commit log

Signed-off-by: Jérôme Glisse
Cc: Ralph Campbell
Cc: John Hubbard
Cc: Andrew Morton
---
 include/linux/hmm.h | 31 ++++++++++++++++++++++---------
 mm/hmm.c            | 33 ++++++++++++++++++++++-----------
 2 files changed, 44 insertions(+), 20 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 1ff4bae7ada7..afc04dbbaf2f 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -274,13 +274,28 @@ static inline uint64_t hmm_pfn_from_pfn(const struct hmm_range *range,
 struct hmm_mirror;
 
 /*
- * enum hmm_update_type - type of update
+ * enum hmm_update_event - type of update
  * @HMM_UPDATE_INVALIDATE: invalidate range (no indication as to why)
  */
-enum hmm_update_type {
+enum hmm_update_event {
 	HMM_UPDATE_INVALIDATE,
 };
 
+/*
+ * struct hmm_update - HMM update information for callback
+ *
+ * @start: virtual start address of the range to update
+ * @end: virtual end address of the range to update
+ * @event: event triggering the update (what is happening)
+ * @blockable: can the callback block/sleep?
+ */
+struct hmm_update {
+	unsigned long start;
+	unsigned long end;
+	enum hmm_update_event event;
+	bool blockable;
+};
+
 /*
  * struct hmm_mirror_ops - HMM mirror device operations callback
  *
@@ -300,9 +315,9 @@ struct hmm_mirror_ops {
 	/* sync_cpu_device_pagetables() - synchronize page tables
 	 *
 	 * @mirror: pointer to struct hmm_mirror
-	 * @update_type: type of update that occurred to the CPU page table
-	 * @start: virtual start address of the range to update
-	 * @end: virtual end address of the range to update
+	 * @update: update information (see struct hmm_update)
+	 * Returns: -EAGAIN if update.blockable is false and the callback
+	 *          needs to block, 0 otherwise.
 	 *
 	 * This callback ultimately originates from mmu_notifiers when the CPU
 	 * page table is updated. The device driver must update its page table
@@ -313,10 +328,8 @@ struct hmm_mirror_ops {
 	 * page tables are completely updated (TLBs flushed, etc); this is a
 	 * synchronous call.
 	 */
-	void (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
-					   enum hmm_update_type update_type,
-					   unsigned long start,
-					   unsigned long end);
+	int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
+					  const struct hmm_update *update);
 };
 
 /*
diff --git a/mm/hmm.c b/mm/hmm.c
index a7aff319bc5a..0eacf9627bc9 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -126,10 +126,8 @@ void hmm_mm_destroy(struct mm_struct *mm)
 	kfree(mm->hmm);
 }
 
-static void hmm_invalidate_range(struct hmm *hmm,
-				 enum hmm_update_type action,
-				 unsigned long start,
-				 unsigned long end)
+static int hmm_invalidate_range(struct hmm *hmm,
+				const struct hmm_update *update)
 {
 	struct hmm_mirror *mirror;
 	struct hmm_range *range;
@@ -138,22 +136,30 @@ static void hmm_invalidate_range(struct hmm *hmm,
 	list_for_each_entry(range, &hmm->ranges, list) {
 		unsigned long addr, idx, npages;
 
-		if (end < range->start || start >= range->end)
+		if (update->end < range->start || update->start >= range->end)
 			continue;
 
 		range->valid = false;
-		addr = max(start, range->start);
+		addr = max(update->start, range->start);
 		idx = (addr - range->start) >> PAGE_SHIFT;
-		npages = (min(range->end, end) - addr) >> PAGE_SHIFT;
+		npages = (min(range->end, update->end) - addr) >> PAGE_SHIFT;
 		memset(&range->pfns[idx], 0, sizeof(*range->pfns) * npages);
 	}
 	spin_unlock(&hmm->lock);
 
 	down_read(&hmm->mirrors_sem);
-	list_for_each_entry(mirror, &hmm->mirrors, list)
-		mirror->ops->sync_cpu_device_pagetables(mirror, action,
-							start, end);
+	list_for_each_entry(mirror, &hmm->mirrors, list) {
+		int ret;
+
+		ret = mirror->ops->sync_cpu_device_pagetables(mirror, update);
+		if (!update->blockable && ret == -EAGAIN) {
+			up_read(&hmm->mirrors_sem);
+			return -EAGAIN;
+		}
+	}
 	up_read(&hmm->mirrors_sem);
+
+	return 0;
 }
 
 static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
@@ -202,11 +208,16 @@ static void hmm_invalidate_range_end(struct mmu_notifier *mn,
 				     struct mm_struct *mm,
 				     unsigned long start,
 				     unsigned long end)
 {
+	struct hmm_update update;
 	struct hmm *hmm = mm->hmm;
 
 	VM_BUG_ON(!hmm);
 
-	hmm_invalidate_range(mm->hmm, HMM_UPDATE_INVALIDATE, start, end);
+	update.start = start;
+	update.end = end;
+	update.event = HMM_UPDATE_INVALIDATE;
+	update.blockable = true;
+	hmm_invalidate_range(hmm, &update);
 }
 
 static const struct mmu_notifier_ops hmm_mmu_notifier_ops = {
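The behavioral change the diff encodes (the callback may now refuse a non-blockable invalidation with -EAGAIN instead of sleeping) can be sketched in plain user-space C. Everything below is a hypothetical illustration: `demo_mirror` and its `device_busy` flag stand in for driver-private state and are not part of the kernel API.

```c
/*
 * User-space sketch of the new sync_cpu_device_pagetables() contract:
 * the callback receives a const struct hmm_update * and must return
 * -EAGAIN, rather than block, when update->blockable is false.
 * demo_mirror is a hypothetical stand-in for driver state.
 */
#include <errno.h>
#include <stdbool.h>

enum hmm_update_event {
	HMM_UPDATE_INVALIDATE,
};

struct hmm_update {
	unsigned long start;		/* virtual start address of the range */
	unsigned long end;		/* virtual end address of the range */
	enum hmm_update_event event;
	bool blockable;			/* may the callback block/sleep? */
};

/* Hypothetical per-driver mirror state. */
struct demo_mirror {
	bool device_busy;		/* stands in for a contended device lock */
	int invalidations;		/* ranges actually torn down */
};

/*
 * Models the callback semantics: if servicing the update would require
 * blocking and the notifier context forbids it, bail out with -EAGAIN so
 * the dispatch loop (hmm_invalidate_range() in the patch) can propagate
 * the error; otherwise tear the range down and return 0.
 */
static int demo_sync_cpu_device_pagetables(struct demo_mirror *mirror,
					   const struct hmm_update *update)
{
	if (mirror->device_busy) {
		if (!update->blockable)
			return -EAGAIN;
		mirror->device_busy = false;	/* pretend we slept on the lock */
	}
	if (update->event == HMM_UPDATE_INVALIDATE)
		mirror->invalidations++;
	return 0;
}
```

Note the asymmetry this models: a blockable update always succeeds (the driver may wait for its device), while a non-blockable one fails fast and leaves device state untouched, matching how hmm_invalidate_range() only propagates -EAGAIN when update->blockable is false.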