From patchwork Fri Aug 24 19:25:49 2018
X-Patchwork-Submitter: Jerome Glisse
X-Patchwork-Id: 10575711
From: jglisse@redhat.com
To: linux-mm@kvack.org
Cc: Andrew Morton, linux-kernel@vger.kernel.org, Jérôme Glisse, Michal Hocko, Ralph Campbell, John Hubbard
Subject: [PATCH 7/7] mm/hmm:
proper support for blockable mmu_notifier
Date: Fri, 24 Aug 2018 15:25:49 -0400
Message-Id: <20180824192549.30844-8-jglisse@redhat.com>
In-Reply-To: <20180824192549.30844-1-jglisse@redhat.com>
References: <20180824192549.30844-1-jglisse@redhat.com>
MIME-Version: 1.0

From: Jérôme Glisse <jglisse@redhat.com>

When the mmu_notifier invalidate_range_start() callback is called with
blockable set to false we must not sleep. Properly propagate this to HMM
users.

Signed-off-by: Jérôme Glisse
Cc: Michal Hocko
Cc: Ralph Campbell
Cc: John Hubbard
Cc: Andrew Morton
---
 include/linux/hmm.h | 12 +++++++++---
 mm/hmm.c            | 39 ++++++++++++++++++++++++++++-----------
 2 files changed, 37 insertions(+), 14 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 064924bce75c..c783916f8732 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -287,11 +287,13 @@ enum hmm_update_event {
  * @start: virtual start address of the range to update
  * @end: virtual end address of the range to update
  * @event: event triggering the update (what is happening)
+ * @blockable: can the callback block/sleep ?
  */
 struct hmm_update {
 	unsigned long start;
 	unsigned long end;
 	enum hmm_update_event event;
+	bool blockable;
 };
 
 /*
@@ -314,6 +316,8 @@ struct hmm_mirror_ops {
 	 *
 	 * @mirror: pointer to struct hmm_mirror
 	 * @update: update informations (see struct hmm_update)
+	 * Returns: -EAGAIN if update.blockable false and callback need to
+	 * block, 0 otherwise.
 	 *
 	 * This callback ultimately originates from mmu_notifiers when the CPU
 	 * page table is updated. The device driver must update its page table
@@ -322,10 +326,12 @@ struct hmm_mirror_ops {
 	 *
 	 * The device driver must not return from this callback until the device
 	 * page tables are completely updated (TLBs flushed, etc); this is a
-	 * synchronous call.
+	 * synchronous call. If driver need to sleep and update->blockable is
+	 * false then you need to abort (do not do anything that would sleep or
+	 * block) and return -EAGAIN.
 	 */
-	void (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
-					   const struct hmm_update *update);
+	int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
+					  const struct hmm_update *update);
 };
 
 /*
diff --git a/mm/hmm.c b/mm/hmm.c
index 6fe31e2bfa1e..1d8fcaa0606f 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -123,12 +123,18 @@ void hmm_mm_destroy(struct mm_struct *mm)
 	kfree(mm->hmm);
 }
 
-static void hmm_invalidate_range(struct hmm *hmm, bool device,
-				 const struct hmm_update *update)
+static int hmm_invalidate_range(struct hmm *hmm, bool device,
+				const struct hmm_update *update)
 {
 	struct hmm_mirror *mirror;
 	struct hmm_range *range;
 
+	/*
+	 * It is fine to wait on lock here even if update->blockable is false
+	 * as the hmm->lock is only held for short period of time (when adding
+	 * or walking the ranges list). We could also convert the range list
+	 * into a lru list and avoid the spinlock all together.
+	 */
 	spin_lock(&hmm->lock);
 	list_for_each_entry(range, &hmm->ranges, list) {
 		unsigned long addr, idx, npages;
@@ -145,12 +151,26 @@ static void hmm_invalidate_range(struct hmm *hmm, bool device,
 	spin_unlock(&hmm->lock);
 
 	if (!device)
-		return;
+		return 0;
 
+	/*
+	 * It is fine to wait on mirrors_sem here even if update->blockable is
+	 * false as this semaphore is only taken in write mode for short period
+	 * when adding a new mirror to the list.
+	 */
 	down_read(&hmm->mirrors_sem);
-	list_for_each_entry(mirror, &hmm->mirrors, list)
-		mirror->ops->sync_cpu_device_pagetables(mirror, update);
+	list_for_each_entry(mirror, &hmm->mirrors, list) {
+		int ret;
+
+		ret = mirror->ops->sync_cpu_device_pagetables(mirror, update);
+		if (!update->blockable && ret == -EAGAIN) {
+			up_read(&hmm->mirrors_sem);
+			return -EAGAIN;
+		}
+	}
 	up_read(&hmm->mirrors_sem);
+
+	return 0;
 }
 
 static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
@@ -188,17 +208,13 @@ static int hmm_invalidate_range_start(struct mmu_notifier *mn,
 	struct hmm_update update;
 	struct hmm *hmm = mm->hmm;
 
-	if (!blockable)
-		return -EAGAIN;
-
 	VM_BUG_ON(!hmm);
 
 	update.start = start;
 	update.end = end;
 	update.event = HMM_UPDATE_INVALIDATE;
-	hmm_invalidate_range(hmm, true, &update);
-
-	return 0;
+	update.blockable = blockable;
+	return hmm_invalidate_range(hmm, true, &update);
 }
 
 static void hmm_invalidate_range_end(struct mmu_notifier *mn,
@@ -214,6 +230,7 @@ static void hmm_invalidate_range_end(struct mmu_notifier *mn,
 	update.start = start;
 	update.end = end;
 	update.event = HMM_UPDATE_INVALIDATE;
+	update.blockable = true;
 	hmm_invalidate_range(hmm, false, &update);
 }