From patchwork Wed Aug 2 15:13:29 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338333
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v6 01/38] minmax: Add in_range() macro
Date: Wed, 2 Aug 2023 16:13:29 +0100
Message-Id: <20230802151406.3735276-2-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

Determine if a value lies within a range more efficiently (subtraction +
comparison vs two comparisons and an AND).  It also has useful (under some
circumstances) behaviour if the range exceeds the maximum value of the type.

Convert all the conflicting definitions of in_range() within the kernel;
some can use the generic definition while others need their own definition.
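
For illustration (a standalone userspace sketch, not part of the patch;
uint32_t stands in for the kernel's u32): unsigned subtraction wraps, so a
single comparison covers both bounds, and the two forms only disagree once
start + len wraps past the top of the type.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Two comparisons and an AND. */
static bool in_range_two_cmp(uint32_t val, uint32_t start, uint32_t len)
{
        return start <= val && val < start + len;
}

/* One subtraction and one comparison: if val < start, the unsigned
 * subtraction wraps to a huge value which is never < len. */
static bool in_range_sub_cmp(uint32_t val, uint32_t start, uint32_t len)
{
        return (val - start) < len;
}

int main(void)
{
        /* The two forms agree on ordinary ranges... */
        assert(in_range_two_cmp(5, 4, 2) && in_range_sub_cmp(5, 4, 2));

        /* ...but differ when start + len overflows: the subtraction form
         * treats the range as wrapping through zero, so 0x8 is "inside"
         * [0xfffffff0, 0xfffffff0 + 0x20). */
        assert(!in_range_two_cmp(0x8, 0xfffffff0, 0x20));
        assert(in_range_sub_cmp(0x8, 0xfffffff0, 0x20));
        return 0;
}
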
Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/arm/mm/pageattr.c                        |  6 ++---
 .../drm/arm/display/include/malidp_utils.h    |  2 +-
 .../display/komeda/komeda_pipeline_state.c    | 24 ++++++++---------
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c         |  6 -----
 .../net/ethernet/chelsio/cxgb3/cxgb3_main.c   | 18 ++++++-------
 drivers/virt/acrn/ioreq.c                     |  4 +--
 fs/btrfs/misc.h                               |  2 --
 fs/ext2/balloc.c                              |  2 --
 fs/ext4/ext4.h                                |  2 --
 fs/ufs/util.h                                 |  6 -----
 include/linux/minmax.h                        | 27 +++++++++++++++++++
 lib/logic_pio.c                               |  3 ---
 net/netfilter/nf_nat_core.c                   |  6 ++---
 net/tipc/core.h                               |  2 +-
 net/tipc/link.c                               | 10 +++----
 .../selftests/bpf/progs/get_branch_snapshot.c |  4 +--
 16 files changed, 65 insertions(+), 59 deletions(-)

diff --git a/arch/arm/mm/pageattr.c b/arch/arm/mm/pageattr.c
index c3c34fe714b0..064ad508c149 100644
--- a/arch/arm/mm/pageattr.c
+++ b/arch/arm/mm/pageattr.c
@@ -25,7 +25,7 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
        return 0;
 }

-static bool in_range(unsigned long start, unsigned long size,
+static bool range_in_range(unsigned long start, unsigned long size,
        unsigned long range_start, unsigned long range_end)
 {
        return start >= range_start && start < range_end &&
@@ -63,8 +63,8 @@ static int change_memory_common(unsigned long addr, int numpages,
        if (!size)
                return 0;

-       if (!in_range(start, size, MODULES_VADDR, MODULES_END) &&
-           !in_range(start, size, VMALLOC_START, VMALLOC_END))
+       if (!range_in_range(start, size, MODULES_VADDR, MODULES_END) &&
+           !range_in_range(start, size, VMALLOC_START, VMALLOC_END))
                return -EINVAL;

        return __change_memory_common(start, size, set_mask, clear_mask);
diff --git a/drivers/gpu/drm/arm/display/include/malidp_utils.h b/drivers/gpu/drm/arm/display/include/malidp_utils.h
index 49a1d7f3539c..9f83baac6ed8 100644
--- a/drivers/gpu/drm/arm/display/include/malidp_utils.h
+++ b/drivers/gpu/drm/arm/display/include/malidp_utils.h
@@ -35,7 +35,7 @@ static inline void set_range(struct malidp_range *rg, u32 start, u32 end)
        rg->end = end;
 }

-static inline bool in_range(struct malidp_range *rg, u32 v)
+static inline bool malidp_in_range(struct malidp_range *rg, u32 v)
 {
        return (v >= rg->start) && (v <= rg->end);
 }
diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
index 3276a3e82c62..4618687a8f4d 100644
--- a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
+++ b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
@@ -305,12 +305,12 @@ komeda_layer_check_cfg(struct komeda_layer *layer,
        if (komeda_fb_check_src_coords(kfb, src_x, src_y, src_w, src_h))
                return -EINVAL;

-       if (!in_range(&layer->hsize_in, src_w)) {
+       if (!malidp_in_range(&layer->hsize_in, src_w)) {
                DRM_DEBUG_ATOMIC("invalidate src_w %d.\n", src_w);
                return -EINVAL;
        }

-       if (!in_range(&layer->vsize_in, src_h)) {
+       if (!malidp_in_range(&layer->vsize_in, src_h)) {
                DRM_DEBUG_ATOMIC("invalidate src_h %d.\n", src_h);
                return -EINVAL;
        }
@@ -452,14 +452,14 @@ komeda_scaler_check_cfg(struct komeda_scaler *scaler,
        hsize_out = dflow->out_w;
        vsize_out = dflow->out_h;

-       if (!in_range(&scaler->hsize, hsize_in) ||
-           !in_range(&scaler->hsize, hsize_out)) {
+       if (!malidp_in_range(&scaler->hsize, hsize_in) ||
+           !malidp_in_range(&scaler->hsize, hsize_out)) {
                DRM_DEBUG_ATOMIC("Invalid horizontal sizes");
                return -EINVAL;
        }

-       if (!in_range(&scaler->vsize, vsize_in) ||
-           !in_range(&scaler->vsize, vsize_out)) {
+       if (!malidp_in_range(&scaler->vsize, vsize_in) ||
+           !malidp_in_range(&scaler->vsize, vsize_out)) {
DRM_DEBUG_ATOMIC("Invalid vertical sizes"); return -EINVAL; } @@ -574,13 +574,13 @@ komeda_splitter_validate(struct komeda_splitter *splitter, return -EINVAL; } - if (!in_range(&splitter->hsize, dflow->in_w)) { + if (!malidp_in_range(&splitter->hsize, dflow->in_w)) { DRM_DEBUG_ATOMIC("split in_w:%d is out of the acceptable range.\n", dflow->in_w); return -EINVAL; } - if (!in_range(&splitter->vsize, dflow->in_h)) { + if (!malidp_in_range(&splitter->vsize, dflow->in_h)) { DRM_DEBUG_ATOMIC("split in_h: %d exceeds the acceptable range.\n", dflow->in_h); return -EINVAL; @@ -624,13 +624,13 @@ komeda_merger_validate(struct komeda_merger *merger, return -EINVAL; } - if (!in_range(&merger->hsize_merged, output->out_w)) { + if (!malidp_in_range(&merger->hsize_merged, output->out_w)) { DRM_DEBUG_ATOMIC("merged_w: %d is out of the accepted range.\n", output->out_w); return -EINVAL; } - if (!in_range(&merger->vsize_merged, output->out_h)) { + if (!malidp_in_range(&merger->vsize_merged, output->out_h)) { DRM_DEBUG_ATOMIC("merged_h: %d is out of the accepted range.\n", output->out_h); return -EINVAL; @@ -866,8 +866,8 @@ void komeda_complete_data_flow_cfg(struct komeda_layer *layer, * input/output range. */ if (dflow->en_scaling && scaler) - dflow->en_split = !in_range(&scaler->hsize, dflow->in_w) || - !in_range(&scaler->hsize, dflow->out_w); + dflow->en_split = !malidp_in_range(&scaler->hsize, dflow->in_w) || + !malidp_in_range(&scaler->hsize, dflow->out_w); } static bool merger_is_available(struct komeda_pipeline *pipe, diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index b20ef6c8ea26..57dc601fc95a 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -678,12 +678,6 @@ struct block_header { u32 data[]; }; -/* this should be a general kernel helper */ -static int in_range(u32 addr, u32 start, u32 size) -{ - return addr >= start && addr < start + size; -} - static bool fw_block_mem(struct a6xx_gmu_bo *bo, const struct block_header *blk) { if (!in_range(blk->addr, bo->iova, bo->size)) diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c index 9b84c8d8d309..d117022d15d7 100644 --- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c +++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c @@ -2126,7 +2126,7 @@ static const struct ethtool_ops cxgb_ethtool_ops = { .set_link_ksettings = set_link_ksettings, }; -static int in_range(int val, int lo, int hi) +static int cxgb_in_range(int val, int lo, int hi) { return val < 0 || (val <= hi && val >= lo); } @@ -2162,19 +2162,19 @@ static int cxgb_siocdevprivate(struct net_device *dev, return -EINVAL; if (t.qset_idx >= SGE_QSETS) return -EINVAL; - if (!in_range(t.intr_lat, 0, M_NEWTIMER) || - !in_range(t.cong_thres, 0, 255) || - !in_range(t.txq_size[0], MIN_TXQ_ENTRIES, + if (!cxgb_in_range(t.intr_lat, 0, M_NEWTIMER) || + !cxgb_in_range(t.cong_thres, 0, 255) || + !cxgb_in_range(t.txq_size[0], MIN_TXQ_ENTRIES, MAX_TXQ_ENTRIES) || - !in_range(t.txq_size[1], MIN_TXQ_ENTRIES, + !cxgb_in_range(t.txq_size[1], MIN_TXQ_ENTRIES, MAX_TXQ_ENTRIES) || - !in_range(t.txq_size[2], MIN_CTRL_TXQ_ENTRIES, + !cxgb_in_range(t.txq_size[2], MIN_CTRL_TXQ_ENTRIES, MAX_CTRL_TXQ_ENTRIES) || - !in_range(t.fl_size[0], MIN_FL_ENTRIES, + !cxgb_in_range(t.fl_size[0], MIN_FL_ENTRIES, MAX_RX_BUFFERS) || - !in_range(t.fl_size[1], MIN_FL_ENTRIES, + !cxgb_in_range(t.fl_size[1], MIN_FL_ENTRIES, MAX_RX_JUMBO_BUFFERS) || - !in_range(t.rspq_size, MIN_RSPQ_ENTRIES, + 
+                   !cxgb_in_range(t.rspq_size, MIN_RSPQ_ENTRIES,
                              MAX_RSPQ_ENTRIES))
                        return -EINVAL;
diff --git a/drivers/virt/acrn/ioreq.c b/drivers/virt/acrn/ioreq.c
index cecdc1c13af7..29e1ef1915fd 100644
--- a/drivers/virt/acrn/ioreq.c
+++ b/drivers/virt/acrn/ioreq.c
@@ -351,7 +351,7 @@ static bool handle_cf8cfc(struct acrn_vm *vm,
        return is_handled;
 }

-static bool in_range(struct acrn_ioreq_range *range,
+static bool acrn_in_range(struct acrn_ioreq_range *range,
                     struct acrn_io_request *req)
 {
        bool ret = false;
@@ -389,7 +389,7 @@ static struct acrn_ioreq_client *find_ioreq_client(struct acrn_vm *vm,
        list_for_each_entry(client, &vm->ioreq_clients, list) {
                read_lock_bh(&client->range_lock);
                list_for_each_entry(range, &client->range_list, list) {
-                       if (in_range(range, req)) {
+                       if (acrn_in_range(range, req)) {
                                found = client;
                                break;
                        }
diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
index 005751a12911..40f2d9f1a17a 100644
--- a/fs/btrfs/misc.h
+++ b/fs/btrfs/misc.h
@@ -8,8 +8,6 @@
 #include
 #include

-#define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
-
 /*
  * Enumerate bits using enum autoincrement. Define the @name as the n-th bit.
  */
diff --git a/fs/ext2/balloc.c b/fs/ext2/balloc.c
index eca60b747c6b..c8049c90323d 100644
--- a/fs/ext2/balloc.c
+++ b/fs/ext2/balloc.c
@@ -36,8 +36,6 @@
  */

-#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
-
 struct ext2_group_desc * ext2_get_group_desc(struct super_block * sb,
                                             unsigned int block_group,
                                             struct buffer_head ** bh)
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 1e2259d9967d..481491e892df 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3780,8 +3780,6 @@ static inline void set_bitmap_uptodate(struct buffer_head *bh)
        set_bit(BH_BITMAP_UPTODATE, &(bh)->b_state);
 }

-#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
-
 /* For ioend & aio unwritten conversion wait queues */
 #define EXT4_WQ_HASH_SZ 37
 #define ext4_ioend_wq(v) (&ext4__ioend_wq[((unsigned long)(v)) %\
diff --git a/fs/ufs/util.h b/fs/ufs/util.h
index 4931bec1a01c..89247193d96d 100644
--- a/fs/ufs/util.h
+++ b/fs/ufs/util.h
@@ -11,12 +11,6 @@
 #include
 #include "swab.h"

-
-/*
- * some useful macros
- */
-#define in_range(b,first,len) ((b)>=(first)&&(b)<(first)+(len))
-
 /*
  * functions used for retyping
  */
diff --git a/include/linux/minmax.h b/include/linux/minmax.h
index 798c6963909f..83aebc244cba 100644
--- a/include/linux/minmax.h
+++ b/include/linux/minmax.h
@@ -3,6 +3,7 @@
 #define _LINUX_MINMAX_H

 #include
+#include

 /*
  * min()/max()/clamp() macros must accomplish three things:
@@ -222,6 +223,32 @@
  */
 #define clamp_val(val, lo, hi) clamp_t(typeof(val), val, lo, hi)

+static inline bool in_range64(u64 val, u64 start, u64 len)
+{
+       return (val - start) < len;
+}
+
+static inline bool in_range32(u32 val, u32 start, u32 len)
+{
+       return (val - start) < len;
+}
+
+/**
+ * in_range - Determine if a value lies within a range.
+ * @val: Value to test.
+ * @start: First value in range.
+ * @len: Number of values in range.
+ *
+ * This is more efficient than "if (start <= val && val < (start + len))".
+ * It also gives a different answer if @start + @len overflows the size of
+ * the type by a sufficient amount to encompass @val.  Decide for yourself
+ * which behaviour you want, or prove that start + len never overflow.
+ * Do not blindly replace one form with the other.
+ */
+#define in_range(val, start, len)                                      \
+       ((sizeof(start) | sizeof(len) | sizeof(val)) <= sizeof(u32) ?   \
+               in_range32(val, start, len) : in_range64(val, start, len))
+
 /**
  * swap - swap values of @a and @b
  * @a: first value
diff --git a/lib/logic_pio.c b/lib/logic_pio.c
index 07b4b9a1f54b..2ea564a40064 100644
--- a/lib/logic_pio.c
+++ b/lib/logic_pio.c
@@ -20,9 +20,6 @@
 static LIST_HEAD(io_range_list);
 static DEFINE_MUTEX(io_range_mutex);

-/* Consider a kernel general helper for this */
-#define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
-
 /**
  * logic_pio_register_range - register logical PIO range for a host
  * @new_range: pointer to the IO range to be registered.
diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
index fadbd4ed3dc0..c4e0516a8dfa 100644
--- a/net/netfilter/nf_nat_core.c
+++ b/net/netfilter/nf_nat_core.c
@@ -327,7 +327,7 @@ static bool l4proto_in_range(const struct nf_conntrack_tuple *tuple,
 /* If we source map this tuple so reply looks like reply_tuple, will
  * that meet the constraints of range.
  */
-static int in_range(const struct nf_conntrack_tuple *tuple,
+static int nf_in_range(const struct nf_conntrack_tuple *tuple,
                    const struct nf_nat_range2 *range)
 {
        /* If we are supposed to map IPs, then we must be in the
@@ -376,7 +376,7 @@ find_appropriate_src(struct net *net,
                                       &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
                        result->dst = tuple->dst;

-                       if (in_range(result, range))
+                       if (nf_in_range(result, range))
                                return 1;
                }
        }
@@ -607,7 +607,7 @@ get_unique_tuple(struct nf_conntrack_tuple *tuple,
        if (maniptype == NF_NAT_MANIP_SRC &&
            !(range->flags & NF_NAT_RANGE_PROTO_RANDOM_ALL)) {
                /* try the original tuple first */
-               if (in_range(orig_tuple, range)) {
+               if (nf_in_range(orig_tuple, range)) {
                        if (!nf_nat_used_tuple(orig_tuple, ct)) {
                                *tuple = *orig_tuple;
                                return;
diff --git a/net/tipc/core.h b/net/tipc/core.h
index 0a3f7a70a50a..7eccd97e0609 100644
--- a/net/tipc/core.h
+++ b/net/tipc/core.h
@@ -197,7 +197,7 @@ static inline int less(u16 left, u16 right)
        return less_eq(left, right) && (mod(right) != mod(left));
 }

-static inline int in_range(u16 val, u16 min, u16 max)
+static inline int tipc_in_range(u16 val, u16 min, u16 max)
 {
        return !less(val, min) && !more(val, max);
 }
diff --git a/net/tipc/link.c b/net/tipc/link.c
index 2eff1c7949cb..e33b4f29f77c 100644
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -1623,7 +1623,7 @@ static int tipc_link_advance_transmq(struct tipc_link *l, struct tipc_link *r,
                                        last_ga->bgack_cnt);
                }
                /* Check against the last Gap ACK block */
-               if (in_range(seqno, start, end))
+               if (tipc_in_range(seqno, start, end))
                        continue;
                /* Update/release the packet peer is acking */
                bc_has_acked = true;
@@ -2251,12 +2251,12 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
                        strncpy(if_name, data, TIPC_MAX_IF_NAME);

                /* Update own tolerance if peer indicates a non-zero value */
-               if (in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) {
+               if (tipc_in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) {
                        l->tolerance = peers_tol;
                        l->bc_rcvlink->tolerance = peers_tol;
                }
                /* Update own priority if peer's priority is higher */
-               if (in_range(peers_prio, l->priority + 1, TIPC_MAX_LINK_PRI))
+               if (tipc_in_range(peers_prio, l->priority + 1, TIPC_MAX_LINK_PRI))
                        l->priority = peers_prio;

                /* If peer is going down we want full re-establish cycle */
@@ -2299,13 +2299,13 @@ static int tipc_link_proto_rcv(struct tipc_link *l, struct sk_buff *skb,
                        l->rcv_nxt_state = msg_seqno(hdr) + 1;

                /* Update own tolerance if peer indicates a non-zero value */
-               if (in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) {
+               if (tipc_in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) {
                        l->tolerance = peers_tol;
                        l->bc_rcvlink->tolerance = peers_tol;
                }
                /* Update own prio if peer indicates a different value */
                if ((peers_prio != l->priority) &&
-                   in_range(peers_prio, 1, TIPC_MAX_LINK_PRI)) {
+                   tipc_in_range(peers_prio, 1, TIPC_MAX_LINK_PRI)) {
                        l->priority = peers_prio;
                        rc = tipc_link_fsm_evt(l, LINK_FAILURE_EVT);
                }
diff --git a/tools/testing/selftests/bpf/progs/get_branch_snapshot.c b/tools/testing/selftests/bpf/progs/get_branch_snapshot.c
index a1b139888048..511ac634eef0 100644
--- a/tools/testing/selftests/bpf/progs/get_branch_snapshot.c
+++ b/tools/testing/selftests/bpf/progs/get_branch_snapshot.c
@@ -15,7 +15,7 @@ long total_entries = 0;
 #define ENTRY_CNT 32
 struct perf_branch_entry entries[ENTRY_CNT] = {};

-static inline bool in_range(__u64 val)
+static inline bool gbs_in_range(__u64 val)
 {
        return (val >= address_low) && (val < address_high);
 }
@@ -31,7 +31,7 @@ int BPF_PROG(test1, int n, int ret)
        for (i = 0; i < ENTRY_CNT; i++) {
                if (i >= total_entries)
                        break;
-               if (in_range(entries[i].from) && in_range(entries[i].to))
+               if (gbs_in_range(entries[i].from) && gbs_in_range(entries[i].to))
                        test1_hits++;
                else if (!test1_hits)
                        wasted_entries++;

From patchwork Wed Aug 2 15:13:30 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338343
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Mike Rapoport, Pasha Tatashin,
 Anshuman Khandual
Subject: [PATCH v6 02/38] mm: Convert page_table_check_pte_set() to page_table_check_ptes_set()
Date: Wed, 2 Aug 2023 16:13:30 +0100
Message-Id: <20230802151406.3735276-3-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

Tell the page table check how many PTEs & PFNs we want it to check.
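
For illustration only (an editor's sketch, not code from this series): a
batched caller makes one call covering nr consecutive pages instead of nr
single-page calls.  set_ptes() itself arrives later in the series; the
names set_ptes_sketch() and pte_next_pfn() are placeholders here, and
advancing a PTE to the next page frame is really arch-specific.

/*
 * Schematic caller of the new interface: check all nr PTEs/PFNs with a
 * single page_table_check_ptes_set() call, then install the PTEs.
 */
static inline void set_ptes_sketch(struct mm_struct *mm, unsigned long addr,
                                   pte_t *ptep, pte_t pte, unsigned int nr)
{
        page_table_check_ptes_set(mm, ptep, pte, nr);

        for (;;) {
                set_pte(ptep, pte);
                if (--nr == 0)
                        break;
                ptep++;
                pte = pte_next_pfn(pte);  /* placeholder: advance one page frame */
        }
}
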
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Mike Rapoport (IBM)
Acked-by: Pasha Tatashin
Reviewed-by: Anshuman Khandual
---
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/riscv/include/asm/pgtable.h |  2 +-
 arch/x86/include/asm/pgtable.h   |  2 +-
 include/linux/page_table_check.h | 13 +++++++------
 mm/page_table_check.c            | 16 +++++++++-------
 5 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index fe4b913589ee..445b18d7a47c 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -348,7 +348,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
                              pte_t *ptep, pte_t pte)
 {
-       page_table_check_pte_set(mm, ptep, pte);
+       page_table_check_ptes_set(mm, ptep, pte, 1);
        return __set_pte_at(mm, addr, ptep, pte);
 }
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 44377f0d7c35..01e4aabc8898 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -499,7 +499,7 @@ static inline void __set_pte_at(struct mm_struct *mm,
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
                              pte_t *ptep, pte_t pteval)
 {
-       page_table_check_pte_set(mm, ptep, pteval);
+       page_table_check_ptes_set(mm, ptep, pteval, 1);
        __set_pte_at(mm, addr, ptep, pteval);
 }
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index ada1bbf12961..cd0b6337d03c 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1023,7 +1023,7 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp)
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
                              pte_t *ptep, pte_t pte)
 {
-       page_table_check_pte_set(mm, ptep, pte);
+       page_table_check_ptes_set(mm, ptep, pte, 1);
        set_pte(ptep, pte);
 }
diff --git a/include/linux/page_table_check.h b/include/linux/page_table_check.h
index 7f6b9bf926c5..6722941c7cb8 100644
--- a/include/linux/page_table_check.h
+++ b/include/linux/page_table_check.h
@@ -17,7 +17,8 @@ void __page_table_check_zero(struct page *page, unsigned int order);
 void __page_table_check_pte_clear(struct mm_struct *mm, pte_t pte);
 void __page_table_check_pmd_clear(struct mm_struct *mm, pmd_t pmd);
 void __page_table_check_pud_clear(struct mm_struct *mm, pud_t pud);
-void __page_table_check_pte_set(struct mm_struct *mm, pte_t *ptep, pte_t pte);
+void __page_table_check_ptes_set(struct mm_struct *mm, pte_t *ptep, pte_t pte,
+               unsigned int nr);
 void __page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd);
 void __page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp, pud_t pud);
 void __page_table_check_pte_clear_range(struct mm_struct *mm,
@@ -64,13 +65,13 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm, pud_t pud)
        __page_table_check_pud_clear(mm, pud);
 }

-static inline void page_table_check_pte_set(struct mm_struct *mm, pte_t *ptep,
-               pte_t pte)
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
+               pte_t *ptep, pte_t pte, unsigned int nr)
 {
        if (static_branch_likely(&page_table_check_disabled))
                return;

-       __page_table_check_pte_set(mm, ptep, pte);
+       __page_table_check_ptes_set(mm, ptep, pte, nr);
 }

 static inline void page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp,
@@ -123,8 +124,8 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm, pud_t pud)
 {
 }

-static inline void page_table_check_pte_set(struct mm_struct *mm, pte_t *ptep,
-               pte_t pte)
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
+               pte_t *ptep, pte_t pte, unsigned int nr)
 {
 }
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 46e77c12c81e..af69c3c8f7c2 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -182,18 +182,20 @@ void __page_table_check_pud_clear(struct mm_struct *mm, pud_t pud)
 }
 EXPORT_SYMBOL(__page_table_check_pud_clear);

-void __page_table_check_pte_set(struct mm_struct *mm, pte_t *ptep, pte_t pte)
+void __page_table_check_ptes_set(struct mm_struct *mm, pte_t *ptep, pte_t pte,
+               unsigned int nr)
 {
+       unsigned int i;
+
        if (&init_mm == mm)
                return;

-       __page_table_check_pte_clear(mm, ptep_get(ptep));
-       if (pte_user_accessible_page(pte)) {
-               page_table_check_set(pte_pfn(pte), PAGE_SIZE >> PAGE_SHIFT,
-                                    pte_write(pte));
-       }
+       for (i = 0; i < nr; i++)
+               __page_table_check_pte_clear(mm, ptep_get(ptep + i));
+       if (pte_user_accessible_page(pte))
+               page_table_check_set(pte_pfn(pte), nr, pte_write(pte));
 }
-EXPORT_SYMBOL(__page_table_check_pte_set);
+EXPORT_SYMBOL(__page_table_check_ptes_set);

 void __page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd)
 {

From patchwork Wed Aug 2 15:13:31 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338334
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Mike Rapoport, Anshuman Khandual
Subject: [PATCH v6 03/38] mm: Add generic flush_icache_pages() and documentation
Date: Wed, 2 Aug 2023 16:13:31 +0100
Message-Id: <20230802151406.3735276-4-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

flush_icache_page() is deprecated but not yet removed, so add
a range version of it.  Change the documentation to refer to
update_mmu_cache_range() instead of update_mmu_cache().
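
The generic flush_icache_pages() added below is an empty inline.  As a
hedged sketch (not part of this patch), an architecture whose
flush_icache_page() does real work could supply the range version as a
simple loop over the consecutive pages:

/* Sketch of an architecture override built on the single-page primitive. */
static inline void flush_icache_pages(struct vm_area_struct *vma,
                                      struct page *page, unsigned int nr)
{
        unsigned int i;

        /* The nr pages are consecutive, so page + i addresses each one. */
        for (i = 0; i < nr; i++)
                flush_icache_page(vma, page + i);
}
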
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Reviewed-by: Anshuman Khandual
---
 Documentation/core-api/cachetlb.rst | 39 ++++++++++++++++-------------
 include/asm-generic/cacheflush.h    |  5 ++++
 2 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index 5c0552e78c58..b645947954fb 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -88,13 +88,17 @@ changes occur:

        This is used primarily during fault processing.

-5) ``void update_mmu_cache(struct vm_area_struct *vma,
-   unsigned long address, pte_t *ptep)``
+5) ``void update_mmu_cache_range(struct vm_fault *vmf,
+   struct vm_area_struct *vma, unsigned long address, pte_t *ptep,
+   unsigned int nr)``

-       At the end of every page fault, this routine is invoked to
-       tell the architecture specific code that a translation
-       now exists at virtual address "address" for address space
-       "vma->vm_mm", in the software page tables.
+       At the end of every page fault, this routine is invoked to tell
+       the architecture specific code that translations now exists
+       in the software page tables for address space "vma->vm_mm"
+       at virtual address "address" for "nr" consecutive pages.
+
+       This routine is also invoked in various other places which pass
+       a NULL "vmf".

        A port may use this information in any way it so chooses.
        For example, it could use this event to pre-load TLB
@@ -306,17 +310,18 @@ maps this page at its virtual address.
        private".  The kernel guarantees that, for pagecache pages, it
        will clear this bit when such a page first enters the pagecache.

-       This allows these interfaces to be implemented much more efficiently.
-       It allows one to "defer" (perhaps indefinitely) the actual flush if
-       there are currently no user processes mapping this page.  See sparc64's
-       flush_dcache_page and update_mmu_cache implementations for an example
-       of how to go about doing this.
+       This allows these interfaces to be implemented much more
+       efficiently.  It allows one to "defer" (perhaps indefinitely) the
+       actual flush if there are currently no user processes mapping this
+       page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+       implementations for an example of how to go about doing this.

-       The idea is, first at flush_dcache_page() time, if page_file_mapping()
-       returns a mapping, and mapping_mapped on that mapping returns %false,
-       just mark the architecture private page flag bit.  Later, in
-       update_mmu_cache(), a check is made of this flag bit, and if set the
-       flush is done and the flag bit is cleared.
+       The idea is, first at flush_dcache_page() time, if
+       page_file_mapping() returns a mapping, and mapping_mapped on that
+       mapping returns %false, just mark the architecture private page
+       flag bit.  Later, in update_mmu_cache_range(), a check is made
+       of this flag bit, and if set the flush is done and the flag bit
+       is cleared.

 .. important::

@@ -369,7 +374,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``

        All the functionality of flush_icache_page can be implemented in
-       flush_dcache_page and update_mmu_cache.  In the future, the hope
+       flush_dcache_page and update_mmu_cache_range.  In the future, the hope
        is to remove this interface completely.

 The final category of APIs is for I/O to deliberately aliased address
diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index f46258d1a080..09d51a680765 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -78,6 +78,11 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #endif

 #ifndef flush_icache_page
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+                                     struct page *page, unsigned int nr)
+{
+}
+
 static inline void flush_icache_page(struct vm_area_struct *vma,
                                     struct page *page)
 {

From patchwork Wed Aug 2 15:13:32 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338355
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Mike Rapoport, Anshuman Khandual
Subject: [PATCH v6 04/38] mm: Add folio_flush_mapping()
Date: Wed, 2 Aug 2023 16:13:32 +0100
Message-Id: <20230802151406.3735276-5-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

This is the folio equivalent of page_mapping_file(), but rename it to
make it clear that it's very different from page_file_mapping().
Theoretically, there's nothing flush-only about it, but there are no
other users today, and I doubt there will be; it's almost always more
useful to know the swapfile's mapping or the swapcache's mapping.
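
A hedged sketch of the intended consumer (loosely modelled on the sparc64
deferred-flush scheme that cachetlb.rst describes; __flush_dcache_folio()
is an illustrative arch primitive, not an API from this series):

/* Illustrative arch implementation of flush_dcache_folio(). */
void flush_dcache_folio(struct folio *folio)
{
        struct address_space *mapping = folio_flush_mapping(folio);

        /* mapping is NULL for anonymous and swapcache folios. */
        if (mapping && !mapping_mapped(mapping)) {
                /* No user mappings yet: defer the flush; the arch's
                 * update_mmu_cache_range() can test this bit and flush
                 * when a user mapping is actually created. */
                set_bit(PG_arch_1, &folio->flags);
                return;
        }

        __flush_dcache_folio(folio);    /* illustrative flush primitive */
}
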
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Reviewed-by: Anshuman Khandual
---
 include/linux/pagemap.h | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 04c0fc6f81b3..bd522a64b714 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -389,6 +389,26 @@ static inline struct address_space *folio_file_mapping(struct folio *folio)
        return folio->mapping;
 }

+/**
+ * folio_flush_mapping - Find the file mapping this folio belongs to.
+ * @folio: The folio.
+ *
+ * For folios which are in the page cache, return the mapping that this
+ * page belongs to.  Anonymous folios return NULL, even if they're in
+ * the swap cache.  Other kinds of folio also return NULL.
+ *
+ * This is ONLY used by architecture cache flushing code.  If you aren't
+ * writing cache flushing code, you want either folio_mapping() or
+ * folio_file_mapping().
+ */
+static inline struct address_space *folio_flush_mapping(struct folio *folio)
+{
+       if (unlikely(folio_test_swapcache(folio)))
+               return NULL;
+
+       return folio_mapping(folio);
+}
+
 static inline struct address_space *page_file_mapping(struct page *page)
 {
        return folio_file_mapping(page_folio(page));
@@ -399,11 +419,7 @@ static inline struct address_space *page_file_mapping(struct page *page)
  */
 static inline struct address_space *page_mapping_file(struct page *page)
 {
-       struct folio *folio = page_folio(page);
-
-       if (unlikely(folio_test_swapcache(folio)))
-               return NULL;
-       return folio_mapping(folio);
+       return folio_flush_mapping(page_folio(page));
 }

 /**

From patchwork Wed Aug 2 15:13:33 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338361
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Mike Rapoport, Anshuman Khandual
Subject: [PATCH v6 05/38] mm: Remove ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
Date: Wed, 2 Aug 2023 16:13:33 +0100
Message-Id: <20230802151406.3735276-6-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

Current best practice is to reuse the name of the function as a define
to indicate that the function is implemented by the architecture.
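
Concretely, the convention looks like this (a sketch assembled from the
hunks below, not a new API):

/* An architecture that implements the function declares it and defines
 * a macro with the same name: */
void flush_dcache_folio(struct folio *folio);
#define flush_dcache_folio flush_dcache_folio

/* Generic code then tests the function's own name instead of a separate
 * ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO knob: */
#ifndef flush_dcache_folio
static inline void flush_dcache_folio(struct folio *folio)
{
}
#define flush_dcache_folio flush_dcache_folio
#endif
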
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Reviewed-by: Anshuman Khandual
---
 Documentation/core-api/cachetlb.rst | 24 +++++++++---------------
 include/linux/cacheflush.h          |  4 ++--
 mm/util.c                           |  2 +-
 3 files changed, 12 insertions(+), 18 deletions(-)

diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index b645947954fb..889fc84ccd1b 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -273,7 +273,7 @@ maps this page at its virtual address.
        If D-cache aliasing is not an issue, these two routines may
        simply call memcpy/memset directly and do nothing more.

-  ``void flush_dcache_page(struct page *page)``
+  ``void flush_dcache_folio(struct folio *folio)``

        This routines must be called when:

@@ -281,7 +281,7 @@ maps this page at its virtual address.
              and / or in high memory
            b) the kernel is about to read from a page cache page and user space
              shared/writable mappings of this page potentially exist.  Note
-             that {get,pin}_user_pages{_fast} already call flush_dcache_page
+             that {get,pin}_user_pages{_fast} already call flush_dcache_folio
              on any page found in the user address space and thus driver
              code rarely needs to take this into account.

@@ -295,7 +295,7 @@ maps this page at its virtual address.
        The phrase "kernel writes to a page cache page" means, specifically,
        that the kernel executes store instructions that dirty data in that
-       page at the page->virtual mapping of that page.  It is important to
+       page at the kernel virtual mapping of that page.  It is important to
        flush here to handle D-cache aliasing, to make sure these kernel stores
        are visible to user space mappings of that page.

@@ -306,18 +306,18 @@ maps this page at its virtual address.
        If D-cache aliasing is not an issue, this routine may
        simply be defined as a nop on that architecture.
- There is a bit set aside in page->flags (PG_arch_1) as "architecture + There is a bit set aside in folio->flags (PG_arch_1) as "architecture private". The kernel guarantees that, for pagecache pages, it will clear this bit when such a page first enters the pagecache. This allows these interfaces to be implemented much more efficiently. It allows one to "defer" (perhaps indefinitely) the actual flush if there are currently no user processes mapping this - page. See sparc64's flush_dcache_page and update_mmu_cache_range + page. See sparc64's flush_dcache_folio and update_mmu_cache_range implementations for an example of how to go about doing this. - The idea is, first at flush_dcache_page() time, if - page_file_mapping() returns a mapping, and mapping_mapped on that + The idea is, first at flush_dcache_folio() time, if + folio_flush_mapping() returns a mapping, and mapping_mapped() on that mapping returns %false, just mark the architecture private page flag bit. Later, in update_mmu_cache_range(), a check is made of this flag bit, and if set the flush is done and the flag bit @@ -331,12 +331,6 @@ maps this page at its virtual address. dirty. Again, see sparc64 for examples of how to deal with this. - ``void flush_dcache_folio(struct folio *folio)`` - This function is called under the same circumstances as - flush_dcache_page(). It allows the architecture to - optimise for flushing the entire folio of pages instead - of flushing one page at a time. - ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page, unsigned long user_vaddr, void *dst, void *src, int len)`` ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page, @@ -357,7 +351,7 @@ maps this page at its virtual address. When the kernel needs to access the contents of an anonymous page, it calls this function (currently only - get_user_pages()). Note: flush_dcache_page() deliberately + get_user_pages()). Note: flush_dcache_folio() deliberately doesn't work for an anonymous page. The default implementation is a nop (and should remain so for all coherent architectures). For incoherent architectures, it should flush @@ -374,7 +368,7 @@ maps this page at its virtual address. ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)`` All the functionality of flush_icache_page can be implemented in - flush_dcache_page and update_mmu_cache_range. In the future, the hope + flush_dcache_folio and update_mmu_cache_range. In the future, the hope is to remove this interface completely. 
The final category of APIs is for I/O to deliberately aliased address
diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h index a6189d21f2ba..82136f3fcf54 100644 --- a/include/linux/cacheflush.h +++ b/include/linux/cacheflush.h @@ -7,14 +7,14 @@ struct folio; #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE -#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO +#ifndef flush_dcache_folio void flush_dcache_folio(struct folio *folio); #endif #else static inline void flush_dcache_folio(struct folio *folio) { } -#define ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO 0 +#define flush_dcache_folio flush_dcache_folio #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */ #endif /* _LINUX_CACHEFLUSH_H */
diff --git a/mm/util.c b/mm/util.c index 5e9305189c3f..cde229b05eb3 100644 --- a/mm/util.c +++ b/mm/util.c @@ -1119,7 +1119,7 @@ void page_offline_end(void) } EXPORT_SYMBOL(page_offline_end); -#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO +#ifndef flush_dcache_folio void flush_dcache_folio(struct folio *folio) { long i, nr = folio_nr_pages(folio);
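To make the convention concrete: the function name itself now doubles as
the feature-test macro, so generic code can use a plain #ifndef. The
fragment below is an illustrative sketch of an architecture header, not
code taken from the patch:

	/* Architecture provides its own implementation: declare it, then
	 * define the name to itself so that a later
	 * "#ifndef flush_dcache_folio" sees it as already present.
	 */
	void flush_dcache_folio(struct folio *folio);
	#define flush_dcache_folio flush_dcache_folio

This removes the need for a separate ARCH_IMPLEMENTS_* symbol: the same
identifier serves as both the declaration and the capability flag.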
From patchwork Wed Aug 2 15:13:34 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338358
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport
Subject: [PATCH v6 06/38] mm: Add default definition of set_ptes()
Date: Wed, 2 Aug 2023 16:13:34 +0100
Message-Id: <20230802151406.3735276-7-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
MIME-Version: 1.0

Most architectures can just define set_pte() and PFN_PTE_SHIFT to use
this definition. It's also a handy spot to document the guarantees
provided by the MM.
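As a sketch of that opt-in (hypothetical architecture; set_pte() and
PFN_PTE_SHIFT are the real hook names, but the bodies here are
illustrative), an architecture whose PTEs keep the pfn at bit
PAGE_SHIFT needs only:

	#define PFN_PTE_SHIFT	PAGE_SHIFT	/* pfn's bit position in a pte */

	static inline void set_pte(pte_t *ptep, pte_t pte)
	{
		WRITE_ONCE(*ptep, pte);
	}

and then inherits the generic set_ptes() added below, which steps the
pfn by adding 1UL << PFN_PTE_SHIFT for each successive page.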
Suggested-by: Mike Rapoport (IBM) Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Mike Rapoport (IBM) Tested-by: David Woodhouse --- include/linux/pgtable.h | 81 ++++++++++++++++++++++++++++++----------- 1 file changed, 60 insertions(+), 21 deletions(-) diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index f34e0f2cb4d8..3fde0d5d1c29 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -182,6 +182,66 @@ static inline int pmd_young(pmd_t pmd) } #endif +/* + * A facility to provide lazy MMU batching. This allows PTE updates and + * page invalidations to be delayed until a call to leave lazy MMU mode + * is issued. Some architectures may benefit from doing this, and it is + * beneficial for both shadow and direct mode hypervisors, which may batch + * the PTE updates which happen during this window. Note that using this + * interface requires that read hazards be removed from the code. A read + * hazard could result in the direct mode hypervisor case, since the actual + * write to the page tables may not yet have taken place, so reads though + * a raw PTE pointer after it has been modified are not guaranteed to be + * up to date. This mode can only be entered and left under the protection of + * the page table locks for all page tables which may be modified. In the UP + * case, this is required so that preemption is disabled, and in the SMP case, + * it must synchronize the delayed page table writes properly on other CPUs. + */ +#ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE +#define arch_enter_lazy_mmu_mode() do {} while (0) +#define arch_leave_lazy_mmu_mode() do {} while (0) +#define arch_flush_lazy_mmu_mode() do {} while (0) +#endif + +#ifndef set_ptes +#ifdef PFN_PTE_SHIFT +/** + * set_ptes - Map consecutive pages to a contiguous range of addresses. + * @mm: Address space to map the pages into. + * @addr: Address to map the first page at. + * @ptep: Page table pointer for the first entry. + * @pte: Page table entry for the first page. + * @nr: Number of pages to map. + * + * May be overridden by the architecture, or the architecture can define + * set_pte() and PFN_PTE_SHIFT. + * + * Context: The caller holds the page table lock. The pages all belong + * to the same folio. The PTEs are all in the same PMD. + */ +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + page_table_check_ptes_set(mm, ptep, pte, nr); + + arch_enter_lazy_mmu_mode(); + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT)); + } + arch_leave_lazy_mmu_mode(); +} +#ifndef set_pte_at +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) +#endif +#endif +#else +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) +#endif + #ifndef __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address, pte_t *ptep, @@ -1051,27 +1111,6 @@ static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot) #define pgprot_decrypted(prot) (prot) #endif -/* - * A facility to provide lazy MMU batching. This allows PTE updates and - * page invalidations to be delayed until a call to leave lazy MMU mode - * is issued. Some architectures may benefit from doing this, and it is - * beneficial for both shadow and direct mode hypervisors, which may batch - * the PTE updates which happen during this window. 
Note that using this - * interface requires that read hazards be removed from the code. A read - * hazard could result in the direct mode hypervisor case, since the actual - * write to the page tables may not yet have taken place, so reads though - * a raw PTE pointer after it has been modified are not guaranteed to be - * up to date. This mode can only be entered and left under the protection of - * the page table locks for all page tables which may be modified. In the UP - * case, this is required so that preemption is disabled, and in the SMP case, - * it must synchronize the delayed page table writes properly on other CPUs. - */ -#ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE -#define arch_enter_lazy_mmu_mode() do {} while (0) -#define arch_leave_lazy_mmu_mode() do {} while (0) -#define arch_flush_lazy_mmu_mode() do {} while (0) -#endif - /* * A facility to provide batching of the reload of page tables and * other process state with the actual context switch code for
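A hypothetical caller, to show the new contract from the other side:
all of the PTEs must lie in one PMD and map consecutive pages of a
single folio, so one call under the page table lock replaces a loop of
set_pte_at(). map_folio() below is illustrative, not a real kernel
helper:

	static void map_folio(struct vm_area_struct *vma, unsigned long addr,
			      pte_t *ptep, struct folio *folio)
	{
		pte_t pte = mk_pte(&folio->page, vma->vm_page_prot);

		/* Caller holds the page table lock; PTEs share one PMD. */
		set_ptes(vma->vm_mm, addr, ptep, pte, folio_nr_pages(folio));
	}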
From patchwork Wed Aug 2 15:13:35 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338335
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, Richard Henderson, Ivan Kokshaysky, Matt Turner, linux-alpha@vger.kernel.org
Subject: [PATCH v6 07/38] alpha: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:35 +0100
Message-Id: <20230802151406.3735276-8-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
MIME-Version: 1.0

Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_icache_pages().
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: Richard Henderson Cc: Ivan Kokshaysky Cc: Matt Turner Cc: linux-alpha@vger.kernel.org --- arch/alpha/include/asm/cacheflush.h | 10 ++++++++++ arch/alpha/include/asm/pgtable.h | 10 ++++++++-- 2 files changed, 18 insertions(+), 2 deletions(-) diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h index 9945ff483eaf..3956460e69e2 100644 --- a/arch/alpha/include/asm/cacheflush.h +++ b/arch/alpha/include/asm/cacheflush.h @@ -57,6 +57,16 @@ extern void flush_icache_user_page(struct vm_area_struct *vma, #define flush_icache_page(vma, page) \ flush_icache_user_page((vma), (page), 0, 0) +/* + * Both implementations of flush_icache_user_page flush the entire + * address space, so one call, no matter how many pages. + */ +static inline void flush_icache_pages(struct vm_area_struct *vma, + struct page *page, unsigned int nr) +{ + flush_icache_user_page(vma, page, 0, 0); +} + #include #endif /* _ALPHA_CACHEFLUSH_H */ diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h index ba43cb841d19..747b5f706c47 100644 --- a/arch/alpha/include/asm/pgtable.h +++ b/arch/alpha/include/asm/pgtable.h @@ -26,7 +26,6 @@ struct vm_area_struct; * hook is made available. */ #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval)) -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) /* PMD_SHIFT determines the size of the area a second-level page table can map */ #define PMD_SHIFT (PAGE_SHIFT + (PAGE_SHIFT-3)) @@ -189,7 +188,8 @@ extern unsigned long __zero_page(void); * and a page entry and page directory to the page they refer to. */ #define page_to_pa(page) (page_to_pfn(page) << PAGE_SHIFT) -#define pte_pfn(pte) (pte_val(pte) >> 32) +#define PFN_PTE_SHIFT 32 +#define pte_pfn(pte) (pte_val(pte) >> PFN_PTE_SHIFT) #define pte_page(pte) pfn_to_page(pte_pfn(pte)) #define mk_pte(page, pgprot) \ @@ -303,6 +303,12 @@ extern inline void update_mmu_cache(struct vm_area_struct * vma, { } +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) +{ +} + /* * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that * are !pte_none() && !pte_present(). 
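On alpha the pfn occupies bits 32 and up of the PTE, so with
PFN_PTE_SHIFT defined the generic set_ptes() advances to the next page
with a single 64-bit add. An illustrative restatement (not code from
the patch):

	/* Step a PTE to the next pfn; on alpha the shift is 32. */
	static inline pte_t pte_advance_pfn(pte_t pte)
	{
		return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
	}

pte_pfn() of the result is exactly pte_pfn(pte) + 1, consistent with
the pte_pfn() definition in the hunk above.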
From patchwork Wed Aug 2 15:13:36 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338336
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, Vineet Gupta, linux-snps-arc@lists.infradead.org
Subject: [PATCH v6 08/38] arc: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:36 +0100
Message-Id: <20230802151406.3735276-9-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
MIME-Version: 1.0

Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().

Change the PG_dc_clean flag from being per-page to per-folio (which
means it cannot always be set as we don't know that all pages in this
folio were cleaned). Enhance the internal flush routines to take the
number of pages to flush.
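The per-folio flag has an asymmetry worth spelling out (the fragment
below paraphrases the diffs that follow; it is not new code): the bit
may be cleared whenever any page of the folio might be dirty, but a
flush triggered by finding it clear must now cover every page, because
a single bit can no longer record which page was dirtied:

	/* One PG_dc_clean bit covers the whole folio. */
	if (!test_and_set_bit(PG_dc_clean, &folio->flags))
		__flush_dcache_pages(paddr, vaddr, folio_nr_pages(folio));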
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: Vineet Gupta Cc: linux-snps-arc@lists.infradead.org --- arch/arc/include/asm/cacheflush.h | 7 ++- arch/arc/include/asm/pgtable-bits-arcv2.h | 12 ++--- arch/arc/include/asm/pgtable-levels.h | 1 + arch/arc/mm/cache.c | 61 ++++++++++++++--------- arch/arc/mm/tlb.c | 18 ++++--- 5 files changed, 59 insertions(+), 40 deletions(-) diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h index e201b4b1655a..04f65f588510 100644 --- a/arch/arc/include/asm/cacheflush.h +++ b/arch/arc/include/asm/cacheflush.h @@ -25,17 +25,20 @@ * in update_mmu_cache() */ #define flush_icache_page(vma, page) +#define flush_icache_pages(vma, page, nr) void flush_cache_all(void); void flush_icache_range(unsigned long kstart, unsigned long kend); void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len); -void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr); -void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr); +void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr); +void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio void dma_cache_wback_inv(phys_addr_t start, unsigned long sz); void dma_cache_inv(phys_addr_t start, unsigned long sz); diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h index 6e9f8ca6d6a1..ee78ab30958d 100644 --- a/arch/arc/include/asm/pgtable-bits-arcv2.h +++ b/arch/arc/include/asm/pgtable-bits-arcv2.h @@ -100,14 +100,12 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)); } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) -{ - set_pte(ptep, pteval); -} +struct vm_fault; +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr); -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, - pte_t *ptep); +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) /* * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h index ef68758b69f7..fc417c75c24d 100644 --- a/arch/arc/include/asm/pgtable-levels.h +++ b/arch/arc/include/asm/pgtable-levels.h @@ -169,6 +169,7 @@ #define pte_ERROR(e) \ pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e)) +#define PFN_PTE_SHIFT PAGE_SHIFT #define pte_none(x) (!pte_val(x)) #define pte_present(x) (pte_val(x) & _PAGE_PRESENT) #define pte_clear(mm,addr,ptep) set_pte_at(mm, addr, ptep, __pte(0)) diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c index 55c6de138eae..3c16ee942a5c 100644 --- a/arch/arc/mm/cache.c +++ b/arch/arc/mm/cache.c @@ -752,17 +752,17 @@ static inline void arc_slc_enable(void) * There's a corollary case, where kernel READs from a userspace mapped page. * If the U-mapping is not congruent to K-mapping, former needs flushing. 
*/ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; if (!cache_is_vipt_aliasing()) { - clear_bit(PG_dc_clean, &page->flags); + clear_bit(PG_dc_clean, &folio->flags); return; } /* don't handle anon pages here */ - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (!mapping) return; @@ -771,17 +771,27 @@ void flush_dcache_page(struct page *page) * Make a note that K-mapping is dirty */ if (!mapping_mapped(mapping)) { - clear_bit(PG_dc_clean, &page->flags); - } else if (page_mapcount(page)) { - + clear_bit(PG_dc_clean, &folio->flags); + } else if (folio_mapped(folio)) { /* kernel reading from page with U-mapping */ - phys_addr_t paddr = (unsigned long)page_address(page); - unsigned long vaddr = page->index << PAGE_SHIFT; + phys_addr_t paddr = (unsigned long)folio_address(folio); + unsigned long vaddr = folio_pos(folio); + /* + * vaddr is not actually the virtual address, but is + * congruent to every user mapping. + */ if (addr_not_cache_congruent(paddr, vaddr)) - __flush_dcache_page(paddr, vaddr); + __flush_dcache_pages(paddr, vaddr, + folio_nr_pages(folio)); } } +EXPORT_SYMBOL(flush_dcache_folio); + +void flush_dcache_page(struct page *page) +{ + return flush_dcache_folio(page_folio(page)); +} EXPORT_SYMBOL(flush_dcache_page); /* @@ -921,18 +931,18 @@ void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len) } /* wrapper to compile time eliminate alignment checks in flush loop */ -void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr) +void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr) { - __ic_line_inv_vaddr(paddr, vaddr, PAGE_SIZE); + __ic_line_inv_vaddr(paddr, vaddr, nr * PAGE_SIZE); } /* * wrapper to clearout kernel or userspace mappings of a page * For kernel mappings @vaddr == @paddr */ -void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr) +void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr) { - __dc_line_op(paddr, vaddr & PAGE_MASK, PAGE_SIZE, OP_FLUSH_N_INV); + __dc_line_op(paddr, vaddr & PAGE_MASK, nr * PAGE_SIZE, OP_FLUSH_N_INV); } noinline void flush_cache_all(void) @@ -962,10 +972,10 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long u_vaddr, u_vaddr &= PAGE_MASK; - __flush_dcache_page(paddr, u_vaddr); + __flush_dcache_pages(paddr, u_vaddr, 1); if (vma->vm_flags & VM_EXEC) - __inv_icache_page(paddr, u_vaddr); + __inv_icache_pages(paddr, u_vaddr, 1); } void flush_cache_range(struct vm_area_struct *vma, unsigned long start, @@ -978,9 +988,9 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long u_vaddr) { /* TBD: do we really need to clear the kernel mapping */ - __flush_dcache_page((phys_addr_t)page_address(page), u_vaddr); - __flush_dcache_page((phys_addr_t)page_address(page), - (phys_addr_t)page_address(page)); + __flush_dcache_pages((phys_addr_t)page_address(page), u_vaddr, 1); + __flush_dcache_pages((phys_addr_t)page_address(page), + (phys_addr_t)page_address(page), 1); } @@ -989,6 +999,8 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page, void copy_user_highpage(struct page *to, struct page *from, unsigned long u_vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); + struct folio *dst = page_folio(to); void *kfrom = kmap_atomic(from); void *kto = kmap_atomic(to); int clean_src_k_mappings = 0; @@ -1005,7 +1017,7 @@ void copy_user_highpage(struct page *to, struct page *from, * 
addr_not_cache_congruent() is 0 */ if (page_mapcount(from) && addr_not_cache_congruent(kfrom, u_vaddr)) { - __flush_dcache_page((unsigned long)kfrom, u_vaddr); + __flush_dcache_pages((unsigned long)kfrom, u_vaddr, 1); clean_src_k_mappings = 1; } @@ -1019,17 +1031,17 @@ void copy_user_highpage(struct page *to, struct page *from, * non copied user pages (e.g. read faults which wire in pagecache page * directly). */ - clear_bit(PG_dc_clean, &to->flags); + clear_bit(PG_dc_clean, &dst->flags); /* * if SRC was already usermapped and non-congruent to kernel mapping * sync the kernel mapping back to physical page */ if (clean_src_k_mappings) { - __flush_dcache_page((unsigned long)kfrom, (unsigned long)kfrom); - set_bit(PG_dc_clean, &from->flags); + __flush_dcache_pages((unsigned long)kfrom, + (unsigned long)kfrom, 1); } else { - clear_bit(PG_dc_clean, &from->flags); + clear_bit(PG_dc_clean, &src->flags); } kunmap_atomic(kto); @@ -1038,8 +1050,9 @@ void copy_user_highpage(struct page *to, struct page *from, void clear_user_page(void *to, unsigned long u_vaddr, struct page *page) { + struct folio *folio = page_folio(page); clear_page(to); - clear_bit(PG_dc_clean, &page->flags); + clear_bit(PG_dc_clean, &folio->flags); } EXPORT_SYMBOL(clear_user_page);
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c index 5f71445f26bd..6f40f37e6550 100644 --- a/arch/arc/mm/tlb.c +++ b/arch/arc/mm/tlb.c @@ -467,8 +467,8 @@ void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep) * Note that flush (when done) involves both WBACK - so physical page is * in sync as well as INV - so any non-congruent aliases don't remain -void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned, - pte_t *ptep) +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long vaddr_unaligned, pte_t *ptep, unsigned int nr) { unsigned long vaddr = vaddr_unaligned & PAGE_MASK; phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK_PHYS; @@ -491,15 +491,19 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned, */ if ((vma->vm_flags & VM_EXEC) || addr_not_cache_congruent(paddr, vaddr)) { - - int dirty = !test_and_set_bit(PG_dc_clean, &page->flags); + struct folio *folio = page_folio(page); + int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags); if (dirty) { + unsigned long offset = offset_in_folio(folio, paddr); + nr = folio_nr_pages(folio); + paddr -= offset; + vaddr -= offset; /* wback + inv dcache lines (K-mapping) */ - __flush_dcache_page(paddr, paddr); + __flush_dcache_pages(paddr, paddr, nr); /* invalidate any existing icache lines (U-mapping) */ if (vma->vm_flags & VM_EXEC) - __inv_icache_page(paddr, vaddr); + __inv_icache_pages(paddr, vaddr, nr); } } } @@ -531,7 +535,7 @@ void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd) { pte_t pte = __pte(pmd_val(*pmd)); - update_mmu_cache(vma, addr, &pte); + update_mmu_cache_range(NULL, vma, addr, &pte, HPAGE_PMD_NR); } void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
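A worked example of the offset_in_folio() adjustment in the tlb.c hunk
above (numbers assumed for illustration): for a 4-page folio whose
first page sits at paddr 0x10000, a fault touching page 2 arrives with
paddr 0x12000, so offset is 0x2000; paddr and vaddr are wound back by
0x2000 to the start of the folio and nr becomes 4, and the whole folio
is written back and its icache lines invalidated in one pass rather
than page by page.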
From patchwork Wed Aug 2 15:13:37 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338337
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, Russell King, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v6 09/38] arm: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:37 +0100
Message-Id: <20230802151406.3735276-10-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
MIME-Version: 1.0

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().

Change the PG_dcache_clean flag from being per-page to per-folio, which
makes __dma_page_dev_to_cpu() a bit more exciting. Also add
flush_cache_pages(), even though this isn't used by generic code (yet?)
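What makes __dma_page_dev_to_cpu() "more exciting" is that the clean
bit is now per-folio, so only folios lying wholly inside the DMA'd
range may be marked clean. A sketch of the walk, using the names from
the diff below: skip a folio the range enters part-way through, then
mark each fully covered folio, stopping before a partial folio at the
tail:

	if (offset) {			/* range starts mid-folio */
		left -= folio_size(folio) - offset;
		folio = folio_next(folio);
	}
	while (left >= (ssize_t)folio_size(folio)) {
		set_bit(PG_dcache_clean, &folio->flags);
		left -= folio_size(folio);
		folio = folio_next(folio);
	}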
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Reviewed-by: Russell King (Oracle) Cc: linux-arm-kernel@lists.infradead.org --- arch/arm/include/asm/cacheflush.h | 24 +++++--- arch/arm/include/asm/pgtable.h | 5 +- arch/arm/include/asm/tlbflush.h | 14 +++-- arch/arm/mm/copypage-v4mc.c | 5 +- arch/arm/mm/copypage-v6.c | 5 +- arch/arm/mm/copypage-xscale.c | 5 +- arch/arm/mm/dma-mapping.c | 24 ++++---- arch/arm/mm/fault-armv.c | 16 ++--- arch/arm/mm/flush.c | 99 +++++++++++++++++++------------ arch/arm/mm/mm.h | 2 +- arch/arm/mm/mmu.c | 14 +++-- arch/arm/mm/nommu.c | 6 ++ 12 files changed, 133 insertions(+), 86 deletions(-) diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h index a094f964c869..841e268d2374 100644 --- a/arch/arm/include/asm/cacheflush.h +++ b/arch/arm/include/asm/cacheflush.h @@ -231,14 +231,15 @@ vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned vma->vm_flags); } -static inline void -vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn) +static inline void vivt_flush_cache_pages(struct vm_area_struct *vma, + unsigned long user_addr, unsigned long pfn, unsigned int nr) { struct mm_struct *mm = vma->vm_mm; if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) { unsigned long addr = user_addr & PAGE_MASK; - __cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags); + __cpuc_flush_user_range(addr, addr + nr * PAGE_SIZE, + vma->vm_flags); } } @@ -247,15 +248,17 @@ vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsig vivt_flush_cache_mm(mm) #define flush_cache_range(vma,start,end) \ vivt_flush_cache_range(vma,start,end) -#define flush_cache_page(vma,addr,pfn) \ - vivt_flush_cache_page(vma,addr,pfn) +#define flush_cache_pages(vma, addr, pfn, nr) \ + vivt_flush_cache_pages(vma, addr, pfn, nr) #else -extern void flush_cache_mm(struct mm_struct *mm); -extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); -extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn); +void flush_cache_mm(struct mm_struct *mm); +void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); +void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, + unsigned long pfn, unsigned int nr); #endif #define flush_cache_dup_mm(mm) flush_cache_mm(mm) +#define flush_cache_page(vma, addr, pfn) flush_cache_pages(vma, addr, pfn, 1) /* * flush_icache_user_range is used when we want to ensure that the @@ -289,7 +292,9 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr * See update_mmu_cache for the user space part. */ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -extern void flush_dcache_page(struct page *); +void flush_dcache_page(struct page *); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1 static inline void flush_kernel_vmap_range(void *addr, int size) @@ -321,6 +326,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma, * duplicate cache flushing elsewhere performed by flush_dcache_page(). 
*/ #define flush_icache_page(vma,page) do { } while (0) +#define flush_icache_pages(vma, page, nr) do { } while (0) /* * flush_cache_vmap() is used when creating mappings (eg, via vmap, diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h index 34662a9d4cab..ba573f22d7cc 100644 --- a/arch/arm/include/asm/pgtable.h +++ b/arch/arm/include/asm/pgtable.h @@ -207,8 +207,9 @@ static inline void __sync_icache_dcache(pte_t pteval) extern void __sync_icache_dcache(pte_t pteval); #endif -void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval); +void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr); +#define set_ptes set_ptes static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot) { diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h index 0ccc985b90af..38c6e4a2a0b6 100644 --- a/arch/arm/include/asm/tlbflush.h +++ b/arch/arm/include/asm/tlbflush.h @@ -619,18 +619,22 @@ extern void flush_bp_all(void); * If PG_dcache_clean is not set for the page, we need to ensure that any * cache entries for the kernels virtual memory range are written * back to the page. On ARMv6 and later, the cache coherency is handled via - * the set_pte_at() function. + * the set_ptes() function. */ #if __LINUX_ARM_ARCH__ < 6 -extern void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep); +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr); #else -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long addr, pte_t *ptep, + unsigned int nr) { } #endif +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) + #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) #endif diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c index f1da3b439b96..7ddd82b9fe8b 100644 --- a/arch/arm/mm/copypage-v4mc.c +++ b/arch/arm/mm/copypage-v4mc.c @@ -64,10 +64,11 @@ static void mc_copy_user_page(void *from, void *to) void v4_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/copypage-v6.c b/arch/arm/mm/copypage-v6.c index d8a115de5507..a1a71f36d850 100644 --- a/arch/arm/mm/copypage-v6.c +++ b/arch/arm/mm/copypage-v6.c @@ -69,11 +69,12 @@ static void discard_old_kernel_data(void *kto) static void v6_copy_user_highpage_aliasing(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); unsigned int offset = CACHE_COLOUR(vaddr); unsigned long kfrom, kto; - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); /* FIXME: not highmem safe */ discard_old_kernel_data(page_address(to)); diff --git a/arch/arm/mm/copypage-xscale.c b/arch/arm/mm/copypage-xscale.c index 
bcb485620a05..f1e29d3e8193 100644 --- a/arch/arm/mm/copypage-xscale.c +++ b/arch/arm/mm/copypage-xscale.c @@ -84,10 +84,11 @@ static void mc_copy_user_page(void *from, void *to) void xscale_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 033a1bce2b17..70cb7e63a9a5 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -695,6 +695,7 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off, static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, size_t size, enum dma_data_direction dir) { + struct folio *folio = page_folio(page); phys_addr_t paddr = page_to_phys(page) + off; /* FIXME: non-speculating: not required */ @@ -709,19 +710,18 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, * Mark the D-cache clean for these pages to avoid extra flushing. */ if (dir != DMA_TO_DEVICE && size >= PAGE_SIZE) { - unsigned long pfn; - size_t left = size; - - pfn = page_to_pfn(page) + off / PAGE_SIZE; - off %= PAGE_SIZE; - if (off) { - pfn++; - left -= PAGE_SIZE - off; + ssize_t left = size; + size_t offset = offset_in_folio(folio, paddr); + + if (offset) { + left -= folio_size(folio) - offset; + folio = folio_next(folio); } - while (left >= PAGE_SIZE) { - page = pfn_to_page(pfn++); - set_bit(PG_dcache_clean, &page->flags); - left -= PAGE_SIZE; + + while (left >= (ssize_t)folio_size(folio)) { + set_bit(PG_dcache_clean, &folio->flags); + left -= folio_size(folio); + folio = folio_next(folio); } } } diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c index 7cb125497976..2286c2ea60ec 100644 --- a/arch/arm/mm/fault-armv.c +++ b/arch/arm/mm/fault-armv.c @@ -180,12 +180,12 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma, * * Note that the pte lock will be held. */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep) +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) { unsigned long pfn = pte_pfn(*ptep); struct address_space *mapping; - struct page *page; + struct folio *folio; if (!pfn_valid(pfn)) return; @@ -194,13 +194,13 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. 
*/ - page = pfn_to_page(pfn); - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; - mapping = page_mapping_file(page); - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + folio = page_folio(pfn_to_page(pfn)); + mapping = folio_flush_mapping(folio); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); if (mapping) { if (cache_is_vivt()) make_coherent(mapping, vma, addr, ptep, pfn); diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c index 2508be91b7a0..d19d140a10c7 100644 --- a/arch/arm/mm/flush.c +++ b/arch/arm/mm/flush.c @@ -95,10 +95,10 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned __flush_icache_all(); } -void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn) +void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn, unsigned int nr) { if (cache_is_vivt()) { - vivt_flush_cache_page(vma, user_addr, pfn); + vivt_flush_cache_pages(vma, user_addr, pfn, nr); return; } @@ -196,29 +196,31 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, #endif } -void __flush_dcache_page(struct address_space *mapping, struct page *page) +void __flush_dcache_folio(struct address_space *mapping, struct folio *folio) { /* * Writeback any data associated with the kernel mapping of this * page. This ensures that data in the physical page is mutually * coherent with the kernels mapping. */ - if (!PageHighMem(page)) { - __cpuc_flush_dcache_area(page_address(page), page_size(page)); + if (!folio_test_highmem(folio)) { + __cpuc_flush_dcache_area(folio_address(folio), + folio_size(folio)); } else { unsigned long i; if (cache_is_vipt_nonaliasing()) { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_atomic(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_local_folio(folio, + i * PAGE_SIZE); __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_atomic(addr); + kunmap_local(addr); } } else { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_high_get(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_high_get(folio_page(folio, i)); if (addr) { __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_high(page + i); + kunmap_high(folio_page(folio, i)); } } } @@ -230,15 +232,14 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page) * userspace colour, which is congruent with page->index. */ if (mapping && cache_is_vipt_aliasing()) - flush_pfn_alias(page_to_pfn(page), - page->index << PAGE_SHIFT); + flush_pfn_alias(folio_pfn(folio), folio_pos(folio)); } -static void __flush_dcache_aliases(struct address_space *mapping, struct page *page) +static void __flush_dcache_aliases(struct address_space *mapping, struct folio *folio) { struct mm_struct *mm = current->active_mm; - struct vm_area_struct *mpnt; - pgoff_t pgoff; + struct vm_area_struct *vma; + pgoff_t pgoff, pgoff_end; /* * There are possible user space mappings of this page: @@ -246,21 +247,36 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p * data in the current VM view associated with this page. * - aliasing VIPT: we only need to find one mapping of this page. 
	 */
-	pgoff = page->index;
+	pgoff = folio->index;
+	pgoff_end = pgoff + folio_nr_pages(folio) - 1;
	flush_dcache_mmap_lock(mapping);
-	vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
-		unsigned long offset;
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff_end) {
+		unsigned long start, offset, pfn;
+		unsigned int nr;

		/*
		 * If this VMA is not in our MM, we can ignore it.
		 */
-		if (mpnt->vm_mm != mm)
+		if (vma->vm_mm != mm)
			continue;
-		if (!(mpnt->vm_flags & VM_MAYSHARE))
+		if (!(vma->vm_flags & VM_MAYSHARE))
			continue;
-		offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
-		flush_cache_page(mpnt, mpnt->vm_start + offset, page_to_pfn(page));
+
+		start = vma->vm_start;
+		pfn = folio_pfn(folio);
+		nr = folio_nr_pages(folio);
+		offset = pgoff - vma->vm_pgoff;
+		if (offset > -nr) {
+			pfn -= offset;
+			nr += offset;
+		} else {
+			start += offset * PAGE_SIZE;
+		}
+		if (start + nr * PAGE_SIZE > vma->vm_end)
+			nr = (vma->vm_end - start) / PAGE_SIZE;
+
+		flush_cache_pages(vma, start, pfn, nr);
	}
	flush_dcache_mmap_unlock(mapping);
 }
@@ -269,7 +285,7 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *page)
 void __sync_icache_dcache(pte_t pteval)
 {
	unsigned long pfn;
-	struct page *page;
+	struct folio *folio;
	struct address_space *mapping;

	if (cache_is_vipt_nonaliasing() && !pte_exec(pteval))
@@ -279,14 +295,14 @@ void __sync_icache_dcache(pte_t pteval)
	if (!pfn_valid(pfn))
		return;

-	page = pfn_to_page(pfn);
+	folio = page_folio(pfn_to_page(pfn));
	if (cache_is_vipt_aliasing())
-		mapping = page_mapping_file(page);
+		mapping = folio_flush_mapping(folio);
	else
		mapping = NULL;

-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
-		__flush_dcache_page(mapping, page);
+	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
+		__flush_dcache_folio(mapping, folio);

	if (pte_exec(pteval))
		__flush_icache_all();
@@ -312,7 +328,7 @@ void __sync_icache_dcache(pte_t pteval)
 * Note that we disable the lazy flush for SMP configurations where
 * the cache maintenance operations are not automatically broadcasted.
 */
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
	struct address_space *mapping;

@@ -320,31 +336,36 @@ void flush_dcache_page(struct page *page)
	 * The zero page is never written to, so never has any dirty
	 * cache lines, and therefore never needs to be flushed.
	 */
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(folio_pfn(folio)))
		return;

	if (!cache_ops_need_broadcast() && cache_is_vipt_nonaliasing()) {
-		if (test_bit(PG_dcache_clean, &page->flags))
-			clear_bit(PG_dcache_clean, &page->flags);
+		if (test_bit(PG_dcache_clean, &folio->flags))
+			clear_bit(PG_dcache_clean, &folio->flags);
		return;
	}

-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);

	if (!cache_ops_need_broadcast() &&
-	    mapping && !page_mapcount(page))
-		clear_bit(PG_dcache_clean, &page->flags);
+	    mapping && !folio_mapped(folio))
+		clear_bit(PG_dcache_clean, &folio->flags);
	else {
-		__flush_dcache_page(mapping, page);
+		__flush_dcache_folio(mapping, folio);
		if (mapping && cache_is_vivt())
-			__flush_dcache_aliases(mapping, page);
+			__flush_dcache_aliases(mapping, folio);
		else if (mapping)
			__flush_icache_all();
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
	}
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
+EXPORT_SYMBOL(flush_dcache_page);

 /*
 * Flush an anonymous page so that users of get_user_pages()
 * can safely access the data.  The expected sequence is:
diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h
index d7ffccb7fea7..419316316711 100644
--- a/arch/arm/mm/mm.h
+++ b/arch/arm/mm/mm.h
@@ -45,7 +45,7 @@ struct mem_type {

 const struct mem_type *get_mem_type(unsigned int type);

-extern void __flush_dcache_page(struct address_space *mapping, struct page *page);
+void __flush_dcache_folio(struct address_space *mapping, struct folio *folio);

 /*
 * ARM specific vm_struct->flags bits.
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 13fc4bb5f792..c9981c23e8e9 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1788,7 +1788,7 @@ void __init paging_init(const struct machine_desc *mdesc)
	bootmem_init();

	empty_zero_page = virt_to_page(zero_page);
-	__flush_dcache_page(NULL, empty_zero_page);
+	__flush_dcache_folio(NULL, page_folio(empty_zero_page));
 }

 void __init early_mm_init(const struct machine_desc *mdesc)
@@ -1797,8 +1797,8 @@ void __init early_mm_init(const struct machine_desc *mdesc)
	early_paging_init(mdesc);
 }

-void set_pte_at(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pteval)
+void set_ptes(struct mm_struct *mm, unsigned long addr,
+	      pte_t *ptep, pte_t pteval, unsigned int nr)
 {
	unsigned long ext = 0;

@@ -1808,5 +1808,11 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr,
		ext |= PTE_EXT_NG;
	}

-	set_pte_ext(ptep, pteval, ext);
+	for (;;) {
+		set_pte_ext(ptep, pteval, ext);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pteval) += PAGE_SIZE;
+	}
 }
diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
index 43cfd06bbeba..c415f3859b20 100644
--- a/arch/arm/mm/nommu.c
+++ b/arch/arm/mm/nommu.c
@@ -180,6 +180,12 @@ void setup_mm_for_reboot(void)
 {
 }

+void flush_dcache_folio(struct folio *folio)
+{
+	__cpuc_flush_dcache_area(folio_address(folio), folio_size(folio));
+}
+EXPORT_SYMBOL(flush_dcache_folio);
+
 void flush_dcache_page(struct page *page)
 {
	__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
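The set_ptes() loop above works because this architecture stores the physical
address directly in the PTE, so adding PAGE_SIZE to the pte value advances it
to the next page frame. A compilable userspace sketch of the same batching
pattern follows; pte_t and set_pte_ext() here are stand-ins for illustration,
not the kernel's definitions.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Stand-in for the kernel's pte_t; illustration only. */
typedef struct { uint64_t val; } pte_t;

/* Stand-in for the arch-specific PTE writer. */
static void set_pte_ext(pte_t *ptep, pte_t pte, unsigned long ext)
{
	ptep->val = pte.val | ext;
}

/* The batching pattern used by the new set_ptes(): write 'nr' consecutive
 * PTEs, bumping the encoded physical address by one page per iteration. */
static void set_ptes(pte_t *ptep, pte_t pte, unsigned long ext, unsigned int nr)
{
	for (;;) {
		set_pte_ext(ptep, pte, ext);
		if (--nr == 0)
			break;
		ptep++;
		pte.val += PAGE_SIZE;	/* next page frame */
	}
}

int main(void)
{
	pte_t table[4] = { { 0 } };
	pte_t first = { .val = 0x100000 };	/* hypothetical phys addr */

	set_ptes(table, first, 0, 4);
	for (int i = 0; i < 4; i++)
		printf("pte[%d] = %#lx\n", i, (unsigned long)table[i].val);
	return 0;
}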
From patchwork Wed Aug 2 15:13:38 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338366
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Catalin Marinas, Mike Rapoport,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v6 10/38] arm64: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:38 +0100
Message-Id: <20230802151406.3735276-11-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_dcache_clean flag from being per-page to per-folio.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Catalin Marinas
Acked-by: Mike Rapoport (IBM)
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/cacheflush.h |  4 +++-
 arch/arm64/include/asm/pgtable.h    | 26 +++++++++++++++------
 arch/arm64/mm/flush.c               | 36 +++++++++++------------------
 3 files changed, 36 insertions(+), 30 deletions(-)
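The per-folio PG_dcache_clean flag in this patch implements lazy cache
maintenance: writers merely clear the flag, and the expensive flush is paid
at most once, by whichever mapper next observes the flag clear via
test_and_set_bit(). A minimal userspace sketch of that idiom; the bit
helper and "flush" are stand-ins, not arm64 code.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define PG_dcache_clean 0	/* stand-in bit number */

struct folio { _Atomic unsigned long flags; };

/* Stand-in for the kernel's atomic test_and_set_bit(). */
static bool test_and_set_bit(int nr, _Atomic unsigned long *addr)
{
	unsigned long mask = 1UL << nr;
	return atomic_fetch_or(addr, mask) & mask;
}

static void sync_caches(struct folio *folio)
{
	/* Only the first mapper after a write pays for the flush. */
	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
		printf("flushing folio %p\n", (void *)folio);
	else
		printf("folio %p already clean, skipping\n", (void *)folio);
}

int main(void)
{
	struct folio f = { 0 };

	sync_caches(&f);	/* flushes */
	sync_caches(&f);	/* skips   */
	return 0;
}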
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 37185e978aeb..d115451ed263 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -114,7 +114,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
 #define copy_to_user_page copy_to_user_page

 /*
- * flush_dcache_page is used when the kernel has written to the page
+ * flush_dcache_folio is used when the kernel has written to the page
 * cache page at virtual address page->virtual.
 *
 * If this page isn't mapped (ie, page_mapping == NULL), or it might
@@ -127,6 +127,8 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
 */
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
+void flush_dcache_folio(struct folio *);
+#define flush_dcache_folio flush_dcache_folio

 static __always_inline void icache_inval_all_pou(void)
 {
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 445b18d7a47c..76bba654b5d7 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -345,12 +345,21 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
	set_pte(ptep, pte);
 }

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pte)
-{
-	page_table_check_ptes_set(mm, ptep, pte, 1);
-	return __set_pte_at(mm, addr, ptep, pte);
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+			    pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	page_table_check_ptes_set(mm, ptep, pte, nr);
+
+	for (;;) {
+		__set_pte_at(mm, addr, ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		addr += PAGE_SIZE;
+		pte_val(pte) += PAGE_SIZE;
+	}
 }
+#define set_ptes set_ptes

 /*
 * Huge pte definitions.
@@ -1049,8 +1058,9 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 /*
 * On AArch64, the cache coherency is handled via the set_pte_at() function.
 */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				    unsigned long addr, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_fault *vmf,
+		struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
+		unsigned int nr)
 {
	/*
	 * We don't do anything here, so there's a very small chance of
@@ -1059,6 +1069,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
	 */
 }

+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0)

 #ifdef CONFIG_ARM64_PA_BITS_52
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 4e6476094952..013eead9b695 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -51,20 +51,13 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,

 void __sync_icache_dcache(pte_t pte)
 {
-	struct page *page = pte_page(pte);
+	struct folio *folio = page_folio(pte_page(pte));

-	/*
-	 * HugeTLB pages are always fully mapped, so only setting head page's
-	 * PG_dcache_clean flag is enough.
-	 */
-	if (PageHuge(page))
-		page = compound_head(page);
-
-	if (!test_bit(PG_dcache_clean, &page->flags)) {
-		sync_icache_aliases((unsigned long)page_address(page),
-				    (unsigned long)page_address(page) +
-					    page_size(page));
-		set_bit(PG_dcache_clean, &page->flags);
+	if (!test_bit(PG_dcache_clean, &folio->flags)) {
+		sync_icache_aliases((unsigned long)folio_address(folio),
+				    (unsigned long)folio_address(folio) +
+					    folio_size(folio));
+		set_bit(PG_dcache_clean, &folio->flags);
	}
 }
 EXPORT_SYMBOL_GPL(__sync_icache_dcache);
@@ -74,17 +67,16 @@ EXPORT_SYMBOL_GPL(__sync_icache_dcache);
 * it as dirty for later flushing when mapped in user space (if executable,
 * see __sync_icache_dcache).
 */
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
-	/*
-	 * HugeTLB pages are always fully mapped and only head page will be
-	 * set PG_dcache_clean (see comments in __sync_icache_dcache()).
-	 */
-	if (PageHuge(page))
-		page = compound_head(page);
+	if (test_bit(PG_dcache_clean, &folio->flags))
+		clear_bit(PG_dcache_clean, &folio->flags);
+}
+EXPORT_SYMBOL(flush_dcache_folio);

-	if (test_bit(PG_dcache_clean, &page->flags))
-		clear_bit(PG_dcache_clean, &page->flags);
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
 }
 EXPORT_SYMBOL(flush_dcache_page);
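Note how both the arm and arm64 conversions keep flush_dcache_page() as a
one-line compatibility wrapper, so existing callers need no change while
folio-aware code can skip the page-to-folio lookup. The shape of that
pattern, with stand-in types for illustration:

#include <stdio.h>

/* Stand-ins for struct page / struct folio; in the kernel, page_folio()
 * resolves a page to its containing (possibly multi-page) folio. */
struct folio { unsigned long flags; };
struct page  { struct folio *folio; };

static struct folio *page_folio(struct page *page)
{
	return page->folio;
}

static void flush_dcache_folio(struct folio *folio)
{
	printf("flush folio at %p\n", (void *)folio);
}

/* The legacy entry point becomes a thin wrapper. */
static void flush_dcache_page(struct page *page)
{
	flush_dcache_folio(page_folio(page));
}

int main(void)
{
	struct folio f = { 0 };
	struct page p = { &f };

	flush_dcache_page(&p);	/* old callers keep working */
	return 0;
}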
From patchwork Wed Aug 2 15:13:39 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338341
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Guo Ren, Mike Rapoport,
 linux-csky@vger.kernel.org
Subject: [PATCH v6 11/38] csky: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:39 +0100
Message-Id: <20230802151406.3735276-12-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio().
Change the PG_dcache_clean flag from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Guo Ren
Acked-by: Mike Rapoport (IBM)
Cc: linux-csky@vger.kernel.org
---
 arch/csky/abiv1/cacheflush.c         | 32 +++++++++++++++-----------
 arch/csky/abiv1/inc/abi/cacheflush.h |  2 ++
 arch/csky/abiv2/cacheflush.c         | 32 ++++++++++++++--------------
 arch/csky/abiv2/inc/abi/cacheflush.h | 10 +++++++--
 arch/csky/include/asm/pgtable.h      |  8 ++++---
 5 files changed, 50 insertions(+), 34 deletions(-)

diff --git a/arch/csky/abiv1/cacheflush.c b/arch/csky/abiv1/cacheflush.c
index 94fbc03cbe70..171e8fb32285 100644
--- a/arch/csky/abiv1/cacheflush.c
+++ b/arch/csky/abiv1/cacheflush.c
@@ -15,45 +15,51 @@

 #define PG_dcache_clean		PG_arch_1

-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
	struct address_space *mapping;

-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(folio_pfn(folio)))
		return;

-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);

-	if (mapping && !page_mapcount(page))
-		clear_bit(PG_dcache_clean, &page->flags);
+	if (mapping && !folio_mapped(folio))
+		clear_bit(PG_dcache_clean, &folio->flags);
	else {
		dcache_wbinv_all();
		if (mapping)
			icache_inv_all();
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
	}
 }
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 EXPORT_SYMBOL(flush_dcache_page);

-void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
-		      pte_t *ptep)
+void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr)
 {
	unsigned long pfn = pte_pfn(*ptep);
-	struct page *page;
+	struct folio *folio;

	flush_tlb_page(vma, addr);

	if (!pfn_valid(pfn))
		return;

-	page = pfn_to_page(pfn);
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(pfn))
		return;

-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
+	folio = page_folio(pfn_to_page(pfn));
+	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
		dcache_wbinv_all();

-	if (page_mapping_file(page)) {
+	if (folio_flush_mapping(folio)) {
		if (vma->vm_flags & VM_EXEC)
			icache_inv_all();
	}
 }
diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index ed62e2066ba7..0d6cb65624c4 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -9,6 +9,8 @@

 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
+void flush_dcache_folio(struct folio *);
+#define flush_dcache_folio flush_dcache_folio

 #define flush_cache_mm(mm)			dcache_wbinv_all()
 #define flush_cache_page(vma, page, pfn)	cache_wbinv_all()
diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c
index 9923cd24db58..d05a551af5d5 100644
--- a/arch/csky/abiv2/cacheflush.c
+++ b/arch/csky/abiv2/cacheflush.c
@@ -7,32 +7,32 @@
 #include
 #include

-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *pte)
+void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+		unsigned long address, pte_t *pte, unsigned int nr)
 {
-	unsigned long addr;
-	struct page *page;
+	unsigned long pfn = pte_pfn(*pte);
+	struct folio *folio;
+	unsigned int i;

	flush_tlb_page(vma, address);

-	if (!pfn_valid(pte_pfn(*pte)))
+	if (!pfn_valid(pfn))
		return;

-	page = pfn_to_page(pte_pfn(*pte));
-	if (page == ZERO_PAGE(0))
-		return;
+	folio = page_folio(pfn_to_page(pfn));

-	if (test_and_set_bit(PG_dcache_clean, &page->flags))
+	if (test_and_set_bit(PG_dcache_clean, &folio->flags))
		return;

-	addr = (unsigned long) kmap_atomic(page);
-
-	dcache_wb_range(addr, addr + PAGE_SIZE);
+	for (i = 0; i < folio_nr_pages(folio); i++) {
+		unsigned long addr = (unsigned long) kmap_local_folio(folio,
+								i * PAGE_SIZE);

-	if (vma->vm_flags & VM_EXEC)
-		icache_inv_range(addr, addr + PAGE_SIZE);
-
-	kunmap_atomic((void *) addr);
+		dcache_wb_range(addr, addr + PAGE_SIZE);
+		if (vma->vm_flags & VM_EXEC)
+			icache_inv_range(addr, addr + PAGE_SIZE);
+		kunmap_local((void *) addr);
+	}
 }

 void flush_icache_deferred(struct mm_struct *mm)
diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
index a565e00c3f70..9c728933a776 100644
--- a/arch/csky/abiv2/inc/abi/cacheflush.h
+++ b/arch/csky/abiv2/inc/abi/cacheflush.h
@@ -18,11 +18,17 @@

 #define PG_dcache_clean		PG_arch_1

+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (test_bit(PG_dcache_clean, &folio->flags))
+		clear_bit(PG_dcache_clean, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 static inline void flush_dcache_page(struct page *page)
 {
-	if (test_bit(PG_dcache_clean, &page->flags))
-		clear_bit(PG_dcache_clean, &page->flags);
+	flush_dcache_folio(page_folio(page));
 }

 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index d4042495febc..42405037c871 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -28,6 +28,7 @@
 #define pgd_ERROR(e) \
	pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))

+#define PFN_PTE_SHIFT	PAGE_SHIFT
 #define pmd_pfn(pmd)	(pmd_phys(pmd) >> PAGE_SHIFT)
 #define pmd_page(pmd)	(pfn_to_page(pmd_phys(pmd) >> PAGE_SHIFT))
 #define pte_clear(mm, addr, ptep)	set_pte((ptep), \
@@ -90,7 +91,6 @@ static inline void set_pte(pte_t *p, pte_t pte)
	/* prevent out of order excution */
	smp_mb();
 }
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)

 static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 {
@@ -263,8 +263,10 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern void paging_init(void);

-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *pte);
+void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+		unsigned long address, pte_t *pte, unsigned int nr);
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(NULL, vma, addr, ptep, 1)

 #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
	remap_pfn_range(vma, vaddr, pfn, size, prot)
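The abiv2 loop above is the general pattern for cache maintenance on a folio
that may live in highmem: map and process one PAGE_SIZE chunk at a time via
kmap_local_folio(). A userspace sketch of that iteration shape; the mapping
helpers and the flush_folio_by_page() name are invented stand-ins.

#include <stdio.h>

#define PAGE_SIZE 4096UL

struct folio { void *base; unsigned long nr_pages; };

static unsigned long folio_nr_pages(struct folio *folio)
{
	return folio->nr_pages;
}

/* Stand-in: the kernel maps one page of the folio at byte offset 'offset'. */
static void *kmap_local_folio(struct folio *folio, unsigned long offset)
{
	return (char *)folio->base + offset;
}

static void kunmap_local(void *addr) { (void)addr; }

static void dcache_wb_range(unsigned long start, unsigned long end)
{
	printf("writeback %#lx-%#lx\n", start, end);
}

static void flush_folio_by_page(struct folio *folio)
{
	for (unsigned long i = 0; i < folio_nr_pages(folio); i++) {
		unsigned long addr =
			(unsigned long)kmap_local_folio(folio, i * PAGE_SIZE);

		dcache_wb_range(addr, addr + PAGE_SIZE);
		kunmap_local((void *)addr);
	}
}

int main(void)
{
	static char buf[4 * 4096];
	struct folio f = { buf, 4 };

	flush_folio_by_page(&f);
	return 0;
}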
From patchwork Wed Aug 2 15:13:40 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338346
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Brian Cain, Mike Rapoport
Subject: [PATCH v6 12/38] hexagon: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:40 +0100
Message-Id: <20230802151406.3735276-13-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
Add PFN_PTE_SHIFT and update_mmu_cache_range().

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Brian Cain
Acked-by: Mike Rapoport (IBM)
---
 arch/hexagon/include/asm/cacheflush.h | 8 ++++++--
 arch/hexagon/include/asm/pgtable.h    | 9 +--------
 2 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
index 6eff0730e6ef..dc3f500a5a01 100644
--- a/arch/hexagon/include/asm/cacheflush.h
+++ b/arch/hexagon/include/asm/cacheflush.h
@@ -58,12 +58,16 @@ extern void flush_cache_all_hexagon(void);
 * clean the cache when the PTE is set.
 *
 */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				    unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_fault *vmf,
+		struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
	/* generic_ptrace_pokedata doesn't wind up here, does it? */
 }

+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
+
 void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
		       unsigned long vaddr, void *dst, void *src, int len);
 #define copy_to_user_page copy_to_user_page
diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h
index 59393613d086..dd05dd71b8ec 100644
--- a/arch/hexagon/include/asm/pgtable.h
+++ b/arch/hexagon/include/asm/pgtable.h
@@ -338,6 +338,7 @@ static inline int pte_exec(pte_t pte)
 /* __swp_entry_to_pte - extract PTE from swap entry */
 #define __swp_entry_to_pte(x) ((pte_t) { (x).val })

+#define PFN_PTE_SHIFT	PAGE_SHIFT
 /* pfn_pte - convert page number and protection value to page table entry */
 #define pfn_pte(pfn, pgprot) __pte((pfn << PAGE_SHIFT) | pgprot_val(pgprot))

@@ -345,14 +346,6 @@ static inline int pte_exec(pte_t pte)
 #define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT)
 #define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval))

-/*
- * set_pte_at - update page table and do whatever magic may be
- * necessary to make the underlying hardware/firmware take note.
- *
- * VM may require a virtual instruction to alert the MMU.
- */
-#define set_pte_at(mm, addr, ptep, pte) set_pte(ptep, pte)
-
 static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 {
	return (unsigned long)__va(pmd_val(pmd) & PAGE_MASK);
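PFN_PTE_SHIFT tells generic code how far the page frame number is shifted
inside a PTE, which is what lets a generic set_ptes() step from one page's
PTE value to the next without an arch hook. A sketch of the arithmetic,
assuming a hexagon-style layout where the pfn sits directly above the low
protection bits (stand-in types, illustration only):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PFN_PTE_SHIFT	PAGE_SHIFT	/* pfn lives above the prot bits */

typedef uint64_t pte_t;

static pte_t pfn_pte(uint64_t pfn, uint64_t prot)
{
	return (pfn << PFN_PTE_SHIFT) | prot;
}

static uint64_t pte_pfn(pte_t pte)
{
	return pte >> PFN_PTE_SHIFT;
}

int main(void)
{
	pte_t pte = pfn_pte(0x1234, 0x3);

	/* Generic code can advance to the next page's PTE value like this: */
	pte_t next = pte + (1UL << PFN_PTE_SHIFT);

	printf("pfn %#lx -> next pfn %#lx\n",
	       (unsigned long)pte_pfn(pte), (unsigned long)pte_pfn(next));
	return 0;
}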
From patchwork Wed Aug 2 15:13:41 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338338
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Mike Rapoport, linux-ia64@vger.kernel.org
Subject: [PATCH v6 13/38] ia64: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:41 +0100
Message-Id: <20230802151406.3735276-14-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_clean) flag from being per-page to
per-folio, which makes arch_dma_mark_clean() and mark_clean() a little
more exciting.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: linux-ia64@vger.kernel.org
---
 arch/ia64/hp/common/sba_iommu.c    | 26 +++++++++++++-----------
 arch/ia64/include/asm/cacheflush.h | 14 ++++++++++----
 arch/ia64/include/asm/pgtable.h    |  4 ++--
 arch/ia64/mm/init.c                | 28 +++++++++++++++++---------
 4 files changed, 46 insertions(+), 26 deletions(-)

diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index 8ad6946521d8..48d475f10003 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -798,22 +798,26 @@ sba_io_pdir_entry(u64 *pdir_ptr, unsigned long vba)
 #endif

 #ifdef ENABLE_MARK_CLEAN
-/**
+/*
 * Since DMA is i-cache coherent, any (complete) pages that were written via
 * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
 * flush them when they get mapped into an executable vm-area.
 */
-static void
-mark_clean (void *addr, size_t size)
+static void mark_clean(void *addr, size_t size)
 {
-	unsigned long pg_addr, end;
-
-	pg_addr = PAGE_ALIGN((unsigned long) addr);
-	end = (unsigned long) addr + size;
-	while (pg_addr + PAGE_SIZE <= end) {
-		struct page *page = virt_to_page((void *)pg_addr);
-		set_bit(PG_arch_1, &page->flags);
-		pg_addr += PAGE_SIZE;
+	struct folio *folio = virt_to_folio(addr);
+	ssize_t left = size;
+	size_t offset = offset_in_folio(folio, addr);
+
+	if (offset) {
+		left -= folio_size(folio) - offset;
+		folio = folio_next(folio);
+	}
+
+	while (left >= folio_size(folio)) {
+		set_bit(PG_arch_1, &folio->flags);
+		left -= folio_size(folio);
+		folio = folio_next(folio);
	}
 }
 #endif
diff --git a/arch/ia64/include/asm/cacheflush.h b/arch/ia64/include/asm/cacheflush.h
index 708c0fa5d975..eac493fa9e0d 100644
--- a/arch/ia64/include/asm/cacheflush.h
+++ b/arch/ia64/include/asm/cacheflush.h
@@ -13,10 +13,16 @@
 #include

 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-#define flush_dcache_page(page)			\
-do {						\
-	clear_bit(PG_arch_1, &(page)->flags);	\
-} while (0)
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	clear_bit(PG_arch_1, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}

 extern void flush_icache_range(unsigned long start, unsigned long end);
 #define flush_icache_range flush_icache_range
diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
index 21c97e31a28a..4e5dd800ce1f 100644
--- a/arch/ia64/include/asm/pgtable.h
+++ b/arch/ia64/include/asm/pgtable.h
@@ -206,6 +206,7 @@ ia64_phys_addr_valid (unsigned long addr)
 #define RGN_MAP_SHIFT (PGDIR_SHIFT + PTRS_PER_PGD_SHIFT - 3)
 #define RGN_MAP_LIMIT	((1UL << RGN_MAP_SHIFT) - PAGE_SIZE)	/* per region addr limit */

+#define PFN_PTE_SHIFT	PAGE_SHIFT
 /*
 * Conversion functions: convert page frame number (pfn) and a protection value to a page
 * table entry (pte).
@@ -303,8 +304,6 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
	*ptep = pteval;
 }

-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
-
 /*
 * Make page protection values cacheable, uncacheable, or write-
 * combining.
 * Note that "protection" is really a misnomer here as the
@@ -396,6 +395,7 @@ pte_same (pte_t a, pte_t b)
	return pte_val(a) == pte_val(b);
 }

+#define update_mmu_cache_range(vmf, vma, address, ptep, nr) do { } while (0)
 #define update_mmu_cache(vma, address, ptep) do { } while (0)

 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 7f5353e28516..b95debabdc2a 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -50,30 +50,40 @@ void
 __ia64_sync_icache_dcache (pte_t pte)
 {
	unsigned long addr;
-	struct page *page;
+	struct folio *folio;

-	page = pte_page(pte);
-	addr = (unsigned long) page_address(page);
+	folio = page_folio(pte_page(pte));
+	addr = (unsigned long)folio_address(folio);

-	if (test_bit(PG_arch_1, &page->flags))
+	if (test_bit(PG_arch_1, &folio->flags))
		return;			/* i-cache is already coherent with d-cache */

-	flush_icache_range(addr, addr + page_size(page));
-	set_bit(PG_arch_1, &page->flags);	/* mark page as clean */
+	flush_icache_range(addr, addr + folio_size(folio));
+	set_bit(PG_arch_1, &folio->flags);	/* mark page as clean */
 }

 /*
- * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * Since DMA is i-cache coherent, any (complete) folios that were written via
 * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
 * flush them when they get mapped into an executable vm-area.
 */
 void arch_dma_mark_clean(phys_addr_t paddr, size_t size)
 {
	unsigned long pfn = PHYS_PFN(paddr);
+	struct folio *folio = page_folio(pfn_to_page(pfn));
+	ssize_t left = size;
+	size_t offset = offset_in_folio(folio, paddr);

-	do {
+	if (offset) {
+		left -= folio_size(folio) - offset;
+		folio = folio_next(folio);
+	}
+
+	while (left >= (ssize_t)folio_size(folio)) {
		set_bit(PG_arch_1, &pfn_to_page(pfn)->flags);
-	} while (++pfn <= PHYS_PFN(paddr + size - 1));
+		left -= folio_size(folio);
+		folio = folio_next(folio);
+	}
 }

 inline void
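mark_clean() and arch_dma_mark_clean() now share an idiom: skip a leading
folio that the range only partially covers, then mark every folio the range
covers completely. A self-contained sketch of that walk over a toy folio
chain; folio_next(), folio_size() and the linked-list layout here are
stand-ins for illustration.

#include <stdio.h>
#include <sys/types.h>	/* ssize_t */

struct folio { size_t size; struct folio *next; int clean; };

static size_t folio_size(struct folio *f) { return f->size; }
static struct folio *folio_next(struct folio *f) { return f->next; }

/* Mark only folios that the 'size' bytes starting 'offset' into 'folio'
 * cover completely; a partially written folio must stay "dirty". */
static void mark_clean(struct folio *folio, size_t offset, size_t size)
{
	ssize_t left = size;

	if (offset) {
		left -= folio_size(folio) - offset;
		folio = folio_next(folio);
	}

	while (folio && left >= (ssize_t)folio_size(folio)) {
		folio->clean = 1;
		left -= folio_size(folio);
		folio = folio_next(folio);
	}
}

int main(void)
{
	struct folio c = { 4096, NULL, 0 };
	struct folio b = { 16384, &c, 0 };
	struct folio a = { 4096, &b, 0 };

	/* Covers the tail of a, all of b, and 100 bytes of c. */
	mark_clean(&a, 2048, 2048 + 16384 + 100);
	printf("a=%d b=%d c=%d\n", a.clean, b.clean, c.clean);	/* 0 1 0 */
	return 0;
}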
From patchwork Wed Aug 2 15:13:42 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338370
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Mike Rapoport, Huacai Chen, WANG Xuerui,
 loongarch@lists.linux.dev
Subject: [PATCH v6 14/38] loongarch: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:42 +0100
Message-Id: <20230802151406.3735276-15-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
Add update_mmu_cache_range() and change _PFN_SHIFT to PFN_PTE_SHIFT.
It would probably be more efficient to implement __update_tlb() by
flushing the entire folio instead of calling __update_tlb() N times,
but I'll leave that for someone who understands the architecture better.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: Huacai Chen
Cc: WANG Xuerui
Cc: loongarch@lists.linux.dev
---
 arch/loongarch/include/asm/cacheflush.h   |  1 +
 arch/loongarch/include/asm/pgtable-bits.h |  4 +--
 arch/loongarch/include/asm/pgtable.h      | 33 ++++++++++++-----------
 arch/loongarch/mm/pgtable.c               |  2 +-
 arch/loongarch/mm/tlb.c                   |  2 +-
 5 files changed, 23 insertions(+), 19 deletions(-)
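As the commit message says, the range hook is implemented by invoking
__update_tlb() once per page, leaving a batched flush as future work. The
fallback loop looks roughly like this userspace sketch, with the arch hook
stubbed out (stand-ins, not loongarch code):

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Stand-in for the arch-private TLB refill helper. */
static void __update_tlb(unsigned long address)
{
	printf("update tlb entry for %#lx\n", address);
}

/* One hook call per page: correct everywhere, if not optimal. */
static void update_mmu_cache_range(unsigned long address, unsigned int nr)
{
	for (;;) {
		__update_tlb(address);
		if (--nr == 0)
			break;
		address += PAGE_SIZE;
	}
}

int main(void)
{
	update_mmu_cache_range(0x400000, 3);
	return 0;
}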
diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
index 0681788eb474..88a44da50a3b 100644
--- a/arch/loongarch/include/asm/cacheflush.h
+++ b/arch/loongarch/include/asm/cacheflush.h
@@ -47,6 +47,7 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
 #define flush_cache_vmap(start, end)			do { } while (0)
 #define flush_cache_vunmap(start, end)			do { } while (0)
 #define flush_icache_page(vma, page)			do { } while (0)
+#define flush_icache_pages(vma, page)			do { } while (0)
 #define flush_icache_user_page(vma, page, addr, len)	do { } while (0)
 #define flush_dcache_page(page)				do { } while (0)
 #define flush_dcache_mmap_lock(mapping)			do { } while (0)
diff --git a/arch/loongarch/include/asm/pgtable-bits.h b/arch/loongarch/include/asm/pgtable-bits.h
index de46a6b1e9f1..35348d4c4209 100644
--- a/arch/loongarch/include/asm/pgtable-bits.h
+++ b/arch/loongarch/include/asm/pgtable-bits.h
@@ -50,12 +50,12 @@
 #define _PAGE_NO_EXEC		(_ULCAST_(1) << _PAGE_NO_EXEC_SHIFT)
 #define _PAGE_RPLV		(_ULCAST_(1) << _PAGE_RPLV_SHIFT)
 #define _CACHE_MASK		(_ULCAST_(3) << _CACHE_SHIFT)
-#define _PFN_SHIFT		(PAGE_SHIFT - 12 + _PAGE_PFN_SHIFT)
+#define PFN_PTE_SHIFT		(PAGE_SHIFT - 12 + _PAGE_PFN_SHIFT)

 #define _PAGE_USER	(PLV_USER << _PAGE_PLV_SHIFT)
 #define _PAGE_KERN	(PLV_KERN << _PAGE_PLV_SHIFT)

-#define _PFN_MASK (~((_ULCAST_(1) << (_PFN_SHIFT)) - 1) & \
+#define _PFN_MASK (~((_ULCAST_(1) << (PFN_PTE_SHIFT)) - 1) & \
		   ((_ULCAST_(1) << (_PAGE_PFN_END_SHIFT)) - 1))

 /*
diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index 38afeb7dd58b..e7cf25e452c0 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -237,9 +237,9 @@ extern pmd_t mk_pmd(struct page *page, pgprot_t prot);
 extern void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pmd_t pmd);

 #define pte_page(x)		pfn_to_page(pte_pfn(x))
-#define pte_pfn(x)		((unsigned long)(((x).pte & _PFN_MASK) >> _PFN_SHIFT))
-#define pfn_pte(pfn, prot)	__pte(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
-#define pfn_pmd(pfn, prot)	__pmd(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
+#define pte_pfn(x)		((unsigned long)(((x).pte & _PFN_MASK) >> PFN_PTE_SHIFT))
+#define pfn_pte(pfn, prot)	__pte(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot)	__pmd(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))

 /*
 * Initialize a new pgd / pud / pmd table with invalid pointers.
@@ -334,19 +334,13 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
	}
 }

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pteval)
-{
-	set_pte(ptep, pteval);
-}
-
 static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
	/* Preserve global status for the pair */
	if (pte_val(*ptep_buddy(ptep)) & _PAGE_GLOBAL)
-		set_pte_at(mm, addr, ptep, __pte(_PAGE_GLOBAL));
+		set_pte(ptep, __pte(_PAGE_GLOBAL));
	else
-		set_pte_at(mm, addr, ptep, __pte(0));
+		set_pte(ptep, __pte(0));
 }

 #define PGD_T_LOG2	(__builtin_ffs(sizeof(pgd_t)) - 1)
@@ -445,11 +439,20 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 extern void __update_tlb(struct vm_area_struct *vma,
			unsigned long address, pte_t *ptep);

-static inline void update_mmu_cache(struct vm_area_struct *vma,
-			unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_fault *vmf,
+		struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
-	__update_tlb(vma, address, ptep);
+	for (;;) {
+		__update_tlb(vma, address, ptep);
+		if (--nr == 0)
+			break;
+		address += PAGE_SIZE;
+		ptep++;
+	}
 }
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(NULL, vma, addr, ptep, 1)

 #define __HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb	update_mmu_cache
@@ -462,7 +465,7 @@ static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,

 static inline unsigned long pmd_pfn(pmd_t pmd)
 {
-	return (pmd_val(pmd) & _PFN_MASK) >> _PFN_SHIFT;
+	return (pmd_val(pmd) & _PFN_MASK) >> PFN_PTE_SHIFT;
 }

 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c
index 36a6dc0148ae..1260cf30e3ee 100644
--- a/arch/loongarch/mm/pgtable.c
+++ b/arch/loongarch/mm/pgtable.c
@@ -107,7 +107,7 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot)
 {
	pmd_t pmd;

-	pmd_val(pmd) = (page_to_pfn(page) << _PFN_SHIFT) | pgprot_val(prot);
+	pmd_val(pmd) = (page_to_pfn(page) << PFN_PTE_SHIFT) | pgprot_val(prot);

	return pmd;
 }
diff --git a/arch/loongarch/mm/tlb.c b/arch/loongarch/mm/tlb.c
index 00bb563e3c89..eb8572e201ea 100644
--- a/arch/loongarch/mm/tlb.c
+++ b/arch/loongarch/mm/tlb.c
@@ -252,7 +252,7 @@ static void output_pgtable_bits_defines(void)
	pr_define("_PAGE_WRITE_SHIFT %d\n", _PAGE_WRITE_SHIFT);
	pr_define("_PAGE_NO_READ_SHIFT %d\n", _PAGE_NO_READ_SHIFT);
	pr_define("_PAGE_NO_EXEC_SHIFT %d\n", _PAGE_NO_EXEC_SHIFT);
-	pr_define("_PFN_SHIFT %d\n", _PFN_SHIFT);
+	pr_define("PFN_PTE_SHIFT %d\n", PFN_PTE_SHIFT);
	pr_debug("\n");
 }
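One subtlety preserved by the pgtable.h hunk above: loongarch PTEs come in
even/odd pairs that share one TLB entry, so pte_clear() must keep _PAGE_GLOBAL
set if the buddy entry has it. A simplified sketch of the pairing rule; the
bit layout and alignment handling here are stand-ins for illustration.

#include <stdint.h>
#include <stdio.h>

#define _PAGE_GLOBAL	(1UL << 0)	/* stand-in bit layout */

typedef uint64_t pte_t;

/* PTE pairs map one TLB entry; XOR-ing the low index bit of the pointer
 * finds the buddy of either entry (requires pair alignment, which page
 * tables guarantee in the kernel). */
static pte_t *ptep_buddy(pte_t *ptep)
{
	return (pte_t *)((uintptr_t)ptep ^ sizeof(pte_t));
}

static void pte_clear(pte_t *ptep)
{
	if (*ptep_buddy(ptep) & _PAGE_GLOBAL)
		*ptep = _PAGE_GLOBAL;	/* keep the pair's global bit */
	else
		*ptep = 0;
}

int main(void)
{
	_Alignas(16) pte_t pair[2] = {
		0xabc000 | _PAGE_GLOBAL,
		0xdef000 | _PAGE_GLOBAL,
	};

	pte_clear(&pair[0]);
	printf("pte0=%#lx pte1=%#lx\n",
	       (unsigned long)pair[0], (unsigned long)pair[1]);
	return 0;
}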
PFN_PTE_SHIFT); pr_debug("\n"); }
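A note on the recurring PFN_PTE_SHIFT rename: the generic set_ptes() fallback added earlier in this series advances from one PTE to the next by bumping the PFN field directly, which is why every architecture must export the shift under one agreed name. Roughly (a simplified sketch; the version in the series also handles lazy-MMU mode and page table checking):

static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
		pte_t *ptep, pte_t pte, unsigned int nr)
{
	for (;;) {
		set_pte(ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		/* step the PFN field by one page; all other bits unchanged */
		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
	}
}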
From patchwork Wed Aug 2 15:13:43 2023 X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13338340 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Geert Uytterhoeven , Mike Rapoport , linux-m68k@lists.linux-m68k.org Subject: [PATCH v6 15/38] m68k: Implement the new page table range API Date: Wed, 2 Aug 2023 16:13:43 +0100 Message-Id: <20230802151406.3735276-16-willy@infradead.org> In-Reply-To: <20230802151406.3735276-1-willy@infradead.org> References: <20230802151406.3735276-1-willy@infradead.org> Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio().
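Where an architecture declares ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE but does not provide its own flush_dcache_folio(), the kernel's generic fallback just flushes the folio page by page, roughly:

void flush_dcache_folio(struct folio *folio)
{
	long i, nr = folio_nr_pages(folio);

	/* flush each constituent page through the existing per-page hook */
	for (i = 0; i < nr; i++)
		flush_dcache_page(folio_page(folio, i));
}

m68k can do better than N separate calls, which is what the __flush_pages_to_ram() change below is about: one ranged flush covers the whole folio.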
Signed-off-by: Matthew Wilcox (Oracle) Tested-by: Geert Uytterhoeven Acked-by: Mike Rapoport (IBM) Cc: linux-m68k@lists.linux-m68k.org --- arch/m68k/include/asm/cacheflush_mm.h | 27 ++++++++++++++++-------- arch/m68k/include/asm/mcf_pgtable.h | 1 + arch/m68k/include/asm/motorola_pgtable.h | 1 + arch/m68k/include/asm/pgtable_mm.h | 10 +++++---- arch/m68k/include/asm/sun3_pgtable.h | 1 + arch/m68k/mm/motorola.c | 2 +- 6 files changed, 28 insertions(+), 14 deletions(-) diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h index 1ac55e7b47f0..88eb85e81ef6 100644 --- a/arch/m68k/include/asm/cacheflush_mm.h +++ b/arch/m68k/include/asm/cacheflush_mm.h @@ -220,24 +220,29 @@ static inline void flush_cache_page(struct vm_area_struct *vma, unsigned long vm /* Push the page at kernel virtual address and clear the icache */ /* RZ: use cpush %bc instead of cpush %dc, cinv %ic */ -static inline void __flush_page_to_ram(void *vaddr) +static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr) { if (CPU_IS_COLDFIRE) { unsigned long addr, start, end; addr = ((unsigned long) vaddr) & ~(PAGE_SIZE - 1); start = addr & ICACHE_SET_MASK; - end = (addr + PAGE_SIZE - 1) & ICACHE_SET_MASK; + end = (addr + nr * PAGE_SIZE - 1) & ICACHE_SET_MASK; if (start > end) { flush_cf_bcache(0, end); end = ICACHE_MAX_ADDR; } flush_cf_bcache(start, end); } else if (CPU_IS_040_OR_060) { - __asm__ __volatile__("nop\n\t" - ".chip 68040\n\t" - "cpushp %%bc,(%0)\n\t" - ".chip 68k" - : : "a" (__pa(vaddr))); + unsigned long paddr = __pa(vaddr); + + do { + __asm__ __volatile__("nop\n\t" + ".chip 68040\n\t" + "cpushp %%bc,(%0)\n\t" + ".chip 68k" + : : "a" (paddr)); + paddr += PAGE_SIZE; + } while (--nr); } else { unsigned long _tmp; __asm__ __volatile__("movec %%cacr,%0\n\t" @@ -249,10 +254,14 @@ static inline void __flush_page_to_ram(void *vaddr) } #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -#define flush_dcache_page(page) __flush_page_to_ram(page_address(page)) +#define flush_dcache_page(page) __flush_pages_to_ram(page_address(page), 1) +#define flush_dcache_folio(folio) \ + __flush_pages_to_ram(folio_address(folio), folio_nr_pages(folio)) #define flush_dcache_mmap_lock(mapping) do { } while (0) #define flush_dcache_mmap_unlock(mapping) do { } while (0) -#define flush_icache_page(vma, page) __flush_page_to_ram(page_address(page)) +#define flush_icache_pages(vma, page, nr) \ + __flush_pages_to_ram(page_address(page), nr) +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page, unsigned long addr, int len); diff --git a/arch/m68k/include/asm/mcf_pgtable.h b/arch/m68k/include/asm/mcf_pgtable.h index 43e8da8465f9..772b7e7b0654 100644 --- a/arch/m68k/include/asm/mcf_pgtable.h +++ b/arch/m68k/include/asm/mcf_pgtable.h @@ -291,6 +291,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) return pte; } +#define PFN_PTE_SHIFT PAGE_SHIFT #define pmd_pfn(pmd) (pmd_val(pmd) >> PAGE_SHIFT) #define pmd_page(pmd) (pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT)) diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h index ec0dc19ab834..38d5e5edc3e1 100644 --- a/arch/m68k/include/asm/motorola_pgtable.h +++ b/arch/m68k/include/asm/motorola_pgtable.h @@ -112,6 +112,7 @@ static inline void pud_set(pud_t *pudp, pmd_t *pmdp) #define pte_present(pte) (pte_val(pte) & (_PAGE_PRESENT | _PAGE_PROTNONE)) #define pte_clear(mm,addr,ptep) ({ pte_val(*(ptep)) = 0; }) +#define 
PFN_PTE_SHIFT PAGE_SHIFT #define pte_page(pte) virt_to_page(__va(pte_val(pte))) #define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT) #define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot)) diff --git a/arch/m68k/include/asm/pgtable_mm.h b/arch/m68k/include/asm/pgtable_mm.h index b93c41fe2067..dbdf1c2b2f66 100644 --- a/arch/m68k/include/asm/pgtable_mm.h +++ b/arch/m68k/include/asm/pgtable_mm.h @@ -31,8 +31,6 @@ do{ \ *(pteptr) = (pteval); \ } while(0) -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) - /* PMD_SHIFT determines the size of the area a second-level page table can map */ #if CONFIG_PGTABLE_LEVELS == 3 @@ -138,11 +136,15 @@ extern void kernel_set_cachemode(void *addr, unsigned long size, int cmode); * tables contain all the necessary information. The Sun3 does, but * they are updated on demand. */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) + #endif /* !__ASSEMBLY__ */ /* MMU-specific headers */ diff --git a/arch/m68k/include/asm/sun3_pgtable.h b/arch/m68k/include/asm/sun3_pgtable.h index 9e7bf8a5f8f8..0cc39a88ce55 100644 --- a/arch/m68k/include/asm/sun3_pgtable.h +++ b/arch/m68k/include/asm/sun3_pgtable.h @@ -105,6 +105,7 @@ static inline void pte_clear (struct mm_struct *mm, unsigned long addr, pte_t *p pte_val (*ptep) = 0; } +#define PFN_PTE_SHIFT 0 #define pte_pfn(pte) (pte_val(pte) & SUN3_PAGE_PGNUM_MASK) #define pfn_pte(pfn, pgprot) \ ({ pte_t __pte; pte_val(__pte) = pfn | pgprot_val(pgprot); __pte; }) diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c index c75984e2d86b..8bca46e51e94 100644 --- a/arch/m68k/mm/motorola.c +++ b/arch/m68k/mm/motorola.c @@ -81,7 +81,7 @@ static inline void cache_page(void *vaddr) void mmu_page_ctor(void *page) { - __flush_page_to_ram(page); + __flush_pages_to_ram(page, 1); flush_tlb_kernel_page(page); nocache_page(page); }
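To see what the folio hooks buy at a call site, here is a hypothetical helper (names invented for illustration, not part of the patch) that writes into a folio through the kernel direct map:

/* One flush_dcache_folio() now replaces a flush_dcache_page() loop. */
static void fill_folio(struct folio *folio, int c)
{
	memset(folio_address(folio), c, folio_size(folio));
	flush_dcache_folio(folio);
}

On m68k this becomes a single __flush_pages_to_ram() call spanning folio_nr_pages() pages.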
From patchwork Wed Aug 2 15:13:44 2023 X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13338357 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport , Michal Simek Subject: [PATCH v6 16/38] microblaze: Implement the new page table range API Date: Wed, 2 Aug 2023 16:13:44 +0100 Message-Id: <20230802151406.3735276-17-willy@infradead.org> In-Reply-To: <20230802151406.3735276-1-willy@infradead.org> References: <20230802151406.3735276-1-willy@infradead.org>
Rename PFN_SHIFT_OFFSET to PFN_PTE_SHIFT. Change the calling convention for set_pte() to be the same as other architectures. Add update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio(). Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: Michal Simek --- arch/microblaze/include/asm/cacheflush.h | 8 ++++++++ arch/microblaze/include/asm/pgtable.h | 15 ++++----------- arch/microblaze/include/asm/tlbflush.h | 4 +++- 3 files changed, 15 insertions(+), 12 deletions(-) diff --git a/arch/microblaze/include/asm/cacheflush.h b/arch/microblaze/include/asm/cacheflush.h index 39f8fb6768d8..e6641ff98cb3 100644 --- a/arch/microblaze/include/asm/cacheflush.h +++ b/arch/microblaze/include/asm/cacheflush.h @@ -74,6 +74,14 @@ do { \ flush_dcache_range((unsigned) (addr), (unsigned) (addr) + PAGE_SIZE); \ } while (0); +static inline void flush_dcache_folio(struct folio *folio) +{ + unsigned long addr = folio_pfn(folio) << PAGE_SHIFT; + + flush_dcache_range(addr, addr + folio_size(folio)); +} +#define flush_dcache_folio flush_dcache_folio + #define flush_cache_page(vma, vmaddr, pfn) \ flush_dcache_range(pfn << PAGE_SHIFT, (pfn << PAGE_SHIFT) + PAGE_SIZE); diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h index d1b8272abcd9..6f9b99082518 100644 --- a/arch/microblaze/include/asm/pgtable.h +++ b/arch/microblaze/include/asm/pgtable.h @@ -230,12 +230,12 @@ extern unsigned long empty_zero_page[1024]; #define pte_page(x) (mem_map + (unsigned long) \ ((pte_val(x) - memory_start) >> PAGE_SHIFT)) -#define PFN_SHIFT_OFFSET (PAGE_SHIFT) +#define PFN_PTE_SHIFT PAGE_SHIFT -#define pte_pfn(x) (pte_val(x) >> PFN_SHIFT_OFFSET) +#define pte_pfn(x) (pte_val(x) >> PFN_PTE_SHIFT) #define pfn_pte(pfn, prot) \ - __pte(((pte_basic_t)(pfn) << PFN_SHIFT_OFFSET) | pgprot_val(prot)) + __pte(((pte_basic_t)(pfn) << PFN_PTE_SHIFT) | pgprot_val(prot)) #ifndef __ASSEMBLY__ /* @@ -330,14 +330,7 @@ static inline unsigned long pte_update(pte_t *p, unsigned long clr, /* * set_pte stores a linux PTE into the linux page table.
*/ -static inline void set_pte(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pte) -{ - *ptep = pte; -} - -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pte) +static inline void set_pte(pte_t *ptep, pte_t pte) { *ptep = pte; } diff --git a/arch/microblaze/include/asm/tlbflush.h b/arch/microblaze/include/asm/tlbflush.h index 2038168ed128..a31ae9d44083 100644 --- a/arch/microblaze/include/asm/tlbflush.h +++ b/arch/microblaze/include/asm/tlbflush.h @@ -33,7 +33,9 @@ static inline void local_flush_tlb_range(struct vm_area_struct *vma, #define flush_tlb_kernel_range(start, end) do { } while (0) -#define update_mmu_cache(vma, addr, ptep) do { } while (0) +#define update_mmu_cache_range(vmf, vma, addr, ptep, nr) do { } while (0) +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) #define flush_tlb_all local_flush_tlb_all #define flush_tlb_mm local_flush_tlb_mm
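The set_pte() signature change is mechanical; any in-arch caller updates like this (an illustrative fragment, not taken from the patch):

	/* before: mm and addr were accepted but never used */
	set_pte(mm, addr, ptep, pteval);
	/* after: same store, matching every other architecture */
	set_pte(ptep, pteval);

Since microblaze's implementation never looked at mm or addr, dropping them is behaviour-neutral.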
From patchwork Wed Aug 2 15:13:45 2023 X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13338371 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport , Thomas Bogendoerfer , linux-mips@vger.kernel.org Subject: [PATCH v6 17/38] mips: Implement the new page table range API Date: Wed, 2 Aug 2023 16:13:45 +0100 Message-Id: <20230802151406.3735276-18-willy@infradead.org> In-Reply-To: <20230802151406.3735276-1-willy@infradead.org> References: <20230802151406.3735276-1-willy@infradead.org> Rename _PFN_SHIFT to PFN_PTE_SHIFT. Convert a few places to call set_pte() instead of set_pte_at(). Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio.
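The per-folio PG_dcache_dirty conversion keeps MIPS's existing deferral scheme, just at folio granularity. The core pattern, as it appears in the mm/cache.c hunk below:

	/* nothing maps the folio yet: defer the writeback */
	if (mapping && !mapping_mapped(mapping)) {
		folio_set_dcache_dirty(folio);
		return;
	}
	/* otherwise flush all of its pages now */
	__flush_dcache_pages(&folio->page, folio_nr_pages(folio));

The flush then happens at most once per folio instead of once per page, with __update_cache() flushing and clearing the flag when the folio is later mapped into userspace.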
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: Thomas Bogendoerfer Cc: linux-mips@vger.kernel.org --- arch/mips/bcm47xx/prom.c | 2 +- arch/mips/include/asm/cacheflush.h | 32 +++++++++----- arch/mips/include/asm/pgtable-32.h | 10 ++--- arch/mips/include/asm/pgtable-64.h | 6 +-- arch/mips/include/asm/pgtable-bits.h | 6 +-- arch/mips/include/asm/pgtable.h | 63 ++++++++++++++++++---------- arch/mips/mm/c-r4k.c | 5 ++- arch/mips/mm/cache.c | 56 ++++++++++++------------- arch/mips/mm/init.c | 21 ++++++---- arch/mips/mm/pgtable-32.c | 2 +- arch/mips/mm/pgtable-64.c | 2 +- arch/mips/mm/tlbex.c | 2 +- 12 files changed, 121 insertions(+), 86 deletions(-) diff --git a/arch/mips/bcm47xx/prom.c b/arch/mips/bcm47xx/prom.c index a9bea411d928..99a1ba5394e0 100644 --- a/arch/mips/bcm47xx/prom.c +++ b/arch/mips/bcm47xx/prom.c @@ -116,7 +116,7 @@ void __init prom_init(void) #if defined(CONFIG_BCM47XX_BCMA) && defined(CONFIG_HIGHMEM) #define EXTVBASE 0xc0000000 -#define ENTRYLO(x) ((pte_val(pfn_pte((x) >> _PFN_SHIFT, PAGE_KERNEL_UNCACHED)) >> 6) | 1) +#define ENTRYLO(x) ((pte_val(pfn_pte((x) >> PFN_PTE_SHIFT, PAGE_KERNEL_UNCACHED)) >> 6) | 1) #include diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h index d8d3f80f9fc0..0f389bc7cb90 100644 --- a/arch/mips/include/asm/cacheflush.h +++ b/arch/mips/include/asm/cacheflush.h @@ -36,12 +36,12 @@ */ #define PG_dcache_dirty PG_arch_1 -#define Page_dcache_dirty(page) \ - test_bit(PG_dcache_dirty, &(page)->flags) -#define SetPageDcacheDirty(page) \ - set_bit(PG_dcache_dirty, &(page)->flags) -#define ClearPageDcacheDirty(page) \ - clear_bit(PG_dcache_dirty, &(page)->flags) +#define folio_test_dcache_dirty(folio) \ + test_bit(PG_dcache_dirty, &(folio)->flags) +#define folio_set_dcache_dirty(folio) \ + set_bit(PG_dcache_dirty, &(folio)->flags) +#define folio_clear_dcache_dirty(folio) \ + clear_bit(PG_dcache_dirty, &(folio)->flags) extern void (*flush_cache_all)(void); extern void (*__flush_cache_all)(void); @@ -50,15 +50,24 @@ extern void (*flush_cache_mm)(struct mm_struct *mm); extern void (*flush_cache_range)(struct vm_area_struct *vma, unsigned long start, unsigned long end); extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn); -extern void __flush_dcache_page(struct page *page); +extern void __flush_dcache_pages(struct page *page, unsigned int nr); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 +static inline void flush_dcache_folio(struct folio *folio) +{ + if (cpu_has_dc_aliases) + __flush_dcache_pages(&folio->page, folio_nr_pages(folio)); + else if (!cpu_has_ic_fills_f_dc) + folio_set_dcache_dirty(folio); +} +#define flush_dcache_folio flush_dcache_folio + static inline void flush_dcache_page(struct page *page) { if (cpu_has_dc_aliases) - __flush_dcache_page(page); + __flush_dcache_pages(page, 1); else if (!cpu_has_ic_fills_f_dc) - SetPageDcacheDirty(page); + folio_set_dcache_dirty(page_folio(page)); } #define flush_dcache_mmap_lock(mapping) do { } while (0) @@ -73,10 +82,11 @@ static inline void flush_anon_page(struct vm_area_struct *vma, __flush_anon_page(page, vmaddr); } -static inline void flush_icache_page(struct vm_area_struct *vma, - struct page *page) +static inline void flush_icache_pages(struct vm_area_struct *vma, + struct page *page, unsigned int nr) { } +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) extern void (*flush_icache_range)(unsigned long start, unsigned long end); extern void (*local_flush_icache_range)(unsigned long 
start, unsigned long end); diff --git a/arch/mips/include/asm/pgtable-32.h b/arch/mips/include/asm/pgtable-32.h index ba0016709a1a..0e196650f4f4 100644 --- a/arch/mips/include/asm/pgtable-32.h +++ b/arch/mips/include/asm/pgtable-32.h @@ -153,7 +153,7 @@ static inline void pmd_clear(pmd_t *pmdp) #if defined(CONFIG_XPA) #define MAX_POSSIBLE_PHYSMEM_BITS 40 -#define pte_pfn(x) (((unsigned long)((x).pte_high >> _PFN_SHIFT)) | (unsigned long)((x).pte_low << _PAGE_PRESENT_SHIFT)) +#define pte_pfn(x) (((unsigned long)((x).pte_high >> PFN_PTE_SHIFT)) | (unsigned long)((x).pte_low << _PAGE_PRESENT_SHIFT)) static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot) { @@ -161,7 +161,7 @@ pfn_pte(unsigned long pfn, pgprot_t prot) pte.pte_low = (pfn >> _PAGE_PRESENT_SHIFT) | (pgprot_val(prot) & ~_PFNX_MASK); - pte.pte_high = (pfn << _PFN_SHIFT) | + pte.pte_high = (pfn << PFN_PTE_SHIFT) | (pgprot_val(prot) & ~_PFN_MASK); return pte; } @@ -184,9 +184,9 @@ static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot) #else #define MAX_POSSIBLE_PHYSMEM_BITS 32 -#define pte_pfn(x) ((unsigned long)((x).pte >> _PFN_SHIFT)) -#define pfn_pte(pfn, prot) __pte(((unsigned long long)(pfn) << _PFN_SHIFT) | pgprot_val(prot)) -#define pfn_pmd(pfn, prot) __pmd(((unsigned long long)(pfn) << _PFN_SHIFT) | pgprot_val(prot)) +#define pte_pfn(x) ((unsigned long)((x).pte >> PFN_PTE_SHIFT)) +#define pfn_pte(pfn, prot) __pte(((unsigned long long)(pfn) << PFN_PTE_SHIFT) | pgprot_val(prot)) +#define pfn_pmd(pfn, prot) __pmd(((unsigned long long)(pfn) << PFN_PTE_SHIFT) | pgprot_val(prot)) #endif /* defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) */ #define pte_page(x) pfn_to_page(pte_pfn(x)) diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h index 98e24e3e7f2b..20ca48c1b606 100644 --- a/arch/mips/include/asm/pgtable-64.h +++ b/arch/mips/include/asm/pgtable-64.h @@ -298,9 +298,9 @@ static inline void pud_clear(pud_t *pudp) #define pte_page(x) pfn_to_page(pte_pfn(x)) -#define pte_pfn(x) ((unsigned long)((x).pte >> _PFN_SHIFT)) -#define pfn_pte(pfn, prot) __pte(((pfn) << _PFN_SHIFT) | pgprot_val(prot)) -#define pfn_pmd(pfn, prot) __pmd(((pfn) << _PFN_SHIFT) | pgprot_val(prot)) +#define pte_pfn(x) ((unsigned long)((x).pte >> PFN_PTE_SHIFT)) +#define pfn_pte(pfn, prot) __pte(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot)) +#define pfn_pmd(pfn, prot) __pmd(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot)) #ifndef __PAGETABLE_PMD_FOLDED static inline pmd_t *pud_pgtable(pud_t pud) diff --git a/arch/mips/include/asm/pgtable-bits.h b/arch/mips/include/asm/pgtable-bits.h index 1c576679aa87..421e78c30253 100644 --- a/arch/mips/include/asm/pgtable-bits.h +++ b/arch/mips/include/asm/pgtable-bits.h @@ -182,10 +182,10 @@ enum pgtable_bits { #if defined(CONFIG_CPU_R3K_TLB) # define _CACHE_UNCACHED (1 << _CACHE_UNCACHED_SHIFT) # define _CACHE_MASK _CACHE_UNCACHED -# define _PFN_SHIFT PAGE_SHIFT +# define PFN_PTE_SHIFT PAGE_SHIFT #else # define _CACHE_MASK (7 << _CACHE_SHIFT) -# define _PFN_SHIFT (PAGE_SHIFT - 12 + _CACHE_SHIFT + 3) +# define PFN_PTE_SHIFT (PAGE_SHIFT - 12 + _CACHE_SHIFT + 3) #endif #ifndef _PAGE_NO_EXEC @@ -195,7 +195,7 @@ enum pgtable_bits { #define _PAGE_SILENT_READ _PAGE_VALID #define _PAGE_SILENT_WRITE _PAGE_DIRTY -#define _PFN_MASK (~((1 << (_PFN_SHIFT)) - 1)) +#define _PFN_MASK (~((1 << (PFN_PTE_SHIFT)) - 1)) /* * The final layouts of the PTE bits are: diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h index 574fa14ac8b2..cbb93a834f52 100644 --- 
a/arch/mips/include/asm/pgtable.h +++ b/arch/mips/include/asm/pgtable.h @@ -66,7 +66,7 @@ extern void paging_init(void); static inline unsigned long pmd_pfn(pmd_t pmd) { - return pmd_val(pmd) >> _PFN_SHIFT; + return pmd_val(pmd) >> PFN_PTE_SHIFT; } #ifndef CONFIG_MIPS_HUGE_TLB_SUPPORT @@ -105,9 +105,6 @@ do { \ } \ } while(0) -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval); - #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) #ifdef CONFIG_XPA @@ -157,7 +154,7 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt null.pte_low = null.pte_high = _PAGE_GLOBAL; } - set_pte_at(mm, addr, ptep, null); + set_pte(ptep, null); htw_start(); } #else @@ -196,28 +193,41 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt #if !defined(CONFIG_CPU_R3K_TLB) /* Preserve global status for the pair */ if (pte_val(*ptep_buddy(ptep)) & _PAGE_GLOBAL) - set_pte_at(mm, addr, ptep, __pte(_PAGE_GLOBAL)); + set_pte(ptep, __pte(_PAGE_GLOBAL)); else #endif - set_pte_at(mm, addr, ptep, __pte(0)); + set_pte(ptep, __pte(0)); htw_start(); } #endif -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) { + unsigned int i; + bool do_sync = false; - if (!pte_present(pteval)) - goto cache_sync_done; + for (i = 0; i < nr; i++) { + if (!pte_present(pte)) + continue; + if (pte_present(ptep[i]) && + (pte_pfn(ptep[i]) == pte_pfn(pte))) + continue; + do_sync = true; + } - if (pte_present(*ptep) && (pte_pfn(*ptep) == pte_pfn(pteval))) - goto cache_sync_done; + if (do_sync) + __update_cache(addr, pte); - __update_cache(addr, pteval); -cache_sync_done: - set_pte(ptep, pteval); + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT)); + } } +#define set_ptes set_ptes /* * (pmds are folded into puds so this doesn't get actually called, @@ -486,7 +496,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma, pte_t entry, int dirty) { if (!pte_same(*ptep, entry)) - set_pte_at(vma->vm_mm, address, ptep, entry); + set_pte(ptep, entry); /* * update_mmu_cache will unconditionally execute, handling both * the case that the PTE changed and the spurious fault case. 
@@ -568,12 +578,21 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) extern void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte); -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) -{ - pte_t pte = *ptep; - __update_tlb(vma, address, pte); +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) +{ + for (;;) { + pte_t pte = *ptep; + __update_tlb(vma, address, pte); + if (--nr == 0) + break; + ptep++; + address += PAGE_SIZE; + } } +#define update_mmu_cache(vma, address, ptep) \ + update_mmu_cache_range(NULL, vma, address, ptep, 1) #define __HAVE_ARCH_UPDATE_MMU_TLB #define update_mmu_tlb update_mmu_cache diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c index 4b6554b48923..187d1c16361c 100644 --- a/arch/mips/mm/c-r4k.c +++ b/arch/mips/mm/c-r4k.c @@ -568,13 +568,14 @@ static inline void local_r4k_flush_cache_page(void *args) if ((mm == current->active_mm) && (pte_val(*ptep) & _PAGE_VALID)) vaddr = NULL; else { + struct folio *folio = page_folio(page); /* * Use kmap_coherent or kmap_atomic to do flushes for * another ASID than the current one. */ map_coherent = (cpu_has_dc_aliases && - page_mapcount(page) && - !Page_dcache_dirty(page)); + folio_mapped(folio) && + !folio_test_dcache_dirty(folio)); if (map_coherent) vaddr = kmap_coherent(page, addr); else diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c index d21cf8c6cf6c..02042100e267 100644 --- a/arch/mips/mm/cache.c +++ b/arch/mips/mm/cache.c @@ -99,13 +99,15 @@ SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes, return 0; } -void __flush_dcache_page(struct page *page) +void __flush_dcache_pages(struct page *page, unsigned int nr) { - struct address_space *mapping = page_mapping_file(page); + struct folio *folio = page_folio(page); + struct address_space *mapping = folio_flush_mapping(folio); unsigned long addr; + unsigned int i; if (mapping && !mapping_mapped(mapping)) { - SetPageDcacheDirty(page); + folio_set_dcache_dirty(folio); return; } @@ -114,25 +116,21 @@ void __flush_dcache_page(struct page *page) * case is for exec env/arg pages and those are %99 certainly going to * get faulted into the tlb (and thus flushed) anyways. 
*/ - if (PageHighMem(page)) - addr = (unsigned long)kmap_atomic(page); - else - addr = (unsigned long)page_address(page); - - flush_data_cache_page(addr); - - if (PageHighMem(page)) - kunmap_atomic((void *)addr); + for (i = 0; i < nr; i++) { + addr = (unsigned long)kmap_local_page(page + i); + flush_data_cache_page(addr); + kunmap_local((void *)addr); + } } - -EXPORT_SYMBOL(__flush_dcache_page); +EXPORT_SYMBOL(__flush_dcache_pages); void __flush_anon_page(struct page *page, unsigned long vmaddr) { unsigned long addr = (unsigned long) page_address(page); + struct folio *folio = page_folio(page); if (pages_do_alias(addr, vmaddr)) { - if (page_mapcount(page) && !Page_dcache_dirty(page)) { + if (folio_mapped(folio) && !folio_test_dcache_dirty(folio)) { void *kaddr; kaddr = kmap_coherent(page, vmaddr); @@ -147,27 +145,29 @@ EXPORT_SYMBOL(__flush_anon_page); void __update_cache(unsigned long address, pte_t pte) { - struct page *page; + struct folio *folio; unsigned long pfn, addr; int exec = !pte_no_exec(pte) && !cpu_has_ic_fills_f_dc; + unsigned int i; pfn = pte_pfn(pte); if (unlikely(!pfn_valid(pfn))) return; - page = pfn_to_page(pfn); - if (Page_dcache_dirty(page)) { - if (PageHighMem(page)) - addr = (unsigned long)kmap_atomic(page); - else - addr = (unsigned long)page_address(page); - - if (exec || pages_do_alias(addr, address & PAGE_MASK)) - flush_data_cache_page(addr); - if (PageHighMem(page)) - kunmap_atomic((void *)addr); + folio = page_folio(pfn_to_page(pfn)); + address &= PAGE_MASK; + address -= offset_in_folio(folio, pfn << PAGE_SHIFT); + + if (folio_test_dcache_dirty(folio)) { + for (i = 0; i < folio_nr_pages(folio); i++) { + addr = (unsigned long)kmap_local_folio(folio, i); - ClearPageDcacheDirty(page); + if (exec || pages_do_alias(addr, address)) + flush_data_cache_page(addr); + kunmap_local((void *)addr); + address += PAGE_SIZE; + } + folio_clear_dcache_dirty(folio); } } diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c index 5a8002839550..5dcb525a8995 100644 --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -88,7 +88,7 @@ static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot) pte_t pte; int tlbidx; - BUG_ON(Page_dcache_dirty(page)); + BUG_ON(folio_test_dcache_dirty(page_folio(page))); preempt_disable(); pagefault_disable(); @@ -169,11 +169,12 @@ void kunmap_coherent(void) void copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *vfrom, *vto; vto = kmap_atomic(to); if (cpu_has_dc_aliases && - page_mapcount(from) && !Page_dcache_dirty(from)) { + folio_mapped(src) && !folio_test_dcache_dirty(src)) { vfrom = kmap_coherent(from, vaddr); copy_page(vto, vfrom); kunmap_coherent(); @@ -194,15 +195,17 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { + struct folio *folio = page_folio(page); + if (cpu_has_dc_aliases && - page_mapcount(page) && !Page_dcache_dirty(page)) { + folio_mapped(folio) && !folio_test_dcache_dirty(folio)) { void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(vto, src, len); kunmap_coherent(); } else { memcpy(dst, src, len); if (cpu_has_dc_aliases) - SetPageDcacheDirty(page); + folio_set_dcache_dirty(folio); } if (vma->vm_flags & VM_EXEC) flush_cache_page(vma, vaddr, page_to_pfn(page)); @@ -212,15 +215,17 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, 
unsigned long len) { + struct folio *folio = page_folio(page); + if (cpu_has_dc_aliases && - page_mapcount(page) && !Page_dcache_dirty(page)) { + folio_mapped(folio) && !folio_test_dcache_dirty(folio)) { void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(dst, vfrom, len); kunmap_coherent(); } else { memcpy(dst, src, len); if (cpu_has_dc_aliases) - SetPageDcacheDirty(page); + folio_set_dcache_dirty(folio); } } EXPORT_SYMBOL_GPL(copy_from_user_page); @@ -448,10 +453,10 @@ static inline void __init mem_init_free_highmem(void) void __init mem_init(void) { /* - * When _PFN_SHIFT is greater than PAGE_SHIFT we won't have enough PTE + * When PFN_PTE_SHIFT is greater than PAGE_SHIFT we won't have enough PTE * bits to hold a full 32b physical address on MIPS32 systems. */ - BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT) && (_PFN_SHIFT > PAGE_SHIFT)); + BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT) && (PFN_PTE_SHIFT > PAGE_SHIFT)); #ifdef CONFIG_HIGHMEM max_mapnr = highend_pfn ? highend_pfn : max_low_pfn; diff --git a/arch/mips/mm/pgtable-32.c b/arch/mips/mm/pgtable-32.c index f57fb69472f8..84dd5136d53a 100644 --- a/arch/mips/mm/pgtable-32.c +++ b/arch/mips/mm/pgtable-32.c @@ -35,7 +35,7 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot) { pmd_t pmd; - pmd_val(pmd) = (page_to_pfn(page) << _PFN_SHIFT) | pgprot_val(prot); + pmd_val(pmd) = (page_to_pfn(page) << PFN_PTE_SHIFT) | pgprot_val(prot); return pmd; } diff --git a/arch/mips/mm/pgtable-64.c b/arch/mips/mm/pgtable-64.c index b4386a0e2ef8..c76d21f7dffb 100644 --- a/arch/mips/mm/pgtable-64.c +++ b/arch/mips/mm/pgtable-64.c @@ -93,7 +93,7 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot) { pmd_t pmd; - pmd_val(pmd) = (page_to_pfn(page) << _PFN_SHIFT) | pgprot_val(prot); + pmd_val(pmd) = (page_to_pfn(page) << PFN_PTE_SHIFT) | pgprot_val(prot); return pmd; } diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c index 8d514a9082c6..b4e1c783e617 100644 --- a/arch/mips/mm/tlbex.c +++ b/arch/mips/mm/tlbex.c @@ -253,7 +253,7 @@ static void output_pgtable_bits_defines(void) pr_define("_PAGE_GLOBAL_SHIFT %d\n", _PAGE_GLOBAL_SHIFT); pr_define("_PAGE_VALID_SHIFT %d\n", _PAGE_VALID_SHIFT); pr_define("_PAGE_DIRTY_SHIFT %d\n", _PAGE_DIRTY_SHIFT); - pr_define("_PFN_SHIFT %d\n", _PFN_SHIFT); + pr_define("PFN_PTE_SHIFT %d\n", PFN_PTE_SHIFT); pr_debug("\n"); }
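One detail from the mm/cache.c hunks above deserves a note: kmap_local_page() maps highmem pages and degenerates to a plain address lookup for lowmem ones, which is why the PageHighMem() special-casing could be deleted. The resulting idiom is simply:

	void *kaddr = kmap_local_page(page);	/* lowmem or highmem alike */
	flush_data_cache_page((unsigned long)kaddr);
	kunmap_local(kaddr);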
From patchwork Wed Aug 2 15:13:46 2023 X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13338339 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport , Dinh Nguyen Subject: [PATCH v6 18/38] nios2: Implement the new page table range API Date: Wed, 2 Aug 2023 16:13:46 +0100 Message-Id: <20230802151406.3735276-19-willy@infradead.org> In-Reply-To: <20230802151406.3735276-1-willy@infradead.org> References: <20230802151406.3735276-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: Dinh Nguyen --- arch/nios2/include/asm/cacheflush.h | 6 ++- arch/nios2/include/asm/pgtable.h | 28 ++++++---- arch/nios2/mm/cacheflush.c | 79 ++++++++++++++++------------- 3 files changed, 67 insertions(+), 46 deletions(-) diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h index d0b71dd71287..8624ca83cffe 100644 --- a/arch/nios2/include/asm/cacheflush.h +++ b/arch/nios2/include/asm/cacheflush.h @@ -29,9 +29,13 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio extern void flush_icache_range(unsigned long start, unsigned long end); -extern void flush_icache_page(struct vm_area_struct *vma, struct page *page); +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr); +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) #define flush_cache_vmap(start, end) flush_dcache_range(start, end) #define flush_cache_vunmap(start, end) flush_dcache_range(start, end) diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h index 0f5c2564e9f5..be6bf3e0bd7a 100644 --- a/arch/nios2/include/asm/pgtable.h +++ b/arch/nios2/include/asm/pgtable.h @@ -178,14 +178,21 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) *ptep = pteval; } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) { - unsigned long paddr = (unsigned long)page_to_virt(pte_page(pteval)); - - flush_dcache_range(paddr, paddr + PAGE_SIZE); - set_pte(ptep, pteval); + unsigned long paddr = (unsigned
long)page_to_virt(pte_page(pte)); + + flush_dcache_range(paddr, paddr + nr * PAGE_SIZE); + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += 1; + } } +#define set_ptes set_ptes static inline int pmd_none(pmd_t pmd) { @@ -202,7 +209,7 @@ static inline void pte_clear(struct mm_struct *mm, pte_val(null) = (addr >> PAGE_SHIFT) & 0xf; - set_pte_at(mm, addr, ptep, null); + set_pte(ptep, null); } /* @@ -273,7 +280,10 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) extern void __init paging_init(void); extern void __init mmu_init(void); -extern void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *pte); +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr); + +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) #endif /* _ASM_NIOS2_PGTABLE_H */ diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c index 6aa9257c3ede..28b805f465a8 100644 --- a/arch/nios2/mm/cacheflush.c +++ b/arch/nios2/mm/cacheflush.c @@ -71,26 +71,26 @@ static void __flush_icache(unsigned long start, unsigned long end) __asm__ __volatile(" flushp\n"); } -static void flush_aliases(struct address_space *mapping, struct page *page) +static void flush_aliases(struct address_space *mapping, struct folio *folio) { struct mm_struct *mm = current->active_mm; - struct vm_area_struct *mpnt; + struct vm_area_struct *vma; pgoff_t pgoff; + unsigned long nr = folio_nr_pages(folio); - pgoff = page->index; + pgoff = folio->index; flush_dcache_mmap_lock(mapping); - vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { - unsigned long offset; + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff + nr - 1) { + unsigned long start; - if (mpnt->vm_mm != mm) + if (vma->vm_mm != mm) continue; - if (!(mpnt->vm_flags & VM_MAYSHARE)) + if (!(vma->vm_flags & VM_MAYSHARE)) continue; - offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; - flush_cache_page(mpnt, mpnt->vm_start + offset, - page_to_pfn(page)); + start = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT); + flush_cache_range(vma, start, start + nr * PAGE_SIZE); } flush_dcache_mmap_unlock(mapping); } @@ -138,10 +138,11 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, __flush_icache(start, end); } -void flush_icache_page(struct vm_area_struct *vma, struct page *page) +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr) { unsigned long start = (unsigned long) page_address(page); - unsigned long end = start + PAGE_SIZE; + unsigned long end = start + nr * PAGE_SIZE; __flush_dcache(start, end); __flush_icache(start, end); @@ -158,19 +159,19 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, __flush_icache(start, end); } -void __flush_dcache_page(struct address_space *mapping, struct page *page) +static void __flush_dcache_folio(struct folio *folio) { /* * Writeback any data associated with the kernel mapping of this * page. This ensures that data in the physical page is mutually * coherent with the kernels mapping. 
*/ - unsigned long start = (unsigned long)page_address(page); + unsigned long start = (unsigned long)folio_address(folio); - __flush_dcache(start, start + PAGE_SIZE); + __flush_dcache(start, start + folio_size(folio)); } -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; @@ -178,32 +179,38 @@ void flush_dcache_page(struct page *page) * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. */ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(folio_pfn(folio))) return; - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); /* Flush this page if there are aliases. */ if (mapping && !mapping_mapped(mapping)) { - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } else { - __flush_dcache_page(mapping, page); + __flush_dcache_folio(folio); if (mapping) { - unsigned long start = (unsigned long)page_address(page); - flush_aliases(mapping, page); - flush_icache_range(start, start + PAGE_SIZE); + unsigned long start = (unsigned long)folio_address(folio); + flush_aliases(mapping, folio); + flush_icache_range(start, start + folio_size(folio)); } - set_bit(PG_dcache_clean, &page->flags); + set_bit(PG_dcache_clean, &folio->flags); } } +EXPORT_SYMBOL(flush_dcache_folio); + +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} EXPORT_SYMBOL(flush_dcache_page); -void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { pte_t pte = *ptep; unsigned long pfn = pte_pfn(pte); - struct page *page; + struct folio *folio; struct address_space *mapping; reload_tlb_page(vma, address, pte); @@ -215,19 +222,19 @@ void update_mmu_cache(struct vm_area_struct *vma, * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. 
*/ - page = pfn_to_page(pfn); - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; - mapping = page_mapping_file(page); - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + folio = page_folio(pfn_to_page(pfn)); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(folio); - if(mapping) - { - flush_aliases(mapping, page); + mapping = folio_flush_mapping(folio); + if (mapping) { + flush_aliases(mapping, folio); if (vma->vm_flags & VM_EXEC) - flush_icache_page(vma, page); + flush_icache_pages(vma, &folio->page, + folio_nr_pages(folio)); } }
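The nios2 conversion above reduces to one deferred-flush pattern, now applied per-folio: if nobody has the file mapped yet, just mark the folio dirty and let the flush happen when a mapping is established; otherwise flush the kernel alias for the whole folio at once. A minimal sketch of that pattern (illustrative only, not code from the patch; it reuses only helpers the patch itself introduces):

	/* Sketch of the per-folio deferred dcache flush used above. */
	static void example_dcache_sync(struct folio *folio)
	{
		struct address_space *mapping = folio_flush_mapping(folio);

		if (mapping && !mapping_mapped(mapping)) {
			/* No user mappings yet: defer the flush. */
			clear_bit(PG_dcache_clean, &folio->flags);
		} else {
			/* Flush the kernel alias for every page at once. */
			unsigned long start = (unsigned long)folio_address(folio);

			__flush_dcache(start, start + folio_size(folio));
			set_bit(PG_dcache_clean, &folio->flags);
		}
	}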
From patchwork Wed Aug 2 15:13:47 2023 X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13338369 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport , Jonas Bonn , Stefan Kristiansson , Stafford Horne , linux-openrisc@vger.kernel.org Subject: [PATCH v6 19/38] openrisc: Implement the new page table range API Date: Wed, 2 Aug 2023 16:13:47 +0100 Message-Id: <20230802151406.3735276-20-willy@infradead.org> In-Reply-To: <20230802151406.3735276-1-willy@infradead.org> References: <20230802151406.3735276-1-willy@infradead.org> Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: Jonas Bonn Cc: Stefan Kristiansson Cc: Stafford Horne Cc: linux-openrisc@vger.kernel.org --- arch/openrisc/include/asm/cacheflush.h | 8 +++++++- arch/openrisc/include/asm/pgtable.h | 15 ++++++++++----- arch/openrisc/mm/cache.c | 12 ++++++++---- 3 files changed, 25 insertions(+), 10 deletions(-) diff --git a/arch/openrisc/include/asm/cacheflush.h b/arch/openrisc/include/asm/cacheflush.h index eeac40d4a854..984c331ff5f4 100644 --- a/arch/openrisc/include/asm/cacheflush.h +++ b/arch/openrisc/include/asm/cacheflush.h @@ -56,10 +56,16 @@ static inline void sync_icache_dcache(struct page *page) */ #define PG_dc_clean PG_arch_1 +static inline void flush_dcache_folio(struct folio *folio) +{ + clear_bit(PG_dc_clean, &folio->flags); +} +#define flush_dcache_folio flush_dcache_folio + #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 static inline void flush_dcache_page(struct page *page) { - clear_bit(PG_dc_clean, &page->flags); + flush_dcache_folio(page_folio(page)); } #define flush_icache_user_page(vma, page, addr, len) \ diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h index 3eb9b9555d0d..7bdf1bb0d177 100644 --- a/arch/openrisc/include/asm/pgtable.h +++ b/arch/openrisc/include/asm/pgtable.h @@ -46,7 +46,7 @@ extern void paging_init(void); * hook is made available. */ #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval)) -#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval) + /* * (pmds are folded into pgds so this doesn't get actually called, * but the define is needed for a generic inline function.) @@ -357,6 +357,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) #define __pmd_offset(address) \ (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1)) +#define PFN_PTE_SHIFT PAGE_SHIFT #define pte_pfn(x) ((unsigned long)(((x).pte)) >> PAGE_SHIFT) #define pfn_pte(pfn, prot) __pte((((pfn) << PAGE_SHIFT)) | pgprot_val(prot)) @@ -379,13 +380,17 @@ static inline void update_tlb(struct vm_area_struct *vma, extern void update_cache(struct vm_area_struct *vma, unsigned long address, pte_t *pte); -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *pte) +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { - update_tlb(vma, address, pte); - update_cache(vma, address, pte); + update_tlb(vma, address, ptep); + update_cache(vma, address, ptep); } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) + /* __PHX__ FIXME, SWAP, this probably doesn't work */ /* diff --git a/arch/openrisc/mm/cache.c b/arch/openrisc/mm/cache.c index 534a52ec5e66..eb43b73f3855 100644 --- a/arch/openrisc/mm/cache.c +++ b/arch/openrisc/mm/cache.c @@ -43,15 +43,19 @@ void update_cache(struct vm_area_struct *vma, unsigned long address, pte_t *pte) { unsigned long pfn = pte_val(*pte) >> PAGE_SHIFT; - struct page *page = pfn_to_page(pfn); - int dirty = !test_and_set_bit(PG_dc_clean, &page->flags); + struct folio *folio = page_folio(pfn_to_page(pfn)); + int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags); /* * Since icaches do not snoop for updated data on OpenRISC, we * must write back and invalidate any dirty pages manually. We * can skip data pages, since they will not end up in icaches. 
*/ - if ((vma->vm_flags & VM_EXEC) && dirty) - sync_icache_dcache(page); + if ((vma->vm_flags & VM_EXEC) && dirty) { + unsigned int nr = folio_nr_pages(folio); + + while (nr--) + sync_icache_dcache(folio_page(folio, nr)); + } }
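Note that openrisc needs no private set_ptes(): defining PFN_PTE_SHIFT is enough for the generic fallback added earlier in this series to advance the PFN from page to page itself. Approximately (a sketch of that generic definition, not text from this patch):

	/* Generic fallback, roughly as added by this series. */
	static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
			pte_t *ptep, pte_t pte, unsigned int nr)
	{
		arch_enter_lazy_mmu_mode();
		for (;;) {
			set_pte(ptep, pte);
			if (--nr == 0)
				break;
			ptep++;
			/* Step to the next page of the folio. */
			pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
		}
		arch_leave_lazy_mmu_mode();
	}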
From patchwork Wed Aug 2 15:13:48 2023 X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13338342 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport , "James E.J. Bottomley" , Helge Deller , linux-parisc@vger.kernel.org Subject: [PATCH v6 20/38] parisc: Implement the new page table range API Date: Wed, 2 Aug 2023 16:13:48 +0100 Message-Id: <20230802151406.3735276-21-willy@infradead.org> In-Reply-To: <20230802151406.3735276-1-willy@infradead.org> References: <20230802151406.3735276-1-willy@infradead.org> Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: "James E.J.
Bottomley" Cc: Helge Deller Cc: linux-parisc@vger.kernel.org --- arch/parisc/include/asm/cacheflush.h | 14 ++-- arch/parisc/include/asm/pgtable.h | 37 +++++---- arch/parisc/kernel/cache.c | 107 ++++++++++++++++++--------- 3 files changed, 105 insertions(+), 53 deletions(-) diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h index c8b6928cee1e..b77c3e0c37d3 100644 --- a/arch/parisc/include/asm/cacheflush.h +++ b/arch/parisc/include/asm/cacheflush.h @@ -43,8 +43,13 @@ void invalidate_kernel_vmap_range(void *vaddr, int size); #define flush_cache_vmap(start, end) flush_cache_all() #define flush_cache_vunmap(start, end) flush_cache_all() +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->i_pages) #define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->i_pages) @@ -53,10 +58,9 @@ void flush_dcache_page(struct page *page); #define flush_dcache_mmap_unlock_irqrestore(mapping, flags) \ xa_unlock_irqrestore(&mapping->i_pages, flags) -#define flush_icache_page(vma,page) do { \ - flush_kernel_dcache_page_addr(page_address(page)); \ - flush_kernel_icache_page(page_address(page)); \ -} while (0) +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr); +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) #define flush_icache_range(s,e) do { \ flush_kernel_dcache_range_asm(s,e); \ diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h index 5656395c95ee..ce38bb375b60 100644 --- a/arch/parisc/include/asm/pgtable.h +++ b/arch/parisc/include/asm/pgtable.h @@ -73,15 +73,6 @@ extern void __update_cache(pte_t pte); mb(); \ } while(0) -#define set_pte_at(mm, addr, pteptr, pteval) \ - do { \ - if (pte_present(pteval) && \ - pte_user(pteval)) \ - __update_cache(pteval); \ - *(pteptr) = (pteval); \ - purge_tlb_entries(mm, addr); \ - } while (0) - #endif /* !__ASSEMBLY__ */ #define pte_ERROR(e) \ @@ -285,7 +276,7 @@ extern unsigned long *empty_zero_page; #define pte_none(x) (pte_val(x) == 0) #define pte_present(x) (pte_val(x) & _PAGE_PRESENT) #define pte_user(x) (pte_val(x) & _PAGE_USER) -#define pte_clear(mm, addr, xp) set_pte_at(mm, addr, xp, __pte(0)) +#define pte_clear(mm, addr, xp) set_pte(xp, __pte(0)) #define pmd_flag(x) (pmd_val(x) & PxD_FLAG_MASK) #define pmd_address(x) ((unsigned long)(pmd_val(x) &~ PxD_FLAG_MASK) << PxD_VALUE_SHIFT) @@ -391,11 +382,29 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) extern void paging_init (void); +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + if (pte_present(pte) && pte_user(pte)) + __update_cache(pte); + for (;;) { + *ptep = pte; + purge_tlb_entries(mm, addr); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += 1 << PFN_PTE_SHIFT; + addr += PAGE_SIZE; + } +} +#define set_ptes set_ptes + /* Used for deferring calls to flush_dcache_page() */ #define PG_dcache_dirty PG_arch_1 -#define update_mmu_cache(vms,addr,ptep) __update_cache(*ptep) +#define update_mmu_cache_range(vmf, vma, addr, ptep, nr) __update_cache(*ptep) +#define update_mmu_cache(vma, addr, ptep) __update_cache(*ptep) /* * Encode/decode swap entries and swap PTEs. 
Swap PTEs are all PTEs that @@ -450,7 +459,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned if (!pte_young(pte)) { return 0; } - set_pte_at(vma->vm_mm, addr, ptep, pte_mkold(pte)); + set_pte(ptep, pte_mkold(pte)); return 1; } @@ -460,14 +469,14 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t old_pte; old_pte = *ptep; - set_pte_at(mm, addr, ptep, __pte(0)); + set_pte(ptep, __pte(0)); return old_pte; } static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) { - set_pte_at(mm, addr, ptep, pte_wrprotect(*ptep)); + set_pte(ptep, pte_wrprotect(*ptep)); } #define pte_same(A,B) (pte_val(A) == pte_val(B)) diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c index b55b35c89d6a..442109a48940 100644 --- a/arch/parisc/kernel/cache.c +++ b/arch/parisc/kernel/cache.c @@ -94,11 +94,11 @@ static inline void flush_data_cache(void) /* Kernel virtual address of pfn. */ #define pfn_va(pfn) __va(PFN_PHYS(pfn)) -void -__update_cache(pte_t pte) +void __update_cache(pte_t pte) { unsigned long pfn = pte_pfn(pte); - struct page *page; + struct folio *folio; + unsigned int nr; /* We don't have pte special. As a result, we can be called with an invalid pfn and we don't need to flush the kernel dcache page. @@ -106,13 +106,17 @@ __update_cache(pte_t pte) if (!pfn_valid(pfn)) return; - page = pfn_to_page(pfn); - if (page_mapping_file(page) && - test_bit(PG_dcache_dirty, &page->flags)) { - flush_kernel_dcache_page_addr(pfn_va(pfn)); - clear_bit(PG_dcache_dirty, &page->flags); + folio = page_folio(pfn_to_page(pfn)); + pfn = folio_pfn(folio); + nr = folio_nr_pages(folio); + if (folio_flush_mapping(folio) && + test_bit(PG_dcache_dirty, &folio->flags)) { + while (nr--) + flush_kernel_dcache_page_addr(pfn_va(pfn + nr)); + clear_bit(PG_dcache_dirty, &folio->flags); } else if (parisc_requires_coherency()) - flush_kernel_dcache_page_addr(pfn_va(pfn)); + while (nr--) + flush_kernel_dcache_page_addr(pfn_va(pfn + nr)); } void @@ -366,6 +370,20 @@ static void flush_user_cache_page(struct vm_area_struct *vma, unsigned long vmad preempt_enable(); } +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr) +{ + void *kaddr = page_address(page); + + for (;;) { + flush_kernel_dcache_page_addr(kaddr); + flush_kernel_icache_page(kaddr); + if (--nr == 0) + break; + kaddr += PAGE_SIZE; + } +} + static inline pte_t *get_ptep(struct mm_struct *mm, unsigned long addr) { pte_t *ptep = NULL; @@ -394,27 +412,30 @@ static inline bool pte_needs_flush(pte_t pte) == (_PAGE_PRESENT | _PAGE_ACCESSED); } -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - struct address_space *mapping = page_mapping_file(page); - struct vm_area_struct *mpnt; - unsigned long offset; + struct address_space *mapping = folio_flush_mapping(folio); + struct vm_area_struct *vma; unsigned long addr, old_addr = 0; + void *kaddr; unsigned long count = 0; - unsigned long flags; + unsigned long i, nr, flags; pgoff_t pgoff; if (mapping && !mapping_mapped(mapping)) { - set_bit(PG_dcache_dirty, &page->flags); + set_bit(PG_dcache_dirty, &folio->flags); return; } - flush_kernel_dcache_page_addr(page_address(page)); + nr = folio_nr_pages(folio); + kaddr = folio_address(folio); + for (i = 0; i < nr; i++) + flush_kernel_dcache_page_addr(kaddr + i * PAGE_SIZE); if (!mapping) return; - pgoff = page->index; + pgoff = folio->index; /* * We have carefully arranged in arch_get_unmapped_area() 
that @@ -424,20 +445,33 @@ void flush_dcache_page(struct page *page) * on machines that support equivalent aliasing */ flush_dcache_mmap_lock_irqsave(mapping, flags); - vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { - offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; - addr = mpnt->vm_start + offset; - if (parisc_requires_coherency()) { - bool needs_flush = false; - pte_t *ptep; + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff + nr - 1) { + unsigned long offset = pgoff - vma->vm_pgoff; + unsigned long pfn = folio_pfn(folio); + + addr = vma->vm_start; + nr = folio_nr_pages(folio); + if (offset > -nr) { + pfn -= offset; + nr += offset; + } else { + addr += offset * PAGE_SIZE; + } + if (addr + nr * PAGE_SIZE > vma->vm_end) + nr = (vma->vm_end - addr) / PAGE_SIZE; - ptep = get_ptep(mpnt->vm_mm, addr); - if (ptep) { - needs_flush = pte_needs_flush(*ptep); + if (parisc_requires_coherency()) { + for (i = 0; i < nr; i++) { + pte_t *ptep = get_ptep(vma->vm_mm, + addr + i * PAGE_SIZE); + if (!ptep) + continue; + if (pte_needs_flush(*ptep)) + flush_user_cache_page(vma, + addr + i * PAGE_SIZE); + /* Optimise accesses to the same table? */ pte_unmap(ptep); } - if (needs_flush) - flush_user_cache_page(mpnt, addr); } else { /* * The TLB is the engine of coherence on parisc: @@ -450,27 +484,32 @@ void flush_dcache_page(struct page *page) * in (until the user or kernel specifically * accesses it, of course) */ - flush_tlb_page(mpnt, addr); + for (i = 0; i < nr; i++) + flush_tlb_page(vma, addr + i * PAGE_SIZE); if (old_addr == 0 || (old_addr & (SHM_COLOUR - 1)) != (addr & (SHM_COLOUR - 1))) { - __flush_cache_page(mpnt, addr, page_to_phys(page)); + for (i = 0; i < nr; i++) + __flush_cache_page(vma, + addr + i * PAGE_SIZE, + (pfn + i) * PAGE_SIZE); /* * Software is allowed to have any number * of private mappings to a page. 
*/ - if (!(mpnt->vm_flags & VM_SHARED)) + if (!(vma->vm_flags & VM_SHARED)) continue; if (old_addr) pr_err("INEQUIVALENT ALIASES 0x%lx and 0x%lx in file %pD\n", - old_addr, addr, mpnt->vm_file); - old_addr = addr; + old_addr, addr, vma->vm_file); + if (nr == folio_nr_pages(folio)) + old_addr = addr; } } WARN_ON(++count == 4096); } flush_dcache_mmap_unlock_irqrestore(mapping, flags); } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); /* Defined in arch/parisc/kernel/pacache.S */ EXPORT_SYMBOL(flush_kernel_dcache_range_asm);
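The trickiest part of the parisc conversion is clamping the flush window when a large folio only partially overlaps a VMA: the unsigned "offset > -nr" test above is really a signed "folio starts up to nr-1 pages before the VMA" check, which the interval-tree walk guarantees is the only negative case. The same computation restated with signed arithmetic (an illustrative helper, not code from the patch; the patch additionally steps the starting pfn by the leading trim):

	/* Clamp (addr, nr) so the flush stays inside the VMA. */
	static void example_clamp_window(struct vm_area_struct *vma,
			struct folio *folio, unsigned long *addrp,
			unsigned long *nrp)
	{
		/* Signed distance, in pages, from VMA start to folio start. */
		long offset = (long)(folio->index - vma->vm_pgoff);
		unsigned long addr = vma->vm_start;
		unsigned long nr = folio_nr_pages(folio);

		if (offset < 0)
			nr += offset;		/* trim pages before the VMA */
		else
			addr += offset * PAGE_SIZE;	/* folio starts inside */

		/* Trim pages hanging off the end of the VMA. */
		if (addr + nr * PAGE_SIZE > vma->vm_end)
			nr = (vma->vm_end - addr) / PAGE_SIZE;

		*addrp = addr;
		*nrp = nr;
	}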
From patchwork Wed Aug 2 15:13:49 2023 X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13338347 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport , Michael Ellerman , Nicholas Piggin , Christophe Leroy , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 21/38] powerpc: Implement the new page table range API Date: Wed, 2 Aug 2023 16:13:49 +0100 Message-Id: <20230802151406.3735276-22-willy@infradead.org> In-Reply-To: <20230802151406.3735276-1-willy@infradead.org> References: <20230802151406.3735276-1-willy@infradead.org> Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Mike Rapoport (IBM) Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Christophe Leroy Cc: linuxppc-dev@lists.ozlabs.org --- arch/powerpc/include/asm/book3s/32/pgtable.h | 5 -- arch/powerpc/include/asm/book3s/64/pgtable.h | 6 +-- arch/powerpc/include/asm/book3s/pgtable.h | 11 ++-- arch/powerpc/include/asm/cacheflush.h | 14 ++++-- arch/powerpc/include/asm/kvm_ppc.h | 10 ++-- arch/powerpc/include/asm/nohash/pgtable.h | 16 ++---- arch/powerpc/include/asm/pgtable.h | 12 +++++ arch/powerpc/mm/book3s64/hash_utils.c | 11 ++-- arch/powerpc/mm/cacheflush.c | 40 +++++---------- arch/powerpc/mm/nohash/e500_hugetlbpage.c | 3 +- arch/powerpc/mm/pgtable.c | 53 ++++++++++++-------- 11 files changed, 88 insertions(+), 93 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h index 7bf1fe7297c6..5f12b9382909 100644 --- a/arch/powerpc/include/asm/book3s/32/pgtable.h +++ b/arch/powerpc/include/asm/book3s/32/pgtable.h @@ -462,11 +462,6 @@ static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot) pgprot_val(pgprot)); } -static inline unsigned long pte_pfn(pte_t pte) -{ - return pte_val(pte) >> PTE_RPN_SHIFT; -} - /* Generic modifiers for PTE bits */ static inline pte_t pte_wrprotect(pte_t pte) { diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index a8204566cfd0..8269b231c533 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -104,6 +104,7 @@ * and every thing below PAGE_SHIFT; */ #define PTE_RPN_MASK (((1UL << _PAGE_PA_MAX) - 1) & (PAGE_MASK)) +#define PTE_RPN_SHIFT PAGE_SHIFT /* * set of bits not changed in pmd_modify. Even though we have hash specific bits * in here, on radix we expect them to be zero. @@ -569,11 +570,6 @@ static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot) return __pte(((pte_basic_t)pfn << PAGE_SHIFT) | pgprot_val(pgprot) | _PAGE_PTE); } -static inline unsigned long pte_pfn(pte_t pte) -{ - return (pte_val(pte) & PTE_RPN_MASK) >> PAGE_SHIFT; -} - /* Generic modifiers for PTE bits */ static inline pte_t pte_wrprotect(pte_t pte) { diff --git a/arch/powerpc/include/asm/book3s/pgtable.h b/arch/powerpc/include/asm/book3s/pgtable.h index d18b748ea3ae..3b7bd36a2321 100644 --- a/arch/powerpc/include/asm/book3s/pgtable.h +++ b/arch/powerpc/include/asm/book3s/pgtable.h @@ -9,13 +9,6 @@ #endif #ifndef __ASSEMBLY__ -/* Insert a PTE, top-level function is out of line. It uses an inline - * low level function in the respective pgtable-* files - */ -extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte); - - #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address, pte_t *ptep, pte_t entry, int dirty); @@ -36,7 +29,9 @@ void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t * corresponding HPTE into the hash table ahead of time, instead of * waiting for the inevitable extra hash-table miss exception. 
*/ -static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { if (IS_ENABLED(CONFIG_PPC32) && !mmu_has_feature(MMU_FTR_HPTE_TABLE)) return; diff --git a/arch/powerpc/include/asm/cacheflush.h b/arch/powerpc/include/asm/cacheflush.h index 7564dd4fd12b..ef7d2de33b89 100644 --- a/arch/powerpc/include/asm/cacheflush.h +++ b/arch/powerpc/include/asm/cacheflush.h @@ -35,13 +35,19 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end) * It just marks the page as not i-cache clean. We do the i-cache * flush later when the page is given to a user process, if necessary. */ -static inline void flush_dcache_page(struct page *page) +static inline void flush_dcache_folio(struct folio *folio) { if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) return; /* avoid an atomic op if possible */ - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); +} +#define flush_dcache_folio flush_dcache_folio + +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); } void flush_icache_range(unsigned long start, unsigned long stop); @@ -51,7 +57,7 @@ void flush_icache_user_page(struct vm_area_struct *vma, struct page *page, unsigned long addr, int len); #define flush_icache_user_page flush_icache_user_page -void flush_dcache_icache_page(struct page *page); +void flush_dcache_icache_folio(struct folio *folio); /** * flush_dcache_range(): Write any modified data cache blocks out to memory and diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index d16d80ad2ae4..b4da8514af43 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -894,7 +894,7 @@ void kvmppc_init_lpid(unsigned long nr_lpids); static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn) { - struct page *page; + struct folio *folio; /* * We can only access pages that the kernel maps * as memory. Bail out for unmapped ones. @@ -903,10 +903,10 @@ static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn) return; /* Clear i-cache for new pages */ - page = pfn_to_page(pfn); - if (!test_bit(PG_dcache_clean, &page->flags)) { - flush_dcache_icache_page(page); - set_bit(PG_dcache_clean, &page->flags); + folio = page_folio(pfn_to_page(pfn)); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } } diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h index a6caaaab6f92..56ea48276356 100644 --- a/arch/powerpc/include/asm/nohash/pgtable.h +++ b/arch/powerpc/include/asm/nohash/pgtable.h @@ -101,8 +101,6 @@ static inline bool pte_access_permitted(pte_t pte, bool write) static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot) { return __pte(((pte_basic_t)(pfn) << PTE_RPN_SHIFT) | pgprot_val(pgprot)); } -static inline unsigned long pte_pfn(pte_t pte) { - return pte_val(pte) >> PTE_RPN_SHIFT; } /* Generic modifiers for PTE bits */ static inline pte_t pte_exprotect(pte_t pte) @@ -166,12 +164,6 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) return __pte(pte_val(pte) & ~_PAGE_SWP_EXCLUSIVE); } -/* Insert a PTE, top-level function is out of line. 
It uses an inline - * low level function in the respective pgtable-* files - */ -extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte); - /* This low level function performs the actual PTE insertion * Setting the PTE depends on the MMU type and other factors. It's * an horrible mess that I'm not going to try to clean up now but @@ -282,10 +274,12 @@ static inline int pud_huge(pud_t pud) * for the page which has just been mapped in. */ #if defined(CONFIG_PPC_E500) && defined(CONFIG_HUGETLB_PAGE) -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep); +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr); #else -static inline -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) {} +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) {} #endif #endif /* __ASSEMBLY__ */ diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h index a4893b17705a..11675d97e723 100644 --- a/arch/powerpc/include/asm/pgtable.h +++ b/arch/powerpc/include/asm/pgtable.h @@ -41,6 +41,12 @@ struct mm_struct; #ifndef __ASSEMBLY__ +void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep, + pte_t pte, unsigned int nr); +#define set_ptes set_ptes +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) + #ifndef MAX_PTRS_PER_PGD #define MAX_PTRS_PER_PGD PTRS_PER_PGD #endif @@ -48,6 +54,12 @@ struct mm_struct; /* Keep these as a macros to avoid include dependency mess */ #define pte_page(x) pfn_to_page(pte_pfn(x)) #define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot)) + +static inline unsigned long pte_pfn(pte_t pte) +{ + return (pte_val(pte) & PTE_RPN_MASK) >> PTE_RPN_SHIFT; +} + /* * Select all bits except the pfn */ diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c index fedffe3ae136..ad2afa08e62e 100644 --- a/arch/powerpc/mm/book3s64/hash_utils.c +++ b/arch/powerpc/mm/book3s64/hash_utils.c @@ -1307,18 +1307,19 @@ void hash__early_init_mmu_secondary(void) */ unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap) { - struct page *page; + struct folio *folio; if (!pfn_valid(pte_pfn(pte))) return pp; - page = pte_page(pte); + folio = page_folio(pte_page(pte)); /* page is dirty */ - if (!test_bit(PG_dcache_clean, &page->flags) && !PageReserved(page)) { + if (!test_bit(PG_dcache_clean, &folio->flags) && + !folio_test_reserved(folio)) { if (trap == INTERRUPT_INST_STORAGE) { - flush_dcache_icache_page(page); - set_bit(PG_dcache_clean, &page->flags); + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } else pp |= HPTE_R_N; } diff --git a/arch/powerpc/mm/cacheflush.c b/arch/powerpc/mm/cacheflush.c index 0e9b4879c0f9..8760d2223abe 100644 --- a/arch/powerpc/mm/cacheflush.c +++ b/arch/powerpc/mm/cacheflush.c @@ -148,44 +148,30 @@ static void __flush_dcache_icache(void *p) invalidate_icache_range(addr, addr + PAGE_SIZE); } -static void flush_dcache_icache_hugepage(struct page *page) +void flush_dcache_icache_folio(struct folio *folio) { - int i; - int nr = compound_nr(page); + unsigned int i, nr = folio_nr_pages(folio); - if (!PageHighMem(page)) { + if (flush_coherent_icache()) + return; + + if (!folio_test_highmem(folio)) { + void *addr = folio_address(folio); for (i = 0; i < nr; 
i++) - __flush_dcache_icache(lowmem_page_address(page + i)); - } else { + __flush_dcache_icache(addr + i * PAGE_SIZE); + } else if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) { for (i = 0; i < nr; i++) { - void *start = kmap_local_page(page + i); + void *start = kmap_local_folio(folio, i * PAGE_SIZE); __flush_dcache_icache(start); kunmap_local(start); } - } -} - -void flush_dcache_icache_page(struct page *page) -{ - if (flush_coherent_icache()) - return; - - if (PageCompound(page)) - return flush_dcache_icache_hugepage(page); - - if (!PageHighMem(page)) { - __flush_dcache_icache(lowmem_page_address(page)); - } else if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) { - void *start = kmap_local_page(page); - - __flush_dcache_icache(start); - kunmap_local(start); } else { - flush_dcache_icache_phys(page_to_phys(page)); + unsigned long pfn = folio_pfn(folio); + for (i = 0; i < nr; i++) + flush_dcache_icache_phys((pfn + i) * PAGE_SIZE); } } -EXPORT_SYMBOL(flush_dcache_icache_page); void clear_user_page(void *page, unsigned long vaddr, struct page *pg) { diff --git a/arch/powerpc/mm/nohash/e500_hugetlbpage.c b/arch/powerpc/mm/nohash/e500_hugetlbpage.c index 58c8d9849cb1..6b30e40d4590 100644 --- a/arch/powerpc/mm/nohash/e500_hugetlbpage.c +++ b/arch/powerpc/mm/nohash/e500_hugetlbpage.c @@ -178,7 +178,8 @@ book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea, pte_t pte) * * This must always be called with the pte lock held. */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { if (is_vm_hugetlb_page(vma)) book3e_hugetlb_preload(vma, address, *ptep); diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c index a3dcdb2d5b4b..3f86fd217690 100644 --- a/arch/powerpc/mm/pgtable.c +++ b/arch/powerpc/mm/pgtable.c @@ -58,7 +58,7 @@ static inline int pte_looks_normal(pte_t pte) return 0; } -static struct page *maybe_pte_to_page(pte_t pte) +static struct folio *maybe_pte_to_folio(pte_t pte) { unsigned long pfn = pte_pfn(pte); struct page *page; @@ -68,7 +68,7 @@ static struct page *maybe_pte_to_page(pte_t pte) page = pfn_to_page(pfn); if (PageReserved(page)) return NULL; - return page; + return page_folio(page); } #ifdef CONFIG_PPC_BOOK3S @@ -84,12 +84,12 @@ static pte_t set_pte_filter_hash(pte_t pte) pte = __pte(pte_val(pte) & ~_PAGE_HPTEFLAGS); if (pte_looks_normal(pte) && !(cpu_has_feature(CPU_FTR_COHERENT_ICACHE) || cpu_has_feature(CPU_FTR_NOEXECUTE))) { - struct page *pg = maybe_pte_to_page(pte); - if (!pg) + struct folio *folio = maybe_pte_to_folio(pte); + if (!folio) return pte; - if (!test_bit(PG_dcache_clean, &pg->flags)) { - flush_dcache_icache_page(pg); - set_bit(PG_dcache_clean, &pg->flags); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } } return pte; @@ -107,7 +107,7 @@ static pte_t set_pte_filter_hash(pte_t pte) { return pte; } */ static inline pte_t set_pte_filter(pte_t pte) { - struct page *pg; + struct folio *folio; if (radix_enabled()) return pte; @@ -120,18 +120,18 @@ static inline pte_t set_pte_filter(pte_t pte) return pte; /* If you set _PAGE_EXEC on weird pages you're on your own */ - pg = maybe_pte_to_page(pte); - if (unlikely(!pg)) + folio = maybe_pte_to_folio(pte); + if (unlikely(!folio)) return pte; /* If the page clean, we move on */ - if 
(test_bit(PG_dcache_clean, &pg->flags)) + if (test_bit(PG_dcache_clean, &folio->flags)) return pte; /* If it's an exec fault, we flush the cache and make it clean */ if (is_exec_fault()) { - flush_dcache_icache_page(pg); - set_bit(PG_dcache_clean, &pg->flags); + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); return pte; } @@ -142,7 +142,7 @@ static inline pte_t set_pte_filter(pte_t pte) static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, int dirty) { - struct page *pg; + struct folio *folio; if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) return pte; @@ -168,17 +168,17 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, #endif /* CONFIG_DEBUG_VM */ /* If you set _PAGE_EXEC on weird pages you're on your own */ - pg = maybe_pte_to_page(pte); - if (unlikely(!pg)) + folio = maybe_pte_to_folio(pte); + if (unlikely(!folio)) goto bail; /* If the page is already clean, we move on */ - if (test_bit(PG_dcache_clean, &pg->flags)) + if (test_bit(PG_dcache_clean, &folio->flags)) goto bail; /* Clean the page and set PG_dcache_clean */ - flush_dcache_icache_page(pg); - set_bit(PG_dcache_clean, &pg->flags); + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); bail: return pte_mkexec(pte); @@ -187,8 +187,8 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, /* * set_pte stores a linux PTE into the linux page table. */ -void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte) +void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep, + pte_t pte, unsigned int nr) { /* * Make sure hardware valid bit is not set. We don't do @@ -203,7 +203,16 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, pte = set_pte_filter(pte); /* Perform the setting of the PTE */ - __set_pte_at(mm, addr, ptep, pte, 0); + arch_enter_lazy_mmu_mode(); + for (;;) { + __set_pte_at(mm, addr, ptep, pte, 0); + if (--nr == 0) + break; + ptep++; + pte = __pte(pte_val(pte) + (1UL << PTE_RPN_SHIFT)); + addr += PAGE_SIZE; + } + arch_leave_lazy_mmu_mode(); } void unmap_kernel_page(unsigned long va)
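For callers, the point of the powerpc set_ptes() above is that the dcache/icache filtering in set_pte_filter() runs once per range and the PTE stores are batched under lazy MMU mode, instead of paying both costs once per page. A hypothetical caller (not part of the patch) mapping a whole folio might look like:

	/* Hypothetical: install nr PTEs for one folio with a single call. */
	static void example_map_folio(struct vm_area_struct *vma,
			unsigned long addr, pte_t *ptep, struct folio *folio,
			pgprot_t prot)
	{
		unsigned int nr = folio_nr_pages(folio);
		pte_t pte = mk_pte(&folio->page, prot);

		set_ptes(vma->vm_mm, addr, ptep, pte, nr);
		update_mmu_cache_range(NULL, vma, addr, ptep, nr);
	}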
From patchwork Wed Aug 2 15:13:50 2023 X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13338349 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Alexandre Ghiti , Mike Rapoport , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [PATCH v6 22/38] riscv: Implement the new page table range API Date: Wed, 2 Aug 2023 16:13:50 +0100 Message-Id: <20230802151406.3735276-23-willy@infradead.org> In-Reply-To: <20230802151406.3735276-1-willy@infradead.org> References: <20230802151406.3735276-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_dcache_clean flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Alexandre Ghiti Acked-by: Mike Rapoport (IBM) Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: linux-riscv@lists.infradead.org --- arch/riscv/include/asm/cacheflush.h | 19 +++++++-------- arch/riscv/include/asm/pgtable.h | 37 +++++++++++++++++++---------- arch/riscv/mm/cacheflush.c | 13 +++------- 3 files changed, 36 insertions(+), 33 deletions(-) diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h index 8091b8bf4883..0d8c92c5dfb7 100644 --- a/arch/riscv/include/asm/cacheflush.h +++ b/arch/riscv/include/asm/cacheflush.h @@ -15,20 +15,19 @@ static inline void local_flush_icache_all(void) #define PG_dcache_clean PG_arch_1 -static inline void flush_dcache_page(struct page *page) +static inline void flush_dcache_folio(struct folio *folio) { - /* - * HugeTLB pages are always fully mapped and only head page will be - * set PG_dcache_clean (see comments in flush_icache_pte()). - */ - if (PageHuge(page)) - page = compound_head(page); - - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); } +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} + /* * RISC-V doesn't have an instruction to flush parts of the instruction cache, * so instead we just flush the whole thing.
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index 01e4aabc8898..ac42e9121e52 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -445,8 +445,9 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) /* Commit new configuration to MMU hardware */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_fault *vmf, + struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { /* * The kernel assumes that TLBs don't cache invalid entries, but @@ -455,8 +456,11 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, * Relying on flush_tlb_fix_spurious_fault would suffice, but * the extra traps reduce performance. So, eagerly SFENCE.VMA. */ - local_flush_tlb_page(address); + while (nr--) + local_flush_tlb_page(address + nr * PAGE_SIZE); } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) #define __HAVE_ARCH_UPDATE_MMU_TLB #define update_mmu_tlb update_mmu_cache @@ -487,8 +491,7 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) void flush_icache_pte(pte_t pte); -static inline void __set_pte_at(struct mm_struct *mm, - unsigned long addr, pte_t *ptep, pte_t pteval) +static inline void __set_pte_at(pte_t *ptep, pte_t pteval) { if (pte_present(pteval) && pte_exec(pteval)) flush_icache_pte(pteval); @@ -496,17 +499,25 @@ static inline void __set_pte_at(struct mm_struct *mm, set_pte(ptep, pteval); } -static inline void set_pte_at(struct mm_struct *mm, - unsigned long addr, pte_t *ptep, pte_t pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr) { - page_table_check_ptes_set(mm, ptep, pteval, 1); - __set_pte_at(mm, addr, ptep, pteval); + page_table_check_ptes_set(mm, ptep, pteval, nr); + + for (;;) { + __set_pte_at(ptep, pteval); + if (--nr == 0) + break; + ptep++; + pte_val(pteval) += 1 << _PAGE_PFN_SHIFT; + } } +#define set_ptes set_ptes static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) { - __set_pte_at(mm, addr, ptep, __pte(0)); + __set_pte_at(ptep, __pte(0)); } #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS @@ -515,7 +526,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma, pte_t entry, int dirty) { if (!pte_same(*ptep, entry)) - set_pte_at(vma->vm_mm, address, ptep, entry); + __set_pte_at(ptep, entry); /* * update_mmu_cache will unconditionally execute, handling both * the case that the PTE changed and the spurious fault case. 
@@ -688,14 +699,14 @@ static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 		pmd_t *pmdp, pmd_t pmd)
 {
 	page_table_check_pmd_set(mm, pmdp, pmd);
-	return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
+	return __set_pte_at((pte_t *)pmdp, pmd_pte(pmd));
 }
 
 static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
 		pud_t *pudp, pud_t pud)
 {
 	page_table_check_pud_set(mm, pudp, pud);
-	return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
+	return __set_pte_at((pte_t *)pudp, pud_pte(pud));
 }
 
 #ifdef CONFIG_PAGE_TABLE_CHECK
diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
index fbc59b3f69f2..f1387272a551 100644
--- a/arch/riscv/mm/cacheflush.c
+++ b/arch/riscv/mm/cacheflush.c
@@ -82,18 +82,11 @@ void flush_icache_mm(struct mm_struct *mm, bool local)
 
 #ifdef CONFIG_MMU
 void flush_icache_pte(pte_t pte)
 {
-	struct page *page = pte_page(pte);
+	struct folio *folio = page_folio(pte_page(pte));
 
-	/*
-	 * HugeTLB pages are always fully mapped, so only setting head page's
-	 * PG_dcache_clean flag is enough.
-	 */
-	if (PageHuge(page))
-		page = compound_head(page);
-
-	if (!test_bit(PG_dcache_clean, &page->flags)) {
+	if (!test_bit(PG_dcache_clean, &folio->flags)) {
 		flush_icache_all();
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }
 #endif /* CONFIG_MMU */
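For a sense of how the converted API gets used: a caller can now map an entire
small folio with one set_ptes() call followed by one update_mmu_cache_range()
call. The following is a hypothetical caller, not code from this patch; folio,
vmf, vma, addr and ptep are assumed to come from the fault path. On riscv the
flush loop above then issues one SFENCE.VMA per page.

	/* Sketch: map a 4-page folio in one shot instead of four set_pte_at() calls. */
	unsigned int nr = folio_nr_pages(folio);
	pte_t pte = mk_pte(&folio->page, vma->vm_page_prot);

	set_ptes(vma->vm_mm, addr, ptep, pte, nr);
	update_mmu_cache_range(vmf, vma, addr, ptep, nr);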
From patchwork Wed Aug 2 15:13:51 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Gerald Schaefer, Mike Rapoport, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle, linux-s390@vger.kernel.org
Subject: [PATCH v6 23/38] s390: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:51 +0100
Message-Id: <20230802151406.3735276-24-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
Add set_ptes() and update_mmu_cache_range().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Gerald Schaefer
Acked-by: Mike Rapoport (IBM)
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Alexander Gordeev
Cc: Christian Borntraeger
Cc: Sven Schnelle
Cc: linux-s390@vger.kernel.org
---
 arch/s390/include/asm/pgtable.h | 33 ++++++++++++++++++++++++---------
 1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 30909fe27c24..d28d2e5e68ee 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -47,6 +47,7 @@ static inline void update_page_count(int level, long count)
  * tables contain all the necessary information.
  */
 #define update_mmu_cache(vma, address, ptep)		do { } while (0)
+#define update_mmu_cache_range(vmf, vma, addr, ptep, nr) do { } while (0)
 #define update_mmu_cache_pmd(vma, address, ptep)	do { } while (0)
 
 /*
@@ -1314,20 +1315,34 @@ pgprot_t pgprot_writecombine(pgprot_t prot);
 pgprot_t pgprot_writethrough(pgprot_t prot);
 
 /*
- * Certain architectures need to do special things when PTEs
- * within a page table are directly modified.  Thus, the following
- * hook is made available.
+ * Set multiple PTEs to consecutive pages with a single call.  All PTEs
+ * are within the same folio, PMD and VMA.
  */
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t entry)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+			    pte_t *ptep, pte_t entry, unsigned int nr)
 {
 	if (pte_present(entry))
 		entry = clear_pte_bit(entry, __pgprot(_PAGE_UNUSED));
-	if (mm_has_pgste(mm))
-		ptep_set_pte_at(mm, addr, ptep, entry);
-	else
-		set_pte(ptep, entry);
+	if (mm_has_pgste(mm)) {
+		for (;;) {
+			ptep_set_pte_at(mm, addr, ptep, entry);
+			if (--nr == 0)
+				break;
+			ptep++;
+			entry = __pte(pte_val(entry) + PAGE_SIZE);
+			addr += PAGE_SIZE;
+		}
+	} else {
+		for (;;) {
+			set_pte(ptep, entry);
+			if (--nr == 0)
+				break;
+			ptep++;
+			entry = __pte(pte_val(entry) + PAGE_SIZE);
+		}
+	}
 }
+#define set_ptes set_ptes
 
 /*
  * Conversion functions: convert a page and protection to a page entry,
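s390 cannot use the common fallback because of the pgste path above. For
comparison, the generic set_ptes() added earlier in this series has roughly
the following shape; this is a reconstructed sketch for orientation, and
details may differ from the actual patch:

	#ifndef set_ptes
	static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
			pte_t *ptep, pte_t pte, unsigned int nr)
	{
		page_table_check_ptes_set(mm, ptep, pte, nr);

		arch_enter_lazy_mmu_mode();
		for (;;) {
			set_pte(ptep, pte);
			if (--nr == 0)
				break;
			ptep++;
			/* Step the pfn field to the next page. */
			pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
		}
		arch_leave_lazy_mmu_mode();
	}
	#endif

The s390 loops can add PAGE_SIZE to pte_val() directly because an s390 PTE
holds the page's physical address, so stepping the pfn and stepping the
address are the same operation.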
From patchwork Wed Aug 2 15:13:52 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz, linux-sh@vger.kernel.org
Subject: [PATCH v6 24/38] sh: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:52 +0100
Message-Id: <20230802151406.3735276-25-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().  Change the PG_dcache_clean flag from being
per-page to per-folio.  Flush the entire folio containing the pages in
flush_icache_pages() for ease of implementation.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: Yoshinori Sato
Cc: Rich Felker
Cc: John Paul Adrian Glaubitz
Cc: linux-sh@vger.kernel.org
---
 arch/sh/include/asm/cacheflush.h | 21 ++++++++-----
 arch/sh/include/asm/pgtable.h    |  7 +++--
 arch/sh/include/asm/pgtable_32.h |  5 ++-
 arch/sh/mm/cache-j2.c            |  4 +--
 arch/sh/mm/cache-sh4.c           | 26 +++++++++++-----
 arch/sh/mm/cache-sh7705.c        | 26 ++++++++++------
 arch/sh/mm/cache.c               | 52 ++++++++++++++++++--------------
 arch/sh/mm/kmap.c                |  3 +-
 8 files changed, 89 insertions(+), 55 deletions(-)

diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h
index 481a664287e2..9fceef6f3e00 100644
--- a/arch/sh/include/asm/cacheflush.h
+++ b/arch/sh/include/asm/cacheflush.h
@@ -13,9 +13,9 @@
  *  - flush_cache_page(mm, vmaddr, pfn) flushes a single page
  *  - flush_cache_range(vma, start, end) flushes a range of pages
  *
- *  - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
+ *  - flush_dcache_folio(folio) flushes(wback&invalidates) a folio for dcache
  *  - flush_icache_range(start, end) flushes(invalidates) a range for icache
- *  - flush_icache_page(vma, pg) flushes(invalidates) a page for icache
+ *  - flush_icache_pages(vma, pg, nr) flushes(invalidates) pages for icache
  *  - flush_cache_sigtramp(vaddr) flushes the signal trampoline
  */
 extern void (*local_flush_cache_all)(void *args);
@@ -23,9 +23,9 @@ extern void (*local_flush_cache_mm)(void *args);
 extern void (*local_flush_cache_dup_mm)(void *args);
 extern void (*local_flush_cache_page)(void *args);
 extern void (*local_flush_cache_range)(void *args);
-extern void (*local_flush_dcache_page)(void *args);
+extern void (*local_flush_dcache_folio)(void *args);
 extern void (*local_flush_icache_range)(void *args);
-extern void (*local_flush_icache_page)(void *args);
+extern void (*local_flush_icache_folio)(void *args);
 extern void (*local_flush_cache_sigtramp)(void *args);
 
 static inline void cache_noop(void *args) { }
@@ -42,11 +42,18 @@ extern void flush_cache_page(struct vm_area_struct *vma,
 extern void flush_cache_range(struct vm_area_struct *vma,
 				 unsigned long start, unsigned long end);
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-void flush_dcache_page(struct page *page);
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
+
 extern void flush_icache_range(unsigned long start, unsigned long end);
 #define flush_icache_user_range flush_icache_range
-extern void flush_icache_page(struct vm_area_struct *vma,
-				 struct page *page);
+void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+		unsigned int nr);
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
 extern void flush_cache_sigtramp(unsigned long address);
 
 struct flusher_data {
diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h
index 3ce30becf6df..729f5c6225fb 100644
--- a/arch/sh/include/asm/pgtable.h
+++ b/arch/sh/include/asm/pgtable.h
@@ -102,13 +102,16 @@ extern void __update_cache(struct vm_area_struct *vma,
 extern void __update_tlb(struct vm_area_struct *vma,
 				unsigned long address, pte_t pte);
 
-static inline void
-update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_fault *vmf,
+		struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
 	pte_t pte = *ptep;
 
 	__update_cache(vma, address, pte);
 	__update_tlb(vma, address, pte);
 }
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern void paging_init(void);
diff --git a/arch/sh/include/asm/pgtable_32.h b/arch/sh/include/asm/pgtable_32.h
index 21952b094650..676f3d4ef6ce 100644
--- a/arch/sh/include/asm/pgtable_32.h
+++ b/arch/sh/include/asm/pgtable_32.h
@@ -307,14 +307,13 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
 #define set_pte(pteptr, pteval) (*(pteptr) = pteval)
 #endif
 
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
-
 /*
  * (pmds are folded into pgds so this doesn't get actually called,
  * but the define is needed for a generic inline function.)
  */
 #define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
 #define pfn_pte(pfn, prot) \
 	__pte(((unsigned long long)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 #define pfn_pmd(pfn, prot) \
@@ -323,7 +322,7 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
 
 #define pte_none(x)	(!pte_val(x))
 #define pte_present(x)	((x).pte_low & (_PAGE_PRESENT | _PAGE_PROTNONE))
-#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0)
+#define pte_clear(mm, addr, ptep) set_pte(ptep, __pte(0))
 
 #define pmd_none(x)	(!pmd_val(x))
 #define pmd_present(x)	(pmd_val(x))
diff --git a/arch/sh/mm/cache-j2.c b/arch/sh/mm/cache-j2.c
index f277862a11f5..9ac960214380 100644
--- a/arch/sh/mm/cache-j2.c
+++ b/arch/sh/mm/cache-j2.c
@@ -55,9 +55,9 @@ void __init j2_cache_init(void)
 	local_flush_cache_dup_mm = j2_flush_both;
 	local_flush_cache_page = j2_flush_both;
 	local_flush_cache_range = j2_flush_both;
-	local_flush_dcache_page = j2_flush_dcache;
+	local_flush_dcache_folio = j2_flush_dcache;
 	local_flush_icache_range = j2_flush_icache;
-	local_flush_icache_page = j2_flush_icache;
+	local_flush_icache_folio = j2_flush_icache;
 	local_flush_cache_sigtramp = j2_flush_icache;
 
 	pr_info("Initial J2 CCR is %.8x\n", __raw_readl(j2_ccr_base));
diff --git a/arch/sh/mm/cache-sh4.c b/arch/sh/mm/cache-sh4.c
index 72c2e1b46c08..862046f26981 100644
--- a/arch/sh/mm/cache-sh4.c
+++ b/arch/sh/mm/cache-sh4.c
@@ -107,19 +107,29 @@ static inline void flush_cache_one(unsigned long start, unsigned long phys)
  * Write back & invalidate the D-cache of the page.
  * (To avoid "alias" issues)
  */
-static void sh4_flush_dcache_page(void *arg)
+static void sh4_flush_dcache_folio(void *arg)
 {
-	struct page *page = arg;
-	unsigned long addr = (unsigned long)page_address(page);
+	struct folio *folio = arg;
 #ifndef CONFIG_SMP
-	struct address_space *mapping = page_mapping_file(page);
+	struct address_space *mapping = folio_flush_mapping(folio);
 
 	if (mapping && !mapping_mapped(mapping))
-		clear_bit(PG_dcache_clean, &page->flags);
+		clear_bit(PG_dcache_clean, &folio->flags);
 	else
 #endif
-		flush_cache_one(CACHE_OC_ADDRESS_ARRAY |
-				(addr & shm_align_mask), page_to_phys(page));
+	{
+		unsigned long pfn = folio_pfn(folio);
+		unsigned long addr = (unsigned long)folio_address(folio);
+		unsigned int i, nr = folio_nr_pages(folio);
+
+		for (i = 0; i < nr; i++) {
+			flush_cache_one(CACHE_OC_ADDRESS_ARRAY |
+						(addr & shm_align_mask),
+					pfn * PAGE_SIZE);
+			addr += PAGE_SIZE;
+			pfn++;
+		}
+	}
 
 	wmb();
 }
@@ -379,7 +389,7 @@ void __init sh4_cache_init(void)
 		__raw_readl(CCN_PRR));
 
 	local_flush_icache_range	= sh4_flush_icache_range;
-	local_flush_dcache_page		= sh4_flush_dcache_page;
+	local_flush_dcache_folio	= sh4_flush_dcache_folio;
 	local_flush_cache_all		= sh4_flush_cache_all;
 	local_flush_cache_mm		= sh4_flush_cache_mm;
 	local_flush_cache_dup_mm	= sh4_flush_cache_mm;
diff --git a/arch/sh/mm/cache-sh7705.c b/arch/sh/mm/cache-sh7705.c
index 9b63a53a5e46..b509a407588f 100644
--- a/arch/sh/mm/cache-sh7705.c
+++ b/arch/sh/mm/cache-sh7705.c
@@ -132,15 +132,20 @@ static void __flush_dcache_page(unsigned long phys)
  * Write back & invalidate the D-cache of the page.
  * (To avoid "alias" issues)
  */
-static void sh7705_flush_dcache_page(void *arg)
+static void sh7705_flush_dcache_folio(void *arg)
 {
-	struct page *page = arg;
-	struct address_space *mapping = page_mapping_file(page);
+	struct folio *folio = arg;
+	struct address_space *mapping = folio_flush_mapping(folio);
 
 	if (mapping && !mapping_mapped(mapping))
-		clear_bit(PG_dcache_clean, &page->flags);
-	else
-		__flush_dcache_page(__pa(page_address(page)));
+		clear_bit(PG_dcache_clean, &folio->flags);
+	else {
+		unsigned long pfn = folio_pfn(folio);
+		unsigned int i, nr = folio_nr_pages(folio);
+
+		for (i = 0; i < nr; i++)
+			__flush_dcache_page((pfn + i) * PAGE_SIZE);
+	}
 }
 
 static void sh7705_flush_cache_all(void *args)
@@ -176,19 +181,20 @@ static void sh7705_flush_cache_page(void *args)
  * Not entirely sure why this is necessary on SH3 with 32K cache but
  * without it we get occasional "Memory fault" when loading a program.
  */
-static void sh7705_flush_icache_page(void *page)
+static void sh7705_flush_icache_folio(void *arg)
 {
-	__flush_purge_region(page_address(page), PAGE_SIZE);
+	struct folio *folio = arg;
+
+	__flush_purge_region(folio_address(folio), folio_size(folio));
 }
 
 void __init sh7705_cache_init(void)
 {
 	local_flush_icache_range	= sh7705_flush_icache_range;
-	local_flush_dcache_page		= sh7705_flush_dcache_page;
+	local_flush_dcache_folio	= sh7705_flush_dcache_folio;
 	local_flush_cache_all		= sh7705_flush_cache_all;
 	local_flush_cache_mm		= sh7705_flush_cache_all;
 	local_flush_cache_dup_mm	= sh7705_flush_cache_all;
 	local_flush_cache_range	= sh7705_flush_cache_all;
 	local_flush_cache_page	= sh7705_flush_cache_page;
-	local_flush_icache_page	= sh7705_flush_icache_page;
+	local_flush_icache_folio	= sh7705_flush_icache_folio;
 }
diff --git a/arch/sh/mm/cache.c b/arch/sh/mm/cache.c
index 3aef78ceb820..9bcaa5619eab 100644
--- a/arch/sh/mm/cache.c
+++ b/arch/sh/mm/cache.c
@@ -20,9 +20,9 @@ void (*local_flush_cache_mm)(void *args) = cache_noop;
 void (*local_flush_cache_dup_mm)(void *args) = cache_noop;
 void (*local_flush_cache_page)(void *args) = cache_noop;
 void (*local_flush_cache_range)(void *args) = cache_noop;
-void (*local_flush_dcache_page)(void *args) = cache_noop;
+void (*local_flush_dcache_folio)(void *args) = cache_noop;
 void (*local_flush_icache_range)(void *args) = cache_noop;
-void (*local_flush_icache_page)(void *args) = cache_noop;
+void (*local_flush_icache_folio)(void *args) = cache_noop;
 void (*local_flush_cache_sigtramp)(void *args) = cache_noop;
 
 void (*__flush_wback_region)(void *start, int size);
@@ -61,15 +61,17 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		       unsigned long vaddr, void *dst, const void *src,
 		       unsigned long len)
 {
-	if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) &&
-	    test_bit(PG_dcache_clean, &page->flags)) {
+	struct folio *folio = page_folio(page);
+
+	if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) &&
+	    test_bit(PG_dcache_clean, &folio->flags)) {
 		void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
 		memcpy(vto, src, len);
 		kunmap_coherent(vto);
 	} else {
 		memcpy(dst, src, len);
 		if (boot_cpu_data.dcache.n_aliases)
-			clear_bit(PG_dcache_clean, &page->flags);
+			clear_bit(PG_dcache_clean, &folio->flags);
 	}
 
 	if (vma->vm_flags & VM_EXEC)
@@ -80,27 +82,30 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
 			 unsigned long vaddr, void *dst, const void *src,
 			 unsigned long len)
 {
+	struct folio *folio = page_folio(page);
+
 	if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) &&
-	    test_bit(PG_dcache_clean, &page->flags)) {
+	    test_bit(PG_dcache_clean, &folio->flags)) {
 		void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
 		memcpy(dst, vfrom, len);
 		kunmap_coherent(vfrom);
 	} else {
 		memcpy(dst, src, len);
 		if (boot_cpu_data.dcache.n_aliases)
-			clear_bit(PG_dcache_clean, &page->flags);
+			clear_bit(PG_dcache_clean, &folio->flags);
 	}
 }
 
 void copy_user_highpage(struct page *to, struct page *from,
 			unsigned long vaddr, struct vm_area_struct *vma)
 {
+	struct folio *src = page_folio(from);
 	void *vfrom, *vto;
 
 	vto = kmap_atomic(to);
 
-	if (boot_cpu_data.dcache.n_aliases && page_mapcount(from) &&
-	    test_bit(PG_dcache_clean, &from->flags)) {
+	if (boot_cpu_data.dcache.n_aliases && folio_mapped(src) &&
+	    test_bit(PG_dcache_clean, &src->flags)) {
 		vfrom = kmap_coherent(from, vaddr);
 		copy_page(vto, vfrom);
 		kunmap_coherent(vfrom);
@@ -136,27 +141,28 @@ EXPORT_SYMBOL(clear_user_highpage);
 void __update_cache(struct vm_area_struct *vma,
 		    unsigned long address, pte_t pte)
 {
-	struct page *page;
 	unsigned long pfn = pte_pfn(pte);
 
 	if (!boot_cpu_data.dcache.n_aliases)
 		return;
 
-	page = pfn_to_page(pfn);
 	if (pfn_valid(pfn)) {
-		int dirty = !test_and_set_bit(PG_dcache_clean, &page->flags);
+		struct folio *folio = page_folio(pfn_to_page(pfn));
+		int dirty = !test_and_set_bit(PG_dcache_clean, &folio->flags);
 		if (dirty)
-			__flush_purge_region(page_address(page), PAGE_SIZE);
+			__flush_purge_region(folio_address(folio),
+						folio_size(folio));
 	}
 }
 
 void __flush_anon_page(struct page *page, unsigned long vmaddr)
 {
+	struct folio *folio = page_folio(page);
 	unsigned long addr = (unsigned long) page_address(page);
 
 	if (pages_do_alias(addr, vmaddr)) {
-		if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) &&
-		    test_bit(PG_dcache_clean, &page->flags)) {
+		if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) &&
+		    test_bit(PG_dcache_clean, &folio->flags)) {
 			void *kaddr;
 
 			kaddr = kmap_coherent(page, vmaddr);
@@ -164,7 +170,8 @@ void __flush_anon_page(struct page *page, unsigned long vmaddr)
 			/* __flush_purge_region((void *)kaddr, PAGE_SIZE); */
 			kunmap_coherent(kaddr);
 		} else
-			__flush_purge_region((void *)addr, PAGE_SIZE);
+			__flush_purge_region(folio_address(folio),
+						folio_size(folio));
 	}
 }
 
@@ -215,11 +222,11 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 }
 EXPORT_SYMBOL(flush_cache_range);
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
-	cacheop_on_each_cpu(local_flush_dcache_page, page, 1);
+	cacheop_on_each_cpu(local_flush_dcache_folio, folio, 1);
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
 
 void flush_icache_range(unsigned long start, unsigned long end)
 {
@@ -233,10 +240,11 @@ void flush_icache_range(unsigned long start, unsigned long end)
 }
 EXPORT_SYMBOL(flush_icache_range);
 
-void flush_icache_page(struct vm_area_struct *vma, struct page *page)
+void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+		unsigned int nr)
 {
-	/* Nothing uses the VMA, so just pass the struct page along */
-	cacheop_on_each_cpu(local_flush_icache_page, page, 1);
+	/* Nothing uses the VMA, so just pass the folio along */
+	cacheop_on_each_cpu(local_flush_icache_folio, page_folio(page), 1);
 }
 
 void flush_cache_sigtramp(unsigned long address)
diff --git a/arch/sh/mm/kmap.c b/arch/sh/mm/kmap.c
index 73fd7cc99430..fa50e8f6e7a9 100644
--- a/arch/sh/mm/kmap.c
+++ b/arch/sh/mm/kmap.c
@@ -27,10 +27,11 @@ void __init kmap_coherent_init(void)
 
 void *kmap_coherent(struct page *page, unsigned long addr)
 {
+	struct folio *folio = page_folio(page);
 	enum fixed_addresses idx;
 	unsigned long vaddr;
 
-	BUG_ON(!test_bit(PG_dcache_clean, &page->flags));
+	BUG_ON(!test_bit(PG_dcache_clean, &folio->flags));
 
 	preempt_disable();
 	pagefault_disable();
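The per-folio PG_dcache_clean flag means one test now covers every subpage,
so the deferred-flush pattern used in __update_cache() above writes back the
whole folio at most once after it is dirtied. The core of that pattern as a
standalone sketch (folio is assumed to be in hand; __flush_purge_region() is
the sh write-back-and-invalidate primitive used throughout this patch):

	/* Flush once per folio; the bit then records that it is clean. */
	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
		__flush_purge_region(folio_address(folio), folio_size(folio));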
From patchwork Wed Aug 2 15:13:53 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, "David S. Miller", sparclinux@vger.kernel.org
Subject: [PATCH v6 25/38] sparc32: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:53 +0100
Message-Id: <20230802151406.3735276-26-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Cc: "David S. Miller"
Cc: sparclinux@vger.kernel.org
---
 arch/sparc/include/asm/cacheflush_32.h | 10 ++++++++--
 arch/sparc/include/asm/pgtable_32.h    |  8 ++++----
 arch/sparc/mm/init_32.c                | 13 +++++++++++--
 3 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h
index adb6991d0455..c8dd971f0e88 100644
--- a/arch/sparc/include/asm/cacheflush_32.h
+++ b/arch/sparc/include/asm/cacheflush_32.h
@@ -2,6 +2,7 @@
 #ifndef _SPARC_CACHEFLUSH_H
 #define _SPARC_CACHEFLUSH_H
 
+#include <linux/page-flags.h>
 #include <asm/cachetlb_32.h>
 
 #define flush_cache_all() \
@@ -16,6 +17,7 @@
 	sparc32_cachetlb_ops->cache_page(vma, addr)
 #define flush_icache_range(start, end)		do { } while (0)
 #define flush_icache_page(vma, pg)		do { } while (0)
+#define flush_icache_pages(vma, pg, nr)		do { } while (0)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 	do {							\
@@ -35,11 +37,15 @@
 #define flush_page_for_dma(addr) \
 	sparc32_cachetlb_ops->page_for_dma(addr)
 
-struct page;
 void sparc_flush_page_to_ram(struct page *page);
+void sparc_flush_folio_to_ram(struct folio *folio);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-#define flush_dcache_page(page)			sparc_flush_page_to_ram(page)
+#define flush_dcache_folio(folio)		sparc_flush_folio_to_ram(folio)
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
 
diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
index d4330e3c57a6..315d316614ca 100644
--- a/arch/sparc/include/asm/pgtable_32.h
+++ b/arch/sparc/include/asm/pgtable_32.h
@@ -101,8 +101,6 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	srmmu_swap((unsigned long *)ptep, pte_val(pteval));
 }
 
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
-
 static inline int srmmu_device_memory(unsigned long x)
 {
 	return ((x & 0xF0000000) != 0);
@@ -256,6 +254,7 @@ static inline pte_t pte_mkyoung(pte_t pte)
 	return __pte(pte_val(pte) | SRMMU_REF);
 }
 
+#define PFN_PTE_SHIFT			(PAGE_SHIFT - 4)
 #define pfn_pte(pfn, prot)		mk_pte(pfn_to_page(pfn), prot)
 
 static inline unsigned long pte_pfn(pte_t pte)
@@ -268,7 +267,7 @@ static inline unsigned long pte_pfn(pte_t pte)
 		 */
 		return ~0UL;
 	}
-	return (pte_val(pte) & SRMMU_PTE_PMASK) >> (PAGE_SHIFT-4);
+	return (pte_val(pte) & SRMMU_PTE_PMASK) >> PFN_PTE_SHIFT;
 }
 
 #define pte_page(pte)	pfn_to_page(pte_pfn(pte))
@@ -318,6 +317,7 @@ void mmu_info(struct seq_file *m);
 #define FAULT_CODE_USER     0x4
 
 #define update_mmu_cache(vma, address, ptep) do { } while (0)
+#define update_mmu_cache_range(vmf, vma, address, ptep, nr) do { } while (0)
 
 void srmmu_mapiorange(unsigned int bus, unsigned long xpa,
                       unsigned long xva, unsigned int len);
@@ -422,7 +422,7 @@ static inline int io_remap_pfn_range(struct vm_area_struct *vma,
 ({									\
 	int __changed = !pte_same(*(__ptep), __entry);			\
 	if (__changed) {						\
-		set_pte_at((__vma)->vm_mm, (__address), __ptep, __entry); \
+		set_pte(__ptep, __entry);				\
 		flush_tlb_page(__vma, __address);			\
 	}								\
 	__changed;							\
 })
diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
index 9c0ea457bdf0..d96a14ffceeb 100644
--- a/arch/sparc/mm/init_32.c
+++ b/arch/sparc/mm/init_32.c
@@ -297,11 +297,20 @@ void sparc_flush_page_to_ram(struct page *page)
 {
 	unsigned long vaddr = (unsigned long)page_address(page);
 
-	if (vaddr)
-		__flush_page_to_ram(vaddr);
+	__flush_page_to_ram(vaddr);
 }
 EXPORT_SYMBOL(sparc_flush_page_to_ram);
 
+void sparc_flush_folio_to_ram(struct folio *folio)
+{
+	unsigned long vaddr = (unsigned long)folio_address(folio);
+	unsigned int i, nr = folio_nr_pages(folio);
+
+	for (i = 0; i < nr; i++)
+		__flush_page_to_ram(vaddr + i * PAGE_SIZE);
+}
+EXPORT_SYMBOL(sparc_flush_folio_to_ram);
+
 static const pgprot_t protection_map[16] = {
 	[VM_NONE]					= PAGE_NONE,
 	[VM_READ]					= PAGE_READONLY,
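PFN_PTE_SHIFT is (PAGE_SHIFT - 4) on sparc32 because an SRMMU PTE stores the
physical address shifted right by four bits. Defining that constant is all
the generic set_ptes() needs in order to advance a PTE to the next page
without rebuilding it from scratch. Schematically (illustrative only; prot
stands for whatever protection bits the PTE carries):

	/* Stepping one page in a sparc32 PTE... */
	pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
	/* ...has the same effect as rebuilding it from the next pfn: */
	pte = pfn_pte(pte_pfn(pte) + 1, prot);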
From patchwork Wed Aug 2 15:13:54 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport, "David S. Miller", sparclinux@vger.kernel.org
Subject: [PATCH v6 26/38] sparc64: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:54 +0100
Message-Id: <20230802151406.3735276-27-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
Miller" Cc: sparclinux@vger.kernel.org Reported-by: Guenter Roeck Signed-off-by: Mike Rapoport (IBM) Tested-by: Guenter Roeck --- arch/sparc/include/asm/cacheflush_64.h | 18 ++++-- arch/sparc/include/asm/pgtable_64.h | 29 +++++++--- arch/sparc/kernel/smp_64.c | 56 +++++++++++------- arch/sparc/mm/init_64.c | 78 +++++++++++++++----------- arch/sparc/mm/tlb.c | 5 +- 5 files changed, 119 insertions(+), 67 deletions(-) diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h index b9341836597e..a9a719f04d06 100644 --- a/arch/sparc/include/asm/cacheflush_64.h +++ b/arch/sparc/include/asm/cacheflush_64.h @@ -35,20 +35,26 @@ void flush_icache_range(unsigned long start, unsigned long end); void __flush_icache_page(unsigned long); void __flush_dcache_page(void *addr, int flush_icache); -void flush_dcache_page_impl(struct page *page); +void flush_dcache_folio_impl(struct folio *folio); #ifdef CONFIG_SMP -void smp_flush_dcache_page_impl(struct page *page, int cpu); -void flush_dcache_page_all(struct mm_struct *mm, struct page *page); +void smp_flush_dcache_folio_impl(struct folio *folio, int cpu); +void flush_dcache_folio_all(struct mm_struct *mm, struct folio *folio); #else -#define smp_flush_dcache_page_impl(page,cpu) flush_dcache_page_impl(page) -#define flush_dcache_page_all(mm,page) flush_dcache_page_impl(page) +#define smp_flush_dcache_folio_impl(folio, cpu) flush_dcache_folio_impl(folio) +#define flush_dcache_folio_all(mm, folio) flush_dcache_folio_impl(folio) #endif void __flush_dcache_range(unsigned long start, unsigned long end); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_icache_page(vma, pg) do { } while(0) +#define flush_icache_pages(vma, pg, nr) do { } while(0) void flush_ptrace_access(struct vm_area_struct *, struct page *, unsigned long uaddr, void *kaddr, diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h index 5563efa1a19f..09aa37cc4469 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -86,6 +86,7 @@ extern unsigned long VMALLOC_END; #define vmemmap ((struct page *)VMEMMAP_BASE) #include +#include bool kern_addr_valid(unsigned long addr); @@ -927,8 +928,21 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, maybe_tlb_batch_add(mm, addr, ptep, orig, fullmm, PAGE_SHIFT); } -#define set_pte_at(mm,addr,ptep,pte) \ - __set_pte_at((mm), (addr), (ptep), (pte), 0) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + arch_enter_lazy_mmu_mode(); + for (;;) { + __set_pte_at(mm, addr, ptep, pte, 0); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + addr += PAGE_SIZE; + } + arch_leave_lazy_mmu_mode(); +} +#define set_ptes set_ptes #define pte_clear(mm,addr,ptep) \ set_pte_at((mm), (addr), (ptep), __pte(0UL)) @@ -947,8 +961,8 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, \ if (pfn_valid(this_pfn) && \ (((old_addr) ^ (new_addr)) & (1 << 13))) \ - flush_dcache_page_all(current->mm, \ - pfn_to_page(this_pfn)); \ + flush_dcache_folio_all(current->mm, \ + page_folio(pfn_to_page(this_pfn))); \ } \ newpte; \ }) @@ -963,7 +977,10 @@ struct seq_file; void mmu_info(struct seq_file *); struct 
vm_area_struct; -void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *); +void update_mmu_cache_range(struct vm_fault *, struct vm_area_struct *, + unsigned long addr, pte_t *ptep, unsigned int nr); +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(NULL, vma, addr, ptep, 1) #ifdef CONFIG_TRANSPARENT_HUGEPAGE void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd); @@ -1121,8 +1138,6 @@ static inline bool pte_access_permitted(pte_t pte, bool write) } #define pte_access_permitted pte_access_permitted -#include - /* We provide our own get_unmapped_area to cope with VA holes and * SHM area cache aliasing for userland. */ diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c index e5964d1d8b37..f3969a3600db 100644 --- a/arch/sparc/kernel/smp_64.c +++ b/arch/sparc/kernel/smp_64.c @@ -921,20 +921,26 @@ extern unsigned long xcall_flush_dcache_page_cheetah; #endif extern unsigned long xcall_flush_dcache_page_spitfire; -static inline void __local_flush_dcache_page(struct page *page) +static inline void __local_flush_dcache_folio(struct folio *folio) { + unsigned int i, nr = folio_nr_pages(folio); + #ifdef DCACHE_ALIASING_POSSIBLE - __flush_dcache_page(page_address(page), + for (i = 0; i < nr; i++) + __flush_dcache_page(folio_address(folio) + i * PAGE_SIZE, ((tlb_type == spitfire) && - page_mapping_file(page) != NULL)); + folio_flush_mapping(folio) != NULL)); #else - if (page_mapping_file(page) != NULL && - tlb_type == spitfire) - __flush_icache_page(__pa(page_address(page))); + if (folio_flush_mapping(folio) != NULL && + tlb_type == spitfire) { + unsigned long pfn = folio_pfn(folio) + for (i = 0; i < nr; i++) + __flush_icache_page((pfn + i) * PAGE_SIZE); + } #endif } -void smp_flush_dcache_page_impl(struct page *page, int cpu) +void smp_flush_dcache_folio_impl(struct folio *folio, int cpu) { int this_cpu; @@ -948,14 +954,14 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu) this_cpu = get_cpu(); if (cpu == this_cpu) { - __local_flush_dcache_page(page); + __local_flush_dcache_folio(folio); } else if (cpu_online(cpu)) { - void *pg_addr = page_address(page); + void *pg_addr = folio_address(folio); u64 data0 = 0; if (tlb_type == spitfire) { data0 = ((u64)&xcall_flush_dcache_page_spitfire); - if (page_mapping_file(page) != NULL) + if (folio_flush_mapping(folio) != NULL) data0 |= ((u64)1 << 32); } else if (tlb_type == cheetah || tlb_type == cheetah_plus) { #ifdef DCACHE_ALIASING_POSSIBLE @@ -963,18 +969,23 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu) #endif } if (data0) { - xcall_deliver(data0, __pa(pg_addr), - (u64) pg_addr, cpumask_of(cpu)); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + xcall_deliver(data0, __pa(pg_addr), + (u64) pg_addr, cpumask_of(cpu)); #ifdef CONFIG_DEBUG_DCFLUSH - atomic_inc(&dcpage_flushes_xcall); + atomic_inc(&dcpage_flushes_xcall); #endif + pg_addr += PAGE_SIZE; + } } } put_cpu(); } -void flush_dcache_page_all(struct mm_struct *mm, struct page *page) +void flush_dcache_folio_all(struct mm_struct *mm, struct folio *folio) { void *pg_addr; u64 data0; @@ -988,10 +999,10 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page) atomic_inc(&dcpage_flushes); #endif data0 = 0; - pg_addr = page_address(page); + pg_addr = folio_address(folio); if (tlb_type == spitfire) { data0 = ((u64)&xcall_flush_dcache_page_spitfire); - if (page_mapping_file(page) != NULL) + if (folio_flush_mapping(folio) != NULL) data0 |= ((u64)1 << 32); } else if 
(tlb_type == cheetah || tlb_type == cheetah_plus) { #ifdef DCACHE_ALIASING_POSSIBLE @@ -999,13 +1010,18 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page) #endif } if (data0) { - xcall_deliver(data0, __pa(pg_addr), - (u64) pg_addr, cpu_online_mask); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + xcall_deliver(data0, __pa(pg_addr), + (u64) pg_addr, cpu_online_mask); #ifdef CONFIG_DEBUG_DCFLUSH - atomic_inc(&dcpage_flushes_xcall); + atomic_inc(&dcpage_flushes_xcall); #endif + pg_addr += PAGE_SIZE; + } } - __local_flush_dcache_page(page); + __local_flush_dcache_folio(folio); preempt_enable(); } diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 0d7fd793924c..680ef206565c 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -195,21 +195,26 @@ atomic_t dcpage_flushes_xcall = ATOMIC_INIT(0); #endif #endif -inline void flush_dcache_page_impl(struct page *page) +inline void flush_dcache_folio_impl(struct folio *folio) { + unsigned int i, nr = folio_nr_pages(folio); + BUG_ON(tlb_type == hypervisor); #ifdef CONFIG_DEBUG_DCFLUSH atomic_inc(&dcpage_flushes); #endif #ifdef DCACHE_ALIASING_POSSIBLE - __flush_dcache_page(page_address(page), - ((tlb_type == spitfire) && - page_mapping_file(page) != NULL)); + for (i = 0; i < nr; i++) + __flush_dcache_page(folio_address(folio) + i * PAGE_SIZE, + ((tlb_type == spitfire) && + folio_flush_mapping(folio) != NULL)); #else - if (page_mapping_file(page) != NULL && - tlb_type == spitfire) - __flush_icache_page(__pa(page_address(page))); + if (folio_flush_mapping(folio) != NULL && + tlb_type == spitfire) { + for (i = 0; i < nr; i++) + __flush_icache_page((pfn + i) * PAGE_SIZE); + } #endif } @@ -218,10 +223,10 @@ inline void flush_dcache_page_impl(struct page *page) #define PG_dcache_cpu_mask \ ((1UL<flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask) +#define dcache_dirty_cpu(folio) \ + (((folio)->flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask) -static inline void set_dcache_dirty(struct page *page, int this_cpu) +static inline void set_dcache_dirty(struct folio *folio, int this_cpu) { unsigned long mask = this_cpu; unsigned long non_cpu_bits; @@ -238,11 +243,11 @@ static inline void set_dcache_dirty(struct page *page, int this_cpu) "bne,pn %%xcc, 1b\n\t" " nop" : /* no outputs */ - : "r" (mask), "r" (non_cpu_bits), "r" (&page->flags) + : "r" (mask), "r" (non_cpu_bits), "r" (&folio->flags) : "g1", "g7"); } -static inline void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu) +static inline void clear_dcache_dirty_cpu(struct folio *folio, unsigned long cpu) { unsigned long mask = (1UL << PG_dcache_dirty); @@ -260,7 +265,7 @@ static inline void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu) " nop\n" "2:" : /* no outputs */ - : "r" (cpu), "r" (mask), "r" (&page->flags), + : "r" (cpu), "r" (mask), "r" (&folio->flags), "i" (PG_dcache_cpu_mask), "i" (PG_dcache_cpu_shift) : "g1", "g7"); @@ -284,9 +289,10 @@ static void flush_dcache(unsigned long pfn) page = pfn_to_page(pfn); if (page) { + struct folio *folio = page_folio(page); unsigned long pg_flags; - pg_flags = page->flags; + pg_flags = folio->flags; if (pg_flags & (1UL << PG_dcache_dirty)) { int cpu = ((pg_flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask); @@ -296,11 +302,11 @@ static void flush_dcache(unsigned long pfn) * in the SMP case. 
*/ if (cpu == this_cpu) - flush_dcache_page_impl(page); + flush_dcache_folio_impl(folio); else - smp_flush_dcache_page_impl(page, cpu); + smp_flush_dcache_folio_impl(folio, cpu); - clear_dcache_dirty_cpu(page, cpu); + clear_dcache_dirty_cpu(folio, cpu); put_cpu(); } @@ -388,12 +394,14 @@ bool __init arch_hugetlb_valid_size(unsigned long size) } #endif /* CONFIG_HUGETLB_PAGE */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { struct mm_struct *mm; unsigned long flags; bool is_huge_tsb; pte_t pte = *ptep; + unsigned int i; if (tlb_type != hypervisor) { unsigned long pfn = pte_pfn(pte); @@ -440,15 +448,21 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t * } } #endif - if (!is_huge_tsb) - __update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT, - address, pte_val(pte)); + if (!is_huge_tsb) { + for (i = 0; i < nr; i++) { + __update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT, + address, pte_val(pte)); + address += PAGE_SIZE; + pte_val(pte) += PAGE_SIZE; + } + } spin_unlock_irqrestore(&mm->context.lock, flags); } -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { + unsigned long pfn = folio_pfn(folio); struct address_space *mapping; int this_cpu; @@ -459,35 +473,35 @@ void flush_dcache_page(struct page *page) * is merely the zero page. The 'bigcore' testcase in GDB * causes this case to run millions of times. */ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; this_cpu = get_cpu(); - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) { - int dirty = test_bit(PG_dcache_dirty, &page->flags); + bool dirty = test_bit(PG_dcache_dirty, &folio->flags); if (dirty) { - int dirty_cpu = dcache_dirty_cpu(page); + int dirty_cpu = dcache_dirty_cpu(folio); if (dirty_cpu == this_cpu) goto out; - smp_flush_dcache_page_impl(page, dirty_cpu); + smp_flush_dcache_folio_impl(folio, dirty_cpu); } - set_dcache_dirty(page, this_cpu); + set_dcache_dirty(folio, this_cpu); } else { /* We could delay the flush for the !page_mapping * case too. But that case is for exec env/arg * pages and those are %99 certainly going to get * faulted into the tlb (and thus flushed) anyways. */ - flush_dcache_page_impl(page); + flush_dcache_folio_impl(folio); } out: put_cpu(); } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); void __kprobes flush_icache_range(unsigned long start, unsigned long end) { @@ -2280,10 +2294,10 @@ void __init paging_init(void) setup_page_offset(); /* These build time checkes make sure that the dcache_dirty_cpu() - * page->flags usage will work. + * folio->flags usage will work. * * When a page gets marked as dcache-dirty, we store the - * cpu number starting at bit 32 in the page->flags. Also, + * cpu number starting at bit 32 in the folio->flags. Also, * functions like clear_dcache_dirty_cpu use the cpu mask * in 13-bit signed-immediate instruction fields. 
	 */
diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index 7ecf8556947a..0d41c94ec3ac 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -118,6 +118,7 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr,
 		unsigned long paddr, pfn = pte_pfn(orig);
 		struct address_space *mapping;
 		struct page *page;
+		struct folio *folio;
 
 		if (!pfn_valid(pfn))
 			goto no_cache_flush;
@@ -127,13 +128,13 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr,
 			goto no_cache_flush;
 
 		/* A real file page? */
-		mapping = page_mapping_file(page);
+		mapping = folio_flush_mapping(folio);
 		if (!mapping)
 			goto no_cache_flush;
 
 		paddr = (unsigned long) page_address(page);
 		if ((paddr ^ vaddr) & (1 << 13))
-			flush_dcache_page_all(mm, page);
+			flush_dcache_folio_all(mm, folio);
 	}
 
 no_cache_flush:
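The shape of these per-architecture conversions is easier to see outside of diff context. A minimal user-space model of the pattern (PAGE_SIZE, flush_one_page() and flush_folio() are illustrative stand-ins here, not the kernel API): where the old code flushed a single page, the folio variant walks every page the folio covers, advancing the address by PAGE_SIZE per step.

	#include <stdint.h>
	#include <stdio.h>

	#define PAGE_SIZE 4096UL

	/* Stand-in for a per-page flush primitive. */
	static void flush_one_page(uintptr_t vaddr)
	{
		printf("flush %#lx\n", (unsigned long)vaddr);
	}

	/* Flush each page backing a multi-page folio, one PAGE_SIZE step
	 * at a time, mirroring the loops added in the diffs above. */
	static void flush_folio(uintptr_t base, unsigned int nr_pages)
	{
		unsigned int i;

		for (i = 0; i < nr_pages; i++)
			flush_one_page(base + i * PAGE_SIZE);
	}

	int main(void)
	{
		flush_folio(0x7f0000000000UL, 4);	/* e.g. a 4-page folio */
		return 0;
	}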
From patchwork Wed Aug 2 15:13:55 2023
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Mike Rapoport, Richard Weinberger, Anton Ivanov, Johannes Berg,
 linux-um@lists.infradead.org
Subject: [PATCH v6 27/38] um: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:55 +0100
Message-Id: <20230802151406.3735276-28-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>

Add PFN_PTE_SHIFT and update_mmu_cache_range().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Rapoport (IBM)
Cc: Richard Weinberger
Cc: Anton Ivanov
Cc: Johannes Berg
Cc: linux-um@lists.infradead.org
---
 arch/um/include/asm/pgtable.h | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h
index a70d1618eb35..44f6c76167d9 100644
--- a/arch/um/include/asm/pgtable.h
+++ b/arch/um/include/asm/pgtable.h
@@ -242,11 +242,7 @@ static inline void set_pte(pte_t *pteptr, pte_t pteval)
 	if(pte_present(*pteptr)) *pteptr = pte_mknewprot(*pteptr);
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *pteptr, pte_t pteval)
-{
-	set_pte(pteptr, pteval);
-}
+#define PFN_PTE_SHIFT		PAGE_SHIFT
 
 #define __HAVE_ARCH_PTE_SAME
 static inline int pte_same(pte_t pte_a, pte_t pte_b)
@@ -290,6 +286,7 @@ struct mm_struct;
 extern pte_t *virt_to_pte(struct mm_struct *mm, unsigned long addr);
 
 #define update_mmu_cache(vma,address,ptep) do {} while (0)
+#define update_mmu_cache_range(vmf, vma, address, ptep, nr) do {} while (0)
 
 /*
  * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
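The one-line PFN_PTE_SHIFT definition is what lets the generic set_ptes() synthesise the PTE for each successive page: the PFN sits at bit PFN_PTE_SHIFT in the raw PTE value, so adding 1UL << PFN_PTE_SHIFT steps a PTE to the next page. A sketch with plain integers (pte_t, the shift value and the flags here are stand-ins, not the um definitions):

	#include <stdint.h>
	#include <stdio.h>

	/* Stand-in: assume the PFN starts at bit 12 of the PTE. */
	#define PFN_PTE_SHIFT	12
	#define PTE_FLAGS_MASK	((1UL << PFN_PTE_SHIFT) - 1)

	typedef uint64_t pte_t;

	int main(void)
	{
		pte_t pte = (0x1234UL << PFN_PTE_SHIFT) | 0x25; /* pfn + flags */
		unsigned int i;

		/* The next page's PTE is "this PTE plus one PFN step";
		 * the low flag bits are untouched by the addition. */
		for (i = 0; i < 4; i++) {
			printf("pfn=%#llx flags=%#llx\n",
			       (unsigned long long)(pte >> PFN_PTE_SHIFT),
			       (unsigned long long)(pte & PTE_FLAGS_MASK));
			pte += 1UL << PFN_PTE_SHIFT;
		}
		return 0;
	}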
From patchwork Wed Aug 2 15:13:56 2023
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Mike Rapoport, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin"
Subject: [PATCH v6 28/38] x86: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:56 +0100
Message-Id: <20230802151406.3735276-29-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
Add PFN_PTE_SHIFT and a noop update_mmu_cache_range().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Rapoport (IBM)
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x86@kernel.org
Cc: "H. Peter Anvin"
---
 arch/x86/include/asm/pgtable.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index cd0b6337d03c..dbf8af70b7c2 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -185,6 +185,8 @@ static inline int pte_special(pte_t pte)
 
 static inline u64 protnone_mask(u64 val);
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
+
 static inline unsigned long pte_pfn(pte_t pte)
 {
 	phys_addr_t pfn = pte_val(pte);
@@ -1020,13 +1022,6 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp)
 	return res;
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pte)
-{
-	page_table_check_ptes_set(mm, ptep, pte, 1);
-	set_pte(ptep, pte);
-}
-
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 			      pmd_t *pmdp, pmd_t pmd)
 {
@@ -1292,6 +1287,11 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 		unsigned long addr, pte_t *ptep)
 {
 }
+static inline void update_mmu_cache_range(struct vm_fault *vmf,
+		struct vm_area_struct *vma, unsigned long addr,
+		pte_t *ptep, unsigned int nr)
+{
+}
 static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
 		unsigned long addr, pmd_t *pmd)
 {
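The caller-visible effect of dropping x86's private set_pte_at() is that PTE stores now funnel through the generic set_ptes(), which writes nr consecutive entries in one call. A small user-space model of that contract (a plain array stands in for a page table; set_ptes() here is a simplified model of the generic helper, not the kernel code):

	#include <stdint.h>
	#include <stdio.h>

	#define PFN_PTE_SHIFT	12	/* model value; x86 uses PAGE_SHIFT */

	typedef uint64_t pte_t;

	/* Model: write nr consecutive PTEs, advancing the PFN by one
	 * page each time, as the generic set_ptes() loop does. */
	static void set_ptes(pte_t *ptep, pte_t pte, unsigned int nr)
	{
		for (;;) {
			*ptep = pte;
			if (--nr == 0)
				break;
			ptep++;
			pte += 1UL << PFN_PTE_SHIFT;
		}
	}

	int main(void)
	{
		pte_t table[8] = { 0 };
		int i;

		/* Map a 4-page range starting at pfn 0x40 into slots 2..5. */
		set_ptes(&table[2], (0x40UL << PFN_PTE_SHIFT) | 1, 4);
		for (i = 0; i < 8; i++)
			printf("slot %d: %#llx\n", i, (unsigned long long)table[i]);
		return 0;
	}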
From patchwork Wed Aug 2 15:13:57 2023
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Mike Rapoport, Max Filippov, linux-xtensa@linux-xtensa.org
Subject: [PATCH v6 29/38] xtensa: Implement the new page table range API
Date: Wed, 2 Aug 2023 16:13:57 +0100
Message-Id: <20230802151406.3735276-30-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Rapoport (IBM)
Cc: Max Filippov
Cc: linux-xtensa@linux-xtensa.org
---
 arch/xtensa/include/asm/cacheflush.h |  9 ++-
 arch/xtensa/include/asm/pgtable.h    | 18 +++---
 arch/xtensa/mm/cache.c               | 83 ++++++++++++++++------------
 3 files changed, 63 insertions(+), 47 deletions(-)

diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
index 7b4359312c25..35153f6725e4 100644
--- a/arch/xtensa/include/asm/cacheflush.h
+++ b/arch/xtensa/include/asm/cacheflush.h
@@ -119,8 +119,14 @@ void flush_cache_page(struct vm_area_struct*,
 #define flush_cache_vmap(start,end)	flush_cache_all()
 #define flush_cache_vunmap(start,end)	flush_cache_all()
 
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-void flush_dcache_page(struct page *);
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 
 void local_flush_cache_range(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end);
@@ -156,6 +162,7 @@ void local_flush_cache_page(struct vm_area_struct *vma,
 
 /* This is not required, see Documentation/core-api/cachetlb.rst */
 #define	flush_icache_page(vma,page)			do { } while (0)
+#define	flush_icache_pages(vma, page, nr)		do { } while (0)
 
 #define flush_dcache_mmap_lock(mapping)			do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)		do { } while (0)

diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index fc7a14884c6c..ef79cb6c20dc 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -274,6 +274,7 @@ static inline pte_t pte_mkwrite(pte_t pte)
  * and a page entry and page directory to the page they refer to.
  */
+#define PFN_PTE_SHIFT		PAGE_SHIFT
 #define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)
 #define pte_same(a,b)		(pte_val(a) == pte_val(b))
 #define pte_page(x)		pfn_to_page(pte_pfn(x))
@@ -301,15 +302,9 @@ static inline void update_pte(pte_t *ptep, pte_t pteval)
 
 struct mm_struct;
 
-static inline void
-set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pteval)
-{
-	update_pte(ptep, pteval);
-}
-
-static inline void set_pte(pte_t *ptep, pte_t pteval)
+static inline void set_pte(pte_t *ptep, pte_t pte)
 {
-	update_pte(ptep, pteval);
+	update_pte(ptep, pte);
 }
 
 static inline void
@@ -407,8 +402,11 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 
 #else
 
-extern  void update_mmu_cache(struct vm_area_struct * vma,
-			      unsigned long address, pte_t *ptep);
+struct vm_fault;
+void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr);
+#define update_mmu_cache(vma, address, ptep) \
+	update_mmu_cache_range(NULL, vma, address, ptep, 1)
 
 typedef pte_t *pte_addr_t;
 
diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c
index 19e5a478a7e8..7ec66a79f472 100644
--- a/arch/xtensa/mm/cache.c
+++ b/arch/xtensa/mm/cache.c
@@ -121,9 +121,9 @@ EXPORT_SYMBOL(copy_user_highpage);
  *
  */
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping_file(page);
+	struct address_space *mapping = folio_flush_mapping(folio);
 
 	/*
 	 * If we have a mapping but the page is not mapped to user-space
@@ -132,14 +132,14 @@ void flush_dcache_page(struct page *page)
 	 */
 
 	if (mapping && !mapping_mapped(mapping)) {
-		if (!test_bit(PG_arch_1, &page->flags))
-			set_bit(PG_arch_1, &page->flags);
+		if (!test_bit(PG_arch_1, &folio->flags))
+			set_bit(PG_arch_1, &folio->flags);
 		return;
 
 	} else {
-
-		unsigned long phys = page_to_phys(page);
-		unsigned long temp = page->index << PAGE_SHIFT;
+		unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
+		unsigned long temp = folio_pos(folio);
+		unsigned int i, nr = folio_nr_pages(folio);
 		unsigned long alias = !(DCACHE_ALIAS_EQ(temp, phys));
 		unsigned long virt;
@@ -154,22 +154,26 @@ void flush_dcache_page(struct page *page)
 			return;
 
 		preempt_disable();
-		virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(virt, phys);
+		for (i = 0; i < nr; i++) {
+			virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(virt, phys);
 
-		virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
+			virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
 
-		if (alias)
-			__flush_invalidate_dcache_page_alias(virt, phys);
+			if (alias)
+				__flush_invalidate_dcache_page_alias(virt, phys);
 
-		if (mapping)
-			__invalidate_icache_page_alias(virt, phys);
+			if (mapping)
+				__invalidate_icache_page_alias(virt, phys);
+			phys += PAGE_SIZE;
+			temp += PAGE_SIZE;
+		}
 		preempt_enable();
 	}
 
 	/* There shouldn't be an entry in the cache for this page anymore. */
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
 
 /*
  * For now, flush the whole cache. FIXME??
@@ -207,45 +211,52 @@ EXPORT_SYMBOL(local_flush_cache_page);
 
 #endif /* DCACHE_WAY_SIZE > PAGE_SIZE */
 
-void
-update_mmu_cache(struct vm_area_struct * vma, unsigned long addr, pte_t *ptep)
+void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
-	struct page *page;
+	struct folio *folio;
+	unsigned int i;
 
 	if (!pfn_valid(pfn))
 		return;
 
-	page = pfn_to_page(pfn);
+	folio = page_folio(pfn_to_page(pfn));
 
-	/* Invalidate old entry in TLBs */
-
-	flush_tlb_page(vma, addr);
+	/* Invalidate old entries in TLBs */
+	for (i = 0; i < nr; i++)
+		flush_tlb_page(vma, addr + i * PAGE_SIZE);
+	nr = folio_nr_pages(folio);
 
 #if (DCACHE_WAY_SIZE > PAGE_SIZE)
 
-	if (!PageReserved(page) && test_bit(PG_arch_1, &page->flags)) {
-		unsigned long phys = page_to_phys(page);
+	if (!folio_test_reserved(folio) && test_bit(PG_arch_1, &folio->flags)) {
+		unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
 		unsigned long tmp;
 
 		preempt_disable();
-		tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(tmp, phys);
-		tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(tmp, phys);
-		__invalidate_icache_page_alias(tmp, phys);
+		for (i = 0; i < nr; i++) {
+			tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(tmp, phys);
+			tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(tmp, phys);
+			__invalidate_icache_page_alias(tmp, phys);
+			phys += PAGE_SIZE;
+		}
 		preempt_enable();
 
-		clear_bit(PG_arch_1, &page->flags);
+		clear_bit(PG_arch_1, &folio->flags);
 	}
 #else
-	if (!PageReserved(page) && !test_bit(PG_arch_1, &page->flags)
+	if (!folio_test_reserved(folio) && !test_bit(PG_arch_1, &folio->flags)
 	    && (vma->vm_flags & VM_EXEC) != 0) {
-		unsigned long paddr = (unsigned long)kmap_atomic(page);
-		__flush_dcache_page(paddr);
-		__invalidate_icache_page(paddr);
-		set_bit(PG_arch_1, &page->flags);
-		kunmap_atomic((void *)paddr);
+		for (i = 0; i < nr; i++) {
+			void *paddr = kmap_local_folio(folio, i * PAGE_SIZE);
+			__flush_dcache_page((unsigned long)paddr);
+			__invalidate_icache_page((unsigned long)paddr);
+			kunmap_local(paddr);
+		}
+		set_bit(PG_arch_1, &folio->flags);
 	}
 #endif
 }
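On a virtually-indexed, aliasing dcache like xtensa's, each page must be flushed through a temporary mapping that has the same cache colour as the address being flushed; that is what the TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK) computation in the hunks above selects, once per page of the folio. A user-space model of just the per-page colour arithmetic (the constants are illustrative, not the real xtensa values):

	#include <stdio.h>

	#define PAGE_SIZE	  4096UL
	#define DCACHE_WAY_SIZE	  (4 * PAGE_SIZE)	/* pretend 16KiB way */
	#define DCACHE_ALIAS_MASK (DCACHE_WAY_SIZE - PAGE_SIZE)
	#define TLBTEMP_BASE_1	  0xd0000000UL		/* made-up window base */

	int main(void)
	{
		unsigned long phys = 0x1000f000UL & ~(PAGE_SIZE - 1);
		unsigned int i, nr = 4;			/* a 4-page folio */

		for (i = 0; i < nr; i++) {
			/* Pick the temp virtual address with the same
			 * colour (low alias bits) as the physical page. */
			unsigned long virt =
				TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
			printf("page %u: phys=%#lx -> temp virt=%#lx\n",
			       i, phys, virt);
			phys += PAGE_SIZE;
		}
		return 0;
	}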
From patchwork Wed Aug 2 15:13:58 2023
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Anshuman Khandual
Subject: [PATCH v6 30/38] mm: Remove page_mapping_file()
Date: Wed, 2 Aug 2023 16:13:58 +0100
Message-Id: <20230802151406.3735276-31-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
This function has no more users.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Anshuman Khandual
---
 include/linux/pagemap.h | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index bd522a64b714..6f8d6529b350 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -414,14 +414,6 @@ static inline struct address_space *page_file_mapping(struct page *page)
 	return folio_file_mapping(page_folio(page));
 }
 
-/*
- * For file cache pages, return the address_space, otherwise return NULL
- */
-static inline struct address_space *page_mapping_file(struct page *page)
-{
-	return folio_flush_mapping(page_folio(page));
-}
-
 /**
  * folio_inode - Get the host inode for this folio.
  * @folio: The folio.
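Any remaining out-of-tree callers can be converted mechanically; the removed wrapper's body shows the replacement directly:

	/* before */
	mapping = page_mapping_file(page);
	/* after: go via the folio, exactly as the removed wrapper did */
	mapping = folio_flush_mapping(page_folio(page));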
From patchwork Wed Aug 2 15:13:59 2023
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 31/38] mm: Rationalise flush_icache_pages() and flush_icache_page()
Date: Wed, 2 Aug 2023 16:13:59 +0100
Message-Id: <20230802151406.3735276-32-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
Move the default (no-op) implementation of flush_icache_pages()
to <linux/cacheflush.h> from <asm-generic/cacheflush.h>.
Remove the flush_icache_page() wrapper from each architecture
into <linux/cacheflush.h>.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 arch/alpha/include/asm/cacheflush.h     |  5 +----
 arch/arc/include/asm/cacheflush.h       |  9 ---------
 arch/arm/include/asm/cacheflush.h       |  7 -------
 arch/csky/abiv1/inc/abi/cacheflush.h    |  1 -
 arch/csky/abiv2/inc/abi/cacheflush.h    |  1 -
 arch/hexagon/include/asm/cacheflush.h   |  2 +-
 arch/loongarch/include/asm/cacheflush.h |  2 --
 arch/m68k/include/asm/cacheflush_mm.h   |  1 -
 arch/mips/include/asm/cacheflush.h      |  6 ------
 arch/nios2/include/asm/cacheflush.h     |  2 +-
 arch/parisc/include/asm/cacheflush.h    |  2 +-
 arch/sh/include/asm/cacheflush.h        |  2 +-
 arch/sparc/include/asm/cacheflush_32.h  |  2 --
 arch/sparc/include/asm/cacheflush_64.h  |  3 ---
 arch/xtensa/include/asm/cacheflush.h    |  4 ----
 include/asm-generic/cacheflush.h        | 12 ------------
 include/linux/cacheflush.h              |  9 +++++++++
 17 files changed, 14 insertions(+), 56 deletions(-)

diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h
index 3956460e69e2..36a7e924c3b9 100644
--- a/arch/alpha/include/asm/cacheflush.h
+++ b/arch/alpha/include/asm/cacheflush.h
@@ -53,10 +53,6 @@ extern void flush_icache_user_page(struct vm_area_struct *vma,
 #define flush_icache_user_page flush_icache_user_page
 #endif /* CONFIG_SMP */
 
-/* This is used only in __do_fault and do_swap_page. */
-#define flush_icache_page(vma, page) \
-	flush_icache_user_page((vma), (page), 0, 0)
-
 /*
  * Both implementations of flush_icache_user_page flush the entire
  * address space, so one call, no matter how many pages.
@@ -66,6 +62,7 @@ static inline void flush_icache_pages(struct vm_area_struct *vma,
 {
 	flush_icache_user_page(vma, page, 0, 0);
 }
+#define flush_icache_pages flush_icache_pages
 
 #include <asm-generic/cacheflush.h>

diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
index 04f65f588510..bd5b1a9a0544 100644
--- a/arch/arc/include/asm/cacheflush.h
+++ b/arch/arc/include/asm/cacheflush.h
@@ -18,15 +18,6 @@
 #include <linux/mm.h>
 #include <asm/shmparam.h>
 
-/*
- * Semantically we need this because icache doesn't snoop dcache/dma.
- * However ARC Cache flush requires paddr as well as vaddr, latter not available
- * in the flush_icache_page() API. So we no-op it but do the equivalent work
- * in update_mmu_cache()
- */
-#define flush_icache_page(vma, page)
-#define flush_icache_pages(vma, page, nr)
-
 void flush_cache_all(void);
 
 void flush_icache_range(unsigned long kstart, unsigned long kend);

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 841e268d2374..f6181f69577f 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -321,13 +321,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 #define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
 #define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)
 
-/*
- * We don't appear to need to do anything here.  In fact, if we did, we'd
- * duplicate cache flushing elsewhere performed by flush_dcache_page().
- */
-#define flush_icache_page(vma,page)	do { } while (0)
-#define flush_icache_pages(vma, page, nr)	do { } while (0)
-
 /*
  * flush_cache_vmap() is used when creating mappings (eg, via vmap,
  * vmalloc, ioremap etc) in kernel space for pages.  On non-VIPT
diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index 0d6cb65624c4..908d8b0bc4fd 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -45,7 +45,6 @@ extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, u
 
 #define flush_cache_vmap(start, end)		cache_wbinv_all()
 #define flush_cache_vunmap(start, end)		cache_wbinv_all()
 
-#define flush_icache_page(vma, page)		do {} while (0);
 #define flush_icache_range(start, end)		cache_wbinv_range(start, end)
 #define flush_icache_mm_range(mm, start, end)	cache_wbinv_range(start, end)
 #define flush_icache_deferred(mm)		do {} while (0);

diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
index 9c728933a776..40be16907267 100644
--- a/arch/csky/abiv2/inc/abi/cacheflush.h
+++ b/arch/csky/abiv2/inc/abi/cacheflush.h
@@ -33,7 +33,6 @@ static inline void flush_dcache_page(struct page *page)
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
-#define flush_icache_page(vma, page)		do { } while (0)
 
 #define flush_icache_range(start, end)		cache_wbinv_range(start, end)

diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
index dc3f500a5a01..bfff514a81c8 100644
--- a/arch/hexagon/include/asm/cacheflush.h
+++ b/arch/hexagon/include/asm/cacheflush.h
@@ -18,7 +18,7 @@
  *  - flush_cache_range(vma, start, end) flushes a range of pages
  *  - flush_icache_range(start, end) flush a range of instructions
  *  - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
- *  - flush_icache_page(vma, pg) flushes(invalidates) a page for icache
+ *  - flush_icache_pages(vma, pg, nr) flushes(invalidates) nr pages for icache
  *
  * Need to doublecheck which one is really needed for ptrace stuff to work.
 */
diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
index 88a44da50a3b..80bd74106985 100644
--- a/arch/loongarch/include/asm/cacheflush.h
+++ b/arch/loongarch/include/asm/cacheflush.h
@@ -46,8 +46,6 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
 #define flush_cache_page(vma, vmaddr, pfn)		do { } while (0)
 #define flush_cache_vmap(start, end)			do { } while (0)
 #define flush_cache_vunmap(start, end)			do { } while (0)
-#define flush_icache_page(vma, page)			do { } while (0)
-#define flush_icache_pages(vma, page)			do { } while (0)
 #define flush_icache_user_page(vma, page, addr, len)	do { } while (0)
 #define flush_dcache_page(page)				do { } while (0)
 #define flush_dcache_mmap_lock(mapping)			do { } while (0)

diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
index 88eb85e81ef6..ed12358c4783 100644
--- a/arch/m68k/include/asm/cacheflush_mm.h
+++ b/arch/m68k/include/asm/cacheflush_mm.h
@@ -261,7 +261,6 @@ static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
 #define flush_icache_pages(vma, page, nr)	\
 	__flush_pages_to_ram(page_address(page), nr)
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
 
 extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
 				    unsigned long addr, int len);

diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index 0f389bc7cb90..f36c2519ed97 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -82,12 +82,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 		__flush_anon_page(page, vmaddr);
 }
 
-static inline void flush_icache_pages(struct vm_area_struct *vma,
-		struct page *page, unsigned int nr)
-{
-}
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
-
 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*local_flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*__flush_icache_user_range)(unsigned long start,

diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
index 8624ca83cffe..7c48c5213fb7 100644
--- a/arch/nios2/include/asm/cacheflush.h
+++ b/arch/nios2/include/asm/cacheflush.h
@@ -35,7 +35,7 @@ void flush_dcache_folio(struct folio *folio);
 extern void flush_icache_range(unsigned long start, unsigned long end);
 void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 		unsigned int nr);
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1);
+#define flush_icache_pages flush_icache_pages
 
 #define flush_cache_vmap(start, end)		flush_dcache_range(start, end)
 #define flush_cache_vunmap(start, end)		flush_dcache_range(start, end)

diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index b77c3e0c37d3..b4006f2a9705 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -60,7 +60,7 @@ static inline void flush_dcache_page(struct page *page)
 
 void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 		unsigned int nr);
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
+#define flush_icache_pages flush_icache_pages
 
 #define flush_icache_range(s,e)		do { 	\
 	flush_kernel_dcache_range_asm(s,e); 	\

diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h
index 9fceef6f3e00..878b6b551bd2 100644
--- a/arch/sh/include/asm/cacheflush.h
+++ b/arch/sh/include/asm/cacheflush.h
@@ -53,7 +53,7 @@ extern void flush_icache_range(unsigned long start, unsigned long end);
 #define flush_icache_user_range flush_icache_range
 void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 		unsigned int nr);
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
+#define flush_icache_pages flush_icache_pages
 extern void flush_cache_sigtramp(unsigned long address);
 
 struct flusher_data {

diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h
index c8dd971f0e88..f3b7270bf71b 100644
--- a/arch/sparc/include/asm/cacheflush_32.h
+++ b/arch/sparc/include/asm/cacheflush_32.h
@@ -16,8 +16,6 @@
 #define flush_cache_page(vma,addr,pfn) \
 	sparc32_cachetlb_ops->cache_page(vma, addr)
 #define flush_icache_range(start, end)		do { } while (0)
-#define flush_icache_page(vma, pg)		do { } while (0)
-#define flush_icache_pages(vma, pg, nr)		do { } while (0)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 	do {							\

diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h
index a9a719f04d06..0e879004efff 100644
--- a/arch/sparc/include/asm/cacheflush_64.h
+++ b/arch/sparc/include/asm/cacheflush_64.h
@@ -53,9 +53,6 @@ static inline void flush_dcache_page(struct page *page)
 	flush_dcache_folio(page_folio(page));
 }
 
-#define flush_icache_page(vma, pg)	do { } while(0)
-#define flush_icache_pages(vma, pg, nr)	do { } while(0)
-
 void flush_ptrace_access(struct vm_area_struct *, struct page *,
 			 unsigned long uaddr, void *kaddr,
 			 unsigned long len, int write);

diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
index 35153f6725e4..785a00ce83c1 100644
--- a/arch/xtensa/include/asm/cacheflush.h
+++ b/arch/xtensa/include/asm/cacheflush.h
@@ -160,10 +160,6 @@ void local_flush_cache_page(struct vm_area_struct *vma,
 		__invalidate_icache_range(start,(end) - (start));	\
 	} while (0)
 
-/* This is not required, see Documentation/core-api/cachetlb.rst */
-#define	flush_icache_page(vma,page)		do { } while (0)
-#define	flush_icache_pages(vma, page, nr)	do { } while (0)
-
 #define flush_dcache_mmap_lock(mapping)			do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)		do { } while (0)

diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index 09d51a680765..84ec53ccc450 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -77,18 +77,6 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #define flush_icache_user_range flush_icache_range
 #endif
 
-#ifndef flush_icache_page
-static inline void flush_icache_pages(struct vm_area_struct *vma,
-				      struct page *page, unsigned int nr)
-{
-}
-
-static inline void flush_icache_page(struct vm_area_struct *vma,
-				     struct page *page)
-{
-}
-#endif
-
 #ifndef flush_icache_user_page
 static inline void flush_icache_user_page(struct vm_area_struct *vma,
 					  struct page *page,

diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
index 82136f3fcf54..55f297b2c23f 100644
--- a/include/linux/cacheflush.h
+++ b/include/linux/cacheflush.h
@@ -17,4 +17,13 @@ static inline void flush_dcache_folio(struct folio *folio)
 #define flush_dcache_folio flush_dcache_folio
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
 
+#ifndef flush_icache_pages
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+				      struct page *page, unsigned int nr)
+{
+}
+#endif
+
+#define flush_icache_page(vma, page) \
+	flush_icache_pages(vma, page, 1)
+
 #endif /* _LINUX_CACHEFLUSH_H */
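The `#define flush_icache_pages flush_icache_pages` lines the architectures add are the usual kernel override-detection trick: defining the macro as its own name lets the generic header's #ifndef see that an arch implementation exists. A standalone illustration of the idiom (function names and bodies are placeholders):

	#include <stdio.h>

	/* "Architecture" header: provides its own helper and marks it
	 * present by defining the macro as itself.  A self-referential
	 * macro does not recurse, so calls still reach the function. */
	static void flush_icache_pages(void) { puts("arch override"); }
	#define flush_icache_pages flush_icache_pages

	/* "Generic" header: installs the fallback only when the arch
	 * did not define the macro above. */
	#ifndef flush_icache_pages
	static void flush_icache_pages(void) { puts("generic no-op"); }
	#endif

	int main(void)
	{
		flush_icache_pages();	/* prints "arch override" */
		return 0;
	}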
From patchwork Wed Aug 2 15:14:00 2023
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Anshuman Khandual
Subject: [PATCH v6 32/38] mm: Tidy up set_ptes definition
Date: Wed, 2 Aug 2023 16:14:00 +0100
Message-Id: <20230802151406.3735276-33-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>

Now that all architectures are converted, we can remove the PFN_PTE_SHIFT
ifdef and we can define set_pte_at() unconditionally.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Anshuman Khandual
---
 include/linux/pgtable.h | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 3fde0d5d1c29..9df42e4721fc 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -204,7 +204,6 @@ static inline int pmd_young(pmd_t pmd)
 #endif
 
 #ifndef set_ptes
-#ifdef PFN_PTE_SHIFT
 /**
  * set_ptes - Map consecutive pages to a contiguous range of addresses.
  * @mm: Address space to map the pages into.
@@ -234,13 +233,8 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 	}
 	arch_leave_lazy_mmu_mode();
 }
-#ifndef set_pte_at
-#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
-#endif
 #endif
-#else
 #define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
-#endif
 
 #ifndef __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 extern int ptep_set_access_flags(struct vm_area_struct *vma,
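For readers without the full header in front of them, the generic set_ptes()
these hunks simplify has roughly the following shape. This is a sketch
reconstructed from the context lines above, assuming PFN_PTE_SHIFT is the
offset of the PFN field within a PTE; debug hooks are omitted and details
may differ from the real include/linux/pgtable.h.

/* Sketch: write nr consecutive PTEs, advancing the PFN each time. */
static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
		pte_t *ptep, pte_t pte, unsigned int nr)
{
	arch_enter_lazy_mmu_mode();
	for (;;) {
		set_pte(ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		/* point the next PTE at the next page frame */
		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
	}
	arch_leave_lazy_mmu_mode();
}

With the #else branch gone, set_pte_at() is now always the nr == 1 case of
set_ptes().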
From patchwork Wed Aug 2 15:14:01 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Anshuman Khandual
Subject: [PATCH v6 33/38] mm: Use flush_icache_pages() in do_set_pmd()
Date: Wed, 2 Aug 2023 16:14:01 +0100
Message-Id: <20230802151406.3735276-34-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

Push the iteration over each page down to the architectures (many can
flush the entire THP without iteration).
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Anshuman Khandual
---
 mm/memory.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 4e6e9ec7eaaf..e25edd4c24b8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4400,7 +4400,6 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	pmd_t entry;
-	int i;
 	vm_fault_t ret = VM_FAULT_FALLBACK;
 
 	if (!transhuge_vma_suitable(vma, haddr))
@@ -4433,8 +4432,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 	if (unlikely(!pmd_none(*vmf->pmd)))
 		goto out;
 
-	for (i = 0; i < HPAGE_PMD_NR; i++)
-		flush_icache_page(vma, page + i);
+	flush_icache_pages(vma, page, HPAGE_PMD_NR);
 
 	entry = mk_huge_pmd(page, vma->vm_page_prot);
 	if (write)
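An architecture with no cheap ranged flush can satisfy the new interface
with a trivial loop, while one that can operate on a whole virtual range
flushes the THP in one go. A minimal sketch of the loop variant, where
arch_flush_one_icache_page() stands in for whatever per-page primitive the
architecture already has (the name is hypothetical, for illustration only):

static inline void flush_icache_pages(struct vm_area_struct *vma,
		struct page *page, unsigned int nr)
{
	unsigned int i;

	/* fallback shape: flush the nr pages one at a time */
	for (i = 0; i < nr; i++)
		arch_flush_one_icache_page(vma, page + i);	/* hypothetical */
}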
From patchwork Wed Aug 2 15:14:02 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: Yin Fengwei, linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v6 34/38] filemap: Add filemap_map_folio_range()
Date: Wed, 2 Aug 2023 16:14:02 +0100
Message-Id: <20230802151406.3735276-35-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

From: Yin Fengwei

filemap_map_folio_range() maps a partial or full folio. Compared to the
original filemap_map_pages(), it updates the folio refcount once per folio
instead of once per page, which gives a minor performance improvement for
large folios.

With a will-it-scale.page_fault3-like app (the file write fault test
changed to a read fault test; an attempt to upstream it to will-it-scale
is at [1]), this got a 2% performance gain on a 48C/96T Cascade Lake test
box with 96 processes running against xfs.
[1]: https://github.com/antonblanchard/will-it-scale/pull/37

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 109 ++++++++++++++++++++++++++-------------------------
 1 file changed, 55 insertions(+), 54 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 4b23c8dc993c..9dc15af7ab5b 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2173,16 +2173,6 @@ unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start,
 }
 EXPORT_SYMBOL(filemap_get_folios);
 
-static inline
-bool folio_more_pages(struct folio *folio, pgoff_t index, pgoff_t max)
-{
-	if (!folio_test_large(folio) || folio_test_hugetlb(folio))
-		return false;
-	if (index >= max)
-		return false;
-	return index < folio_next_index(folio) - 1;
-}
-
 /**
  * filemap_get_folios_contig - Get a batch of contiguous folios
  * @mapping:	The address_space to search
@@ -3441,10 +3431,10 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct folio *folio,
 	return false;
 }
 
-static struct folio *next_uptodate_page(struct folio *folio,
-				       struct address_space *mapping,
-				       struct xa_state *xas, pgoff_t end_pgoff)
+static struct folio *next_uptodate_folio(struct xa_state *xas,
+		struct address_space *mapping, pgoff_t end_pgoff)
 {
+	struct folio *folio = xas_next_entry(xas, end_pgoff);
 	unsigned long max_idx;
 
 	do {
@@ -3482,20 +3472,51 @@ static struct folio *next_uptodate_page(struct folio *folio,
 	return NULL;
 }
 
-static inline struct folio *first_map_page(struct address_space *mapping,
-					   struct xa_state *xas,
-					   pgoff_t end_pgoff)
+/*
+ * Map page range [start_page, start_page + nr_pages) of folio.
+ * start_page is obtained from start by folio_page(folio, start)
+ */
+static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
+		struct folio *folio, unsigned long start,
+		unsigned long addr, unsigned int nr_pages)
 {
-	return next_uptodate_page(xas_find(xas, end_pgoff),
-				  mapping, xas, end_pgoff);
-}
+	vm_fault_t ret = 0;
+	struct vm_area_struct *vma = vmf->vma;
+	struct file *file = vma->vm_file;
+	struct page *page = folio_page(folio, start);
+	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
+	unsigned int ref_count = 0, count = 0;
 
-static inline struct folio *next_map_page(struct address_space *mapping,
-					  struct xa_state *xas,
-					  pgoff_t end_pgoff)
-{
-	return next_uptodate_page(xas_next_entry(xas, end_pgoff),
-				  mapping, xas, end_pgoff);
+	do {
+		if (PageHWPoison(page))
+			continue;
+
+		if (mmap_miss > 0)
+			mmap_miss--;
+
+		/*
+		 * NOTE: If there're PTE markers, we'll leave them to be
+		 * handled in the specific fault path, and it'll prohibit the
+		 * fault-around logic.
+		 */
+		if (!pte_none(*vmf->pte))
+			continue;
+
+		if (vmf->address == addr)
+			ret = VM_FAULT_NOPAGE;
+
+		ref_count++;
+		do_set_pte(vmf, page, addr);
+		update_mmu_cache(vma, addr, vmf->pte);
+	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
+
+	/* Restore the vmf->pte */
+	vmf->pte -= nr_pages;
+
+	folio_ref_add(folio, ref_count);
+	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
+
+	return ret;
 }
 
 vm_fault_t filemap_map_pages(struct vm_fault *vmf,
@@ -3508,12 +3529,11 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	unsigned long addr;
 	XA_STATE(xas, &mapping->i_pages, start_pgoff);
 	struct folio *folio;
-	struct page *page;
-	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
 	vm_fault_t ret = 0;
+	int nr_pages = 0;
 
 	rcu_read_lock();
-	folio = first_map_page(mapping, &xas, end_pgoff);
+	folio = next_uptodate_folio(&xas, mapping, end_pgoff);
 	if (!folio)
 		goto out;
 
@@ -3530,17 +3550,13 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 		goto out;
 	}
 	do {
-again:
-		page = folio_file_page(folio, xas.xa_index);
-		if (PageHWPoison(page))
-			goto unlock;
-
-		if (mmap_miss > 0)
-			mmap_miss--;
+		unsigned long end;
 
 		addr += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
 		vmf->pte += xas.xa_index - last_pgoff;
 		last_pgoff = xas.xa_index;
+		end = folio->index + folio_nr_pages(folio) - 1;
+		nr_pages = min(end, end_pgoff) - xas.xa_index + 1;
 
 		/*
 		 * NOTE: If there're PTE markers, we'll leave them to be
@@ -3550,32 +3566,17 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 		if (!pte_none(ptep_get(vmf->pte)))
 			goto unlock;
 
-		/* We're about to handle the fault */
-		if (vmf->address == addr)
-			ret = VM_FAULT_NOPAGE;
+		ret |= filemap_map_folio_range(vmf, folio,
+				xas.xa_index - folio->index, addr, nr_pages);
 
-		do_set_pte(vmf, page, addr);
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, addr, vmf->pte);
-		if (folio_more_pages(folio, xas.xa_index, end_pgoff)) {
-			xas.xa_index++;
-			folio_ref_inc(folio);
-			goto again;
-		}
-		folio_unlock(folio);
-		continue;
 unlock:
-		if (folio_more_pages(folio, xas.xa_index, end_pgoff)) {
-			xas.xa_index++;
-			goto again;
-		}
 		folio_unlock(folio);
 		folio_put(folio);
-	} while ((folio = next_map_page(mapping, &xas, end_pgoff)) != NULL);
+		folio = next_uptodate_folio(&xas, mapping, end_pgoff);
+	} while (folio);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 out:
 	rcu_read_unlock();
-	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
 	return ret;
 }
 EXPORT_SYMBOL(filemap_map_pages);
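The two added lines that size each batch are easy to misread, so here is a
worked example with made-up numbers: a 16-page folio at index 32, a
fault-around window ending at end_pgoff = 40, and xas.xa_index currently
at 35.

	end = 32 + 16 - 1;			/* 47: last page index of the folio */
	nr_pages = min(47, 40) - 35 + 1;	/* 6: map indices 35..40 inclusive */

The batch therefore never runs past the end of the folio or past the
fault-around window, whichever comes first.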
From patchwork Wed Aug 2 15:14:03 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: Yin Fengwei, linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v6 35/38] rmap: add folio_add_file_rmap_range()
Date: Wed, 2 Aug 2023 16:14:03 +0100
Message-Id: <20230802151406.3735276-36-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
From: Yin Fengwei

folio_add_file_rmap_range() allows adding a pte mapping to a specific
range of a file folio. Compared to page_add_file_rmap(), it batches the
__lruvec_stat updates for large folios.

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/rmap.h |  2 ++
 mm/rmap.c            | 60 +++++++++++++++++++++++++++++++++-----------
 2 files changed, 48 insertions(+), 14 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index f578975c12c0..d442d1e5425d 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -198,6 +198,8 @@ void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
 void page_add_file_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
+void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
+		struct vm_area_struct *, bool compound);
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
 
 void folio_remove_rmap_range(struct folio *folio, struct page *page,
diff --git a/mm/rmap.c b/mm/rmap.c
index 54124f18e0e4..d82d52ebf3a6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1294,31 +1294,39 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 }
 
 /**
- * page_add_file_rmap - add pte mapping to a file page
- * @page:	the page to add the mapping to
+ * folio_add_file_rmap_range - add pte mapping to page range of a folio
+ * @folio:	The folio to add the mapping to
+ * @page:	The first page to add
+ * @nr_pages:	The number of pages which will be mapped
  * @vma:	the vm area in which the mapping is added
  * @compound:	charge the page as compound or small page
+ *
+ * The page range of folio is defined by [first_page, first_page + nr_pages)
+ *
  * The caller needs to hold the pte lock.
  */
-void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
-		bool compound)
+void folio_add_file_rmap_range(struct folio *folio, struct page *page,
+		unsigned int nr_pages, struct vm_area_struct *vma,
+		bool compound)
 {
-	struct folio *folio = page_folio(page);
 	atomic_t *mapped = &folio->_nr_pages_mapped;
-	int nr = 0, nr_pmdmapped = 0;
-	bool first;
+	unsigned int nr_pmdmapped = 0, first;
+	int nr = 0;
 
-	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
+	VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio);
 
 	/* Is page being mapped by PTE? Is this its first map to be added? */
 	if (likely(!compound)) {
-		first = atomic_inc_and_test(&page->_mapcount);
-		nr = first;
-		if (first && folio_test_large(folio)) {
-			nr = atomic_inc_return_relaxed(mapped);
-			nr = (nr < COMPOUND_MAPPED);
-		}
+		do {
+			first = atomic_inc_and_test(&page->_mapcount);
+			if (first && folio_test_large(folio)) {
+				first = atomic_inc_return_relaxed(mapped);
+				first = (first < COMPOUND_MAPPED);
+			}
+
+			if (first)
+				nr++;
+		} while (page++, --nr_pages > 0);
 	} else if (folio_test_pmd_mappable(folio)) {
 		/* That test is redundant: it's for safety or to optimize out */
 
@@ -1347,6 +1355,30 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
 		mlock_vma_folio(folio, vma, compound);
 }
 
+/**
+ * page_add_file_rmap - add pte mapping to a file page
+ * @page:	the page to add the mapping to
+ * @vma:	the vm area in which the mapping is added
+ * @compound:	charge the page as compound or small page
+ *
+ * The caller needs to hold the pte lock.
+ */
+void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
+		bool compound)
+{
+	struct folio *folio = page_folio(page);
+	unsigned int nr_pages;
+
+	VM_WARN_ON_ONCE_PAGE(compound && !PageTransHuge(page), page);
+
+	if (likely(!compound))
+		nr_pages = 1;
+	else
+		nr_pages = folio_nr_pages(folio);
+
+	folio_add_file_rmap_range(folio, page, nr_pages, vma, compound);
+}
+
 /**
  * __remove_rmap_finish - common operations when taking down a mapping.
  * @folio:	Folio containing all pages taken down.
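For a caller that has already picked out a sub-range of a file folio, the
calling convention looks like this (an illustrative fragment; start and nr
are whatever the caller computed):

	/* add rmap for pages [start, start + nr) of the folio as PTE maps */
	folio_add_file_rmap_range(folio, folio_page(folio, start), nr,
			vma, false);

Note that nr only counts pages whose mapcount went from zero to one, so
the batched __lruvec_stat update the changelog mentions charges exactly
the newly mapped pages.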
From patchwork Wed Aug 2 15:14:04 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: Yin Fengwei, linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v6 36/38] mm: Convert do_set_pte() to set_pte_range()
Date: Wed, 2 Aug 2023 16:14:04 +0100
Message-Id: <20230802151406.3735276-37-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
From: Yin Fengwei

set_pte_range() allows setting up page table entries for a specific
range. It takes advantage of batched rmap updates for large folios and
now takes care of calling update_mmu_cache_range() itself.

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 Documentation/filesystems/locking.rst |  2 +-
 include/linux/mm.h                    |  3 ++-
 mm/filemap.c                          |  3 +--
 mm/memory.c                           | 37 +++++++++++++++++----------
 4 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 89c5ec9e3392..cd032f2324e8 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -670,7 +670,7 @@ locked. The VM will unlock the page.
 Filesystem should find and map pages associated with offsets from "start_pgoff"
 till "end_pgoff". ->map_pages() is called with the RCU lock held and must
 not block.  If it's not possible to reach a page without blocking,
-filesystem should skip it. Filesystem should use do_set_pte() to setup
+filesystem should skip it. Filesystem should use set_pte_range() to setup
 page table entry. Pointer to entry associated with the page is passed in
 "pte" field in vm_fault structure. Pointers to entries for other offsets
 should be calculated relative to "pte".
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2fbc6c631764..19493d6a2bb8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1346,7 +1346,8 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 }
 
 vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr);
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr);
 
 vm_fault_t finish_fault(struct vm_fault *vmf);
 vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
diff --git a/mm/filemap.c b/mm/filemap.c
index 9dc15af7ab5b..2e7050461a87 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3506,8 +3506,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			ret = VM_FAULT_NOPAGE;
 
 		ref_count++;
-		do_set_pte(vmf, page, addr);
-		update_mmu_cache(vma, addr, vmf->pte);
+		set_pte_range(vmf, folio, page, 1, addr);
 	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
 
 	/* Restore the vmf->pte */
 	vmf->pte -= nr_pages;
diff --git a/mm/memory.c b/mm/memory.c
index e25edd4c24b8..621716109627 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4465,15 +4465,24 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 }
 #endif
 
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
+/**
+ * set_pte_range - Set a range of PTEs to point to pages in a folio.
+ * @vmf: Fault description.
+ * @folio: The folio that contains @page.
+ * @page: The first page to create a PTE for.
+ * @nr: The number of PTEs to create.
+ * @addr: The first address to create a PTE for.
+ */
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	bool uffd_wp = vmf_orig_pte_uffd_wp(vmf);
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
-	bool prefault = vmf->address != addr;
+	bool prefault = !in_range(vmf->address, addr, nr * PAGE_SIZE);
 	pte_t entry;
 
-	flush_icache_page(vma, page);
+	flush_icache_pages(vma, page, nr);
 	entry = mk_pte(page, vma->vm_page_prot);
 
 	if (prefault && arch_wants_old_prefaulted_pte())
@@ -4487,14 +4496,18 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 		entry = pte_mkuffd_wp(entry);
 	/* copy-on-write page */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
-		inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
-		page_add_new_anon_rmap(page, vma, addr);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr);
+		VM_BUG_ON_FOLIO(nr != 1, folio);
+		folio_add_new_anon_rmap(folio, vma, addr);
+		folio_add_lru_vma(folio, vma);
 	} else {
-		inc_mm_counter(vma->vm_mm, mm_counter_file(page));
-		page_add_file_rmap(page, vma, false);
+		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
+		folio_add_file_rmap_range(folio, page, nr, vma, false);
 	}
-	set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
+	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
+
+	/* no need to invalidate: a not-present page won't be cached */
+	update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr);
 }
 
 static bool vmf_pte_changed(struct vm_fault *vmf)
@@ -4562,11 +4575,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 
 	/* Re-check under ptl */
 	if (likely(!vmf_pte_changed(vmf))) {
-		do_set_pte(vmf, page, vmf->address);
-
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		struct folio *folio = page_folio(page);
+		set_pte_range(vmf, folio, page, 1, vmf->address);
 		ret = 0;
 	} else {
 		update_mmu_tlb(vma, vmf->address, vmf->pte);
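The conversion in finish_fault() is the degenerate nr == 1 case and
illustrates the intended invariant: when the batch covers the faulting
address, prefault must be false so that the PTE is not created old. With
the in_range() semantics from patch 1 of this series, the two cases look
like this (illustrative calls, not a quote from the patch):

	/* exactly the faulting page: in_range() is true, prefault is false */
	set_pte_range(vmf, folio, page, 1, vmf->address);

	/*
	 * a pure fault-around batch that excludes vmf->address:
	 * in_range() is false, prefault is true, and
	 * arch_wants_old_prefaulted_pte() may make the PTEs old
	 */
	set_pte_range(vmf, folio, page, nr, addr);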
From patchwork Wed Aug 2 15:14:05 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: Yin Fengwei, linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v6 37/38] filemap: Batch PTE mappings
Date: Wed, 2 Aug 2023 16:14:05 +0100
Message-Id: <20230802151406.3735276-38-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

From: Yin Fengwei

Call set_pte_range() once per contiguous range of the folio instead of
once per page. This batches the updates to mm counters and the rmap.

With a will-it-scale.page_fault3-like app (the file write fault test
changed to a read fault test; an attempt to upstream it to will-it-scale
is at [1]), this got a 15% performance gain on a 48C/96T Cascade Lake
test box with 96 processes running against xfs.

Perf data collected before/after the change:

  18.73%--page_add_file_rmap
          |
           --11.60%--__mod_lruvec_page_state
                     |
                     |--7.40%--__mod_memcg_lruvec_state
                     |          |
                     |           --5.58%--cgroup_rstat_updated
                     |
                      --2.53%--__mod_lruvec_state
                                |
                                 --1.48%--__mod_node_page_state

   9.93%--page_add_file_rmap_range
          |
           --2.67%--__mod_lruvec_page_state
                     |
                     |--1.95%--__mod_memcg_lruvec_state
                     |          |
                     |           --1.57%--cgroup_rstat_updated
                     |
                      --0.61%--__mod_lruvec_state
                                |
                                 --0.54%--__mod_node_page_state

The time spent in __mod_lruvec_page_state() is reduced by about 9
percentage points (11.60% to 2.67% of cycles).
[1]: https://github.com/antonblanchard/will-it-scale/pull/37

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 43 +++++++++++++++++++++++++++++--------------
 1 file changed, 29 insertions(+), 14 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 2e7050461a87..bf6219d9aaac 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3485,11 +3485,12 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	struct file *file = vma->vm_file;
 	struct page *page = folio_page(folio, start);
 	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
-	unsigned int ref_count = 0, count = 0;
+	unsigned int count = 0;
+	pte_t *old_ptep = vmf->pte;
 
 	do {
-		if (PageHWPoison(page))
-			continue;
+		if (PageHWPoison(page + count))
+			goto skip;
 
 		if (mmap_miss > 0)
 			mmap_miss--;
@@ -3499,20 +3500,34 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		 * handled in the specific fault path, and it'll prohibit the
 		 * fault-around logic.
 		 */
-		if (!pte_none(*vmf->pte))
-			continue;
-
-		if (vmf->address == addr)
-			ret = VM_FAULT_NOPAGE;
+		if (!pte_none(vmf->pte[count]))
+			goto skip;
 
-		ref_count++;
-		set_pte_range(vmf, folio, page, 1, addr);
-	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
+		count++;
+		continue;
+skip:
+		if (count) {
+			set_pte_range(vmf, folio, page, count, addr);
+			folio_ref_add(folio, count);
+			if (in_range(vmf->address, addr, count * PAGE_SIZE))
+				ret = VM_FAULT_NOPAGE;
+		}
 
-	/* Restore the vmf->pte */
-	vmf->pte -= nr_pages;
+		count++;
+		page += count;
+		vmf->pte += count;
+		addr += count * PAGE_SIZE;
+		count = 0;
+	} while (--nr_pages > 0);
+
+	if (count) {
+		set_pte_range(vmf, folio, page, count, addr);
+		folio_ref_add(folio, count);
+		if (in_range(vmf->address, addr, count * PAGE_SIZE))
+			ret = VM_FAULT_NOPAGE;
+	}
 
-	folio_ref_add(folio, ref_count);
+	vmf->pte = old_ptep;
 	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
 
 	return ret;
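To see what the skip-based loop above does, here is a tiny stand-alone
simulation of the same run-splitting pattern (ordinary user-space C with
toy names; not kernel code):

#include <stdbool.h>
#include <stdio.h>

/* stand-in for one set_pte_range() + folio_ref_add() pair on a run */
static void map_range(unsigned int first, unsigned int count)
{
	printf("map pages %u..%u (count %u)\n",
	       first, first + count - 1, count);
}

static void batch(const bool *mappable, unsigned int nr_pages)
{
	unsigned int start = 0, count = 0;

	for (unsigned int i = 0; i < nr_pages; i++) {
		if (mappable[i]) {
			count++;
			continue;
		}
		if (count)
			map_range(start, count);	/* flush the run before the gap */
		start = i + 1;
		count = 0;
	}
	if (count)
		map_range(start, count);		/* tail flush after the loop */
}

int main(void)
{
	/* page 3 is HWPoison or already mapped: expect runs 0..2 and 4..7 */
	const bool mappable[8] = { true, true, true, false,
				   true, true, true, true };

	batch(mappable, 8);
	return 0;
}

The kernel version folds the start bookkeeping into the page, vmf->pte and
addr cursors it advances at each gap, but the shape is the same: one call
per contiguous run, plus a tail flush.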
From patchwork Wed Aug 2 15:14:06 2023
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v6 38/38] mm: Call update_mmu_cache_range() in more page fault handling paths
Date: Wed, 2 Aug 2023 16:14:06 +0100
Message-Id: <20230802151406.3735276-39-willy@infradead.org>
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
Pass the vm_fault to the architecture to help it make smarter decisions
about which PTEs to insert into the TLB.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/memory.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 621716109627..236c46e85dc2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2997,7 +2997,7 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 
 		entry = pte_mkyoung(vmf->orig_pte);
 		if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
-			update_mmu_cache(vma, addr, vmf->pte);
+			update_mmu_cache_range(vmf, vma, addr, vmf->pte, 1);
 	}
 
 	/*
@@ -3174,7 +3174,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 	entry = pte_mkyoung(vmf->orig_pte);
 	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 	if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	count_vm_event(PGREUSE);
 }
@@ -3298,7 +3298,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		 */
 		BUG_ON(unshare && pte_write(entry));
 		set_pte_at_notify(mm, vmf->address, vmf->pte, entry);
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 		if (old_folio) {
 			/*
 			 * Only after switching the pte to the new page may
@@ -4181,7 +4181,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	}
 
 	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(vma, vmf->address, vmf->pte);
+	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 unlock:
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -4305,7 +4305,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
 
 	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(vma, vmf->address, vmf->pte);
+	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 unlock:
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -4994,7 +4994,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (writable)
 		pte = pte_mkwrite(pte);
 	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
-	update_mmu_cache(vma, vmf->address, vmf->pte);
+	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	goto out;
 }
@@ -5165,7 +5165,8 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 		entry = pte_mkyoung(entry);
 		if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte,
 				entry, vmf->flags & FAULT_FLAG_WRITE)) {
-			update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
+			update_mmu_cache_range(vmf, vmf->vma, vmf->address,
+					vmf->pte, 1);
 		} else {
 			/* Skip spurious TLB flush for retried page fault */
 			if (vmf->flags & FAULT_FLAG_TRIED)