From patchwork Tue Mar 25 12:15:42 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 14028449
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: guoren@kernel.org
To: arnd@arndb.de, gregkh@linuxfoundation.org, torvalds@linux-foundation.org, paul.walmsley@sifive.com, palmer@dabbelt.com, anup@brainfault.org, atishp@atishpatra.org, oleg@redhat.com, kees@kernel.org, tglx@linutronix.de, will@kernel.org, mark.rutland@arm.com, brauner@kernel.org, akpm@linux-foundation.org, rostedt@goodmis.org, edumazet@google.com, unicorn_wang@outlook.com, inochiama@outlook.com, gaohan@iscas.ac.cn, shihua@iscas.ac.cn, jiawei@iscas.ac.cn, wuwei2016@iscas.ac.cn, drew@pdp7.com, prabhakar.mahadev-lad.rj@bp.renesas.com, ctsai390@andestech.com, wefu@redhat.com, kuba@kernel.org, pabeni@redhat.com, josef@toxicpanda.com, dsterba@suse.com, mingo@redhat.com, peterz@infradead.org, boqun.feng@gmail.com, guoren@kernel.org, xiao.w.wang@intel.com, qingfang.deng@siflower.com.cn, leobras@redhat.com, jszhang@kernel.org, conor.dooley@microchip.com, samuel.holland@sifive.com, yongxuan.wang@sifive.com, luxu.kernel@bytedance.com, david@redhat.com, ruanjinjie@huawei.com, cuiyunhui@bytedance.com, wangkefeng.wang@huawei.com, qiaozhe@iscas.ac.cn
Cc: ardb@kernel.org, ast@kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-mm@kvack.org, linux-crypto@vger.kernel.org, bpf@vger.kernel.org, linux-input@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-serial@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, maple-tree@lists.infradead.org, linux-trace-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-atm-general@lists.sourceforge.net, linux-btrfs@vger.kernel.org, netfilter-devel@vger.kernel.org, coreteam@netfilter.org, linux-nfs@vger.kernel.org,
linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org, linux-media@vger.kernel.org
Subject: [RFC PATCH V3 01/43] rv64ilp32_abi: uapi: Reuse lp64 ABI interface
Date: Tue, 25 Mar 2025 08:15:42 -0400
Message-Id: <20250325121624.523258-2-guoren@kernel.org>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The rv64ilp32 ABI kernel accommodates lp64 ABI userspace and leverages the lp64 ABI Linux interface. Hence, unify the BITS_PER_LONG = 32 memory layout to match BITS_PER_LONG = 64:

#if (__riscv_xlen == 64) && (BITS_PER_LONG == 32)
	union {
		void *datap;
		__u64 __datap;
	};
#else
	void *datap;
#endif

This is inspired by include/uapi/linux/kvm.h:

struct kvm_dirty_log {
	...
	union {
		void __user *dirty_bitmap; /* one bit per page */
		__u64 padding2;
	};
};

This is a suggested solution for __riscv_xlen == 64, but we still need a general way to determine CONFIG_64BIT/32BIT in uapi headers. Any help is welcome.

TODO: Find a general way to replace __riscv_xlen for uapi headers.
Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- include/linux/socket.h | 35 +++++++++++++ include/uapi/asm-generic/siginfo.h | 50 +++++++++++++++++++ include/uapi/asm-generic/signal.h | 35 +++++++++++++ include/uapi/asm-generic/stat.h | 25 ++++++++++ include/uapi/linux/atm.h | 7 +++ include/uapi/linux/atmdev.h | 7 +++ include/uapi/linux/blkpg.h | 7 +++ include/uapi/linux/btrfs.h | 19 +++++++ include/uapi/linux/capi.h | 11 ++++ include/uapi/linux/fs.h | 12 +++++ include/uapi/linux/futex.h | 18 +++++++ include/uapi/linux/if.h | 6 +++ include/uapi/linux/netfilter/x_tables.h | 8 +++ include/uapi/linux/netfilter_ipv4/ip_tables.h | 7 +++ include/uapi/linux/nfs4_mount.h | 14 ++++++ include/uapi/linux/ppp-ioctl.h | 7 +++ include/uapi/linux/sctp.h | 3 ++ include/uapi/linux/sem.h | 38 ++++++++++++++ include/uapi/linux/socket.h | 7 +++ include/uapi/linux/sysctl.h | 32 ++++++++++++ include/uapi/linux/uhid.h | 7 +++ include/uapi/linux/uio.h | 11 ++++ include/uapi/linux/usb/tmc.h | 14 ++++++ include/uapi/linux/usbdevice_fs.h | 50 +++++++++++++++++++ include/uapi/linux/uvcvideo.h | 14 ++++++ include/uapi/linux/vfio.h | 7 +++ include/uapi/linux/videodev2.h | 7 +++ 27 files changed, 458 insertions(+) diff --git a/include/linux/socket.h b/include/linux/socket.h index d18cc47e89bd..a1bc6e2b809e 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -81,12 +81,47 @@ struct msghdr { }; struct user_msghdr { +#if __riscv_xlen == 64 + union { + void __user *msg_name; /* ptr to socket address structure */ + u64 __msg_name; + }; +#else void __user *msg_name; /* ptr to socket address structure */ +#endif int msg_namelen; /* size of socket address structure */ +#if __riscv_xlen == 64 + union { + struct iovec __user *msg_iov; /* scatter/gather array */ + u64 __msg_iov; + }; +#else struct iovec __user *msg_iov; /* scatter/gather array */ +#endif +#if __riscv_xlen == 64 + union { + __kernel_size_t msg_iovlen; /* # elements in msg_iov */ + u64 __msg_iovlen; + }; +#else 
__kernel_size_t msg_iovlen; /* # elements in msg_iov */ +#endif +#if __riscv_xlen == 64 + union { + void __user *msg_control; /* ancillary data */ + u64 __msg_control; + }; +#else void __user *msg_control; /* ancillary data */ +#endif +#if __riscv_xlen == 64 + union { + __kernel_size_t msg_controllen; /* ancillary data buffer length */ + u64 __msg_controllen; + }; +#else __kernel_size_t msg_controllen; /* ancillary data buffer length */ +#endif unsigned int msg_flags; /* flags on received message */ }; diff --git a/include/uapi/asm-generic/siginfo.h b/include/uapi/asm-generic/siginfo.h index 5a1ca43b5fc6..5c87b85d7858 100644 --- a/include/uapi/asm-generic/siginfo.h +++ b/include/uapi/asm-generic/siginfo.h @@ -7,7 +7,14 @@ typedef union sigval { int sival_int; +#if __riscv_xlen == 64 + union { + void __user *sival_ptr; + __u64 __sival_ptr; + }; +#else void __user *sival_ptr; +#endif } sigval_t; #define SI_MAX_SIZE 128 @@ -67,7 +74,14 @@ union __sifields { /* SIGILL, SIGFPE, SIGSEGV, SIGBUS, SIGTRAP, SIGEMT */ struct { +#if __riscv_xlen == 64 + union { + void __user *_addr; /* faulting insn/memory ref. */ + __u64 ___addr; + }; +#else void __user *_addr; /* faulting insn/memory ref. */ +#endif #define __ADDR_BND_PKEY_PAD (__alignof__(void *) < sizeof(short) ? 
\ sizeof(short) : __alignof__(void *)) @@ -82,8 +96,23 @@ union __sifields { /* used when si_code=SEGV_BNDERR */ struct { char _dummy_bnd[__ADDR_BND_PKEY_PAD]; +#if __riscv_xlen == 64 + union { + void __user *_lower; + __u64 ___lower; + }; +#else void __user *_lower; +#endif + +#if __riscv_xlen == 64 + union { + void __user *_upper; + __u64 ___upper; + }; +#else void __user *_upper; +#endif } _addr_bnd; /* used when si_code=SEGV_PKUERR */ struct { @@ -92,7 +121,14 @@ union __sifields { } _addr_pkey; /* used when si_code=TRAP_PERF */ struct { +#if __riscv_xlen == 64 + union { + unsigned long _data; + __u64 ___data; + }; +#else unsigned long _data; +#endif __u32 _type; __u32 _flags; } _perf; @@ -101,13 +137,27 @@ union __sifields { /* SIGPOLL */ struct { +#if __riscv_xlen == 64 + union { + __ARCH_SI_BAND_T _band; /* POLL_IN, POLL_OUT, POLL_MSG */ + __u64 ___band; + }; +#else __ARCH_SI_BAND_T _band; /* POLL_IN, POLL_OUT, POLL_MSG */ +#endif int _fd; } _sigpoll; /* SIGSYS */ struct { +#if __riscv_xlen == 64 + union { + void __user *_call_addr; /* calling user insn */ + __u64 ___call_addr; + }; +#else void __user *_call_addr; /* calling user insn */ +#endif int _syscall; /* triggering system call number */ unsigned int _arch; /* AUDIT_ARCH_* of syscall */ } _sigsys; diff --git a/include/uapi/asm-generic/signal.h b/include/uapi/asm-generic/signal.h index 0eb69dc8e572..efcd31a677ee 100644 --- a/include/uapi/asm-generic/signal.h +++ b/include/uapi/asm-generic/signal.h @@ -73,19 +73,54 @@ typedef unsigned long old_sigset_t; #ifndef __KERNEL__ struct sigaction { +#if __riscv_xlen == 64 + union { + __sighandler_t sa_handler; + __u64 __sa_handler; + }; +#else __sighandler_t sa_handler; +#endif +#if __riscv_xlen == 64 + union { + unsigned long sa_flags; + __u64 __sa_flags; + }; +#else unsigned long sa_flags; +#endif #ifdef SA_RESTORER +#if __riscv_xlen == 64 + union { + __sigrestore_t sa_restorer; + __u64 __sa_restorer; + }; +#else __sigrestore_t sa_restorer; +#endif #endif 
sigset_t sa_mask; /* mask last for extensibility */ }; #endif typedef struct sigaltstack { +#if __riscv_xlen == 64 + union { + void __user *ss_sp; + __u64 __ss_sp; + }; +#else void __user *ss_sp; +#endif int ss_flags; +#if __riscv_xlen == 64 + union { + __kernel_size_t ss_size; + __u64 __ss_size; + }; +#else __kernel_size_t ss_size; +#endif } stack_t; #endif /* __ASSEMBLY__ */ diff --git a/include/uapi/asm-generic/stat.h b/include/uapi/asm-generic/stat.h index 0d962ecd1663..c8908df5213f 100644 --- a/include/uapi/asm-generic/stat.h +++ b/include/uapi/asm-generic/stat.h @@ -21,6 +21,30 @@ #define STAT_HAVE_NSEC 1 +#if __riscv_xlen == 64 +struct stat { + unsigned long long st_dev; /* Device. */ + unsigned long long st_ino; /* File serial number. */ + unsigned int st_mode; /* File mode. */ + unsigned int st_nlink; /* Link count. */ + unsigned int st_uid; /* User ID of the file's owner. */ + unsigned int st_gid; /* Group ID of the file's group. */ + unsigned long long st_rdev; /* Device number, if device. */ + unsigned long long __pad1; + long long st_size; /* Size of file, in bytes. */ + int st_blksize; /* Optimal block size for I/O. */ + int __pad2; + long long st_blocks; /* Number 512-byte blocks allocated. */ + long long st_atime; /* Time of last access. */ + unsigned long long st_atime_nsec; + long long st_mtime; /* Time of last modification. */ + unsigned long long st_mtime_nsec; + long long st_ctime; /* Time of last status change. */ + unsigned long long st_ctime_nsec; + unsigned int __unused4; + unsigned int __unused5; +}; +#else struct stat { unsigned long st_dev; /* Device. */ unsigned long st_ino; /* File serial number. */ @@ -43,6 +67,7 @@ struct stat { unsigned int __unused4; unsigned int __unused5; }; +#endif /* This matches struct stat64 in glibc2.1. Only used for 32 bit. 
*/ #if __BITS_PER_LONG != 64 || defined(__ARCH_WANT_STAT64) diff --git a/include/uapi/linux/atm.h b/include/uapi/linux/atm.h index 95ebdcf4fe88..fe0da6a5e26d 100644 --- a/include/uapi/linux/atm.h +++ b/include/uapi/linux/atm.h @@ -234,7 +234,14 @@ static __inline__ int atmpvc_addr_in_use(struct sockaddr_atmpvc addr) struct atmif_sioc { int number; int length; +#if __riscv_xlen == 64 + union { + void __user *arg; + __u64 __arg; + }; +#else void __user *arg; +#endif }; diff --git a/include/uapi/linux/atmdev.h b/include/uapi/linux/atmdev.h index 20b0215084fc..e0456ed8b698 100644 --- a/include/uapi/linux/atmdev.h +++ b/include/uapi/linux/atmdev.h @@ -155,7 +155,14 @@ struct atm_dev_stats { struct atm_iobuf { int length; +#if __riscv_xlen == 64 + union { + void __user *buffer; + __u64 __buffer; + }; +#else void __user *buffer; +#endif }; /* for ATM_GETCIRANGE / ATM_SETCIRANGE */ diff --git a/include/uapi/linux/blkpg.h b/include/uapi/linux/blkpg.h index d0a64ee97c6d..31f70c9114c2 100644 --- a/include/uapi/linux/blkpg.h +++ b/include/uapi/linux/blkpg.h @@ -12,7 +12,14 @@ struct blkpg_ioctl_arg { int op; int flags; int datalen; +#if __riscv_xlen == 64 + union { + void __user *data; + __u64 __data; + }; +#else void __user *data; +#endif }; /* The subfunctions (for the op field) */ diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h index d3b222d7af24..25a9570cbb1c 100644 --- a/include/uapi/linux/btrfs.h +++ b/include/uapi/linux/btrfs.h @@ -838,7 +838,14 @@ struct btrfs_ioctl_received_subvol_args { struct btrfs_ioctl_send_args { __s64 send_fd; /* in */ __u64 clone_sources_count; /* in */ +#if __riscv_xlen == 64 + union { + __u64 __user *clone_sources; /* in */ + __u64 __pad; + }; +#else __u64 __user *clone_sources; /* in */ +#endif __u64 parent_root; /* in */ __u64 flags; /* in */ __u32 version; /* in */ @@ -959,9 +966,21 @@ struct btrfs_ioctl_encoded_io_args { * increase in the future). This must also be less than or equal to * unencoded_len. 
*/ +#if __riscv_xlen == 64 + union { + const struct iovec __user *iov; + const __u64 __iov; + }; + /* Number of iovecs. */ + union { + unsigned long iovcnt; + __u64 __iovcnt; + }; +#else const struct iovec __user *iov; /* Number of iovecs. */ unsigned long iovcnt; +#endif /* * Offset in file. * diff --git a/include/uapi/linux/capi.h b/include/uapi/linux/capi.h index 31f946f8a88d..dab4bb8e3ebb 100644 --- a/include/uapi/linux/capi.h +++ b/include/uapi/linux/capi.h @@ -77,8 +77,19 @@ typedef struct capi_profile { #define CAPI_GET_PROFILE _IOWR('C',0x09,struct capi_profile) typedef struct capi_manufacturer_cmd { +#if __riscv_xlen == 64 + union { + unsigned long cmd; + __u64 __cmd; + }; + union { + void __user *data; + __u64 __data; + }; +#else unsigned long cmd; void __user *data; +#endif } capi_manufacturer_cmd; /* diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h index 2bbe00cf1248..3ccd123a23a2 100644 --- a/include/uapi/linux/fs.h +++ b/include/uapi/linux/fs.h @@ -122,15 +122,27 @@ struct file_dedupe_range { /* And dynamically-tunable limits and defaults: */ struct files_stat_struct { +#if __riscv_xlen == 64 + unsigned long long nr_files; /* read only */ + unsigned long long nr_free_files; /* read only */ + unsigned long long max_files; /* tunable */ +#else unsigned long nr_files; /* read only */ unsigned long nr_free_files; /* read only */ unsigned long max_files; /* tunable */ +#endif }; struct inodes_stat_t { +#if __riscv_xlen == 64 + long long nr_inodes; + long long nr_unused; + long long dummy[5]; /* padding for sysctl ABI compatibility */ +#else long nr_inodes; long nr_unused; long dummy[5]; /* padding for sysctl ABI compatibility */ +#endif }; diff --git a/include/uapi/linux/futex.h b/include/uapi/linux/futex.h index d2ee625ea189..ae4ee8a66de1 100644 --- a/include/uapi/linux/futex.h +++ b/include/uapi/linux/futex.h @@ -108,7 +108,14 @@ struct futex_waitv { * changed. 
*/ struct robust_list { +#if __riscv_xlen == 64 + union { + struct robust_list __user *next; + u64 __next; + }; +#else struct robust_list __user *next; +#endif }; /* @@ -131,7 +138,11 @@ struct robust_list_head { * we keep userspace flexible, to freely shape its data-structure, * without hardcoding any particular offset into the kernel: */ +#if __riscv_xlen == 64 + long long futex_offset; +#else long futex_offset; +#endif /* * The death of the thread may race with userspace setting @@ -143,7 +154,14 @@ struct robust_list_head { * _might_ have taken. We check the owner TID in any case, * so only truly owned locks will be handled. */ +#if __riscv_xlen == 64 + union { + struct robust_list __user *list_op_pending; + u64 __list_op_pending; + }; +#else struct robust_list __user *list_op_pending; +#endif }; /* diff --git a/include/uapi/linux/if.h b/include/uapi/linux/if.h index 797ba2c1562a..232ab74922fe 100644 --- a/include/uapi/linux/if.h +++ b/include/uapi/linux/if.h @@ -219,6 +219,9 @@ struct if_settings { /* interface settings */ sync_serial_settings __user *sync; te1_settings __user *te1; +#if __riscv_xlen == 64 + __u64 unused; +#endif } ifs_ifsu; }; @@ -288,6 +291,9 @@ struct ifconf { union { char __user *ifcu_buf; struct ifreq __user *ifcu_req; +#if __riscv_xlen == 64 + __u64 unused; +#endif } ifc_ifcu; }; #endif /* __UAPI_DEF_IF_IFCONF */ diff --git a/include/uapi/linux/netfilter/x_tables.h b/include/uapi/linux/netfilter/x_tables.h index 796af83a963a..7e02e34c6fad 100644 --- a/include/uapi/linux/netfilter/x_tables.h +++ b/include/uapi/linux/netfilter/x_tables.h @@ -18,7 +18,11 @@ struct xt_entry_match { __u8 revision; } user; struct { +#if __riscv_xlen == 64 + __u64 match_size; +#else __u16 match_size; +#endif /* Used inside the kernel */ struct xt_match *match; @@ -41,7 +45,11 @@ struct xt_entry_target { __u8 revision; } user; struct { +#if __riscv_xlen == 64 + __u64 target_size; +#else __u16 target_size; +#endif /* Used inside the kernel */ struct xt_target 
*target; diff --git a/include/uapi/linux/netfilter_ipv4/ip_tables.h b/include/uapi/linux/netfilter_ipv4/ip_tables.h index 1485df28b239..3a78f8f7bf5d 100644 --- a/include/uapi/linux/netfilter_ipv4/ip_tables.h +++ b/include/uapi/linux/netfilter_ipv4/ip_tables.h @@ -200,7 +200,14 @@ struct ipt_replace { /* Number of counters (must be equal to current number of entries). */ unsigned int num_counters; /* The old entries' counters. */ +#if __riscv_xlen == 64 + union { + struct xt_counters __user *counters; + __u64 __counters; + }; +#else struct xt_counters __user *counters; +#endif /* The entries (hang off end: not really an array). */ struct ipt_entry entries[]; diff --git a/include/uapi/linux/nfs4_mount.h b/include/uapi/linux/nfs4_mount.h index d20bb869bb99..6ec3cec66b6f 100644 --- a/include/uapi/linux/nfs4_mount.h +++ b/include/uapi/linux/nfs4_mount.h @@ -21,7 +21,14 @@ struct nfs_string { unsigned int len; +#if __riscv_xlen == 64 + union { + const char __user * data; + __u64 __data; + }; +#else const char __user * data; +#endif }; struct nfs4_mount_data { @@ -53,7 +60,14 @@ struct nfs4_mount_data { /* Pseudo-flavours to use for authentication. 
See RFC2623 */ int auth_flavourlen; /* 1 */ +#if __riscv_xlen == 64 + union { + int __user *auth_flavours; /* 1 */ + __u64 __auth_flavours; + }; +#else int __user *auth_flavours; /* 1 */ +#endif }; /* bits in the flags field */ diff --git a/include/uapi/linux/ppp-ioctl.h b/include/uapi/linux/ppp-ioctl.h index 1cc5ce0ae062..8d48eab430c1 100644 --- a/include/uapi/linux/ppp-ioctl.h +++ b/include/uapi/linux/ppp-ioctl.h @@ -59,7 +59,14 @@ struct npioctl { /* Structure describing a CCP configuration option, for PPPIOCSCOMPRESS */ struct ppp_option_data { +#if __riscv_xlen == 64 + union { + __u8 __user *ptr; + __u64 __ptr; + }; +#else __u8 __user *ptr; +#endif __u32 length; int transmit; }; diff --git a/include/uapi/linux/sctp.h b/include/uapi/linux/sctp.h index b7d91d4cf0db..46a06fddcd2f 100644 --- a/include/uapi/linux/sctp.h +++ b/include/uapi/linux/sctp.h @@ -1024,6 +1024,9 @@ struct sctp_getaddrs_old { #else struct sockaddr *addrs; #endif +#if (__riscv_xlen == 64) && (__SIZEOF_LONG__ == 4) + __u32 unused; +#endif }; struct sctp_getaddrs { diff --git a/include/uapi/linux/sem.h b/include/uapi/linux/sem.h index 75aa3b273cd9..de9f441913cd 100644 --- a/include/uapi/linux/sem.h +++ b/include/uapi/linux/sem.h @@ -26,10 +26,29 @@ struct semid_ds { struct ipc_perm sem_perm; /* permissions .. 
see ipc.h */ __kernel_old_time_t sem_otime; /* last semop time */ __kernel_old_time_t sem_ctime; /* create/last semctl() time */ +#if __riscv_xlen == 64 + union { + struct sem *sem_base; /* ptr to first semaphore in array */ + __u64 __sem_base; + }; + union { + struct sem_queue *sem_pending; /* pending operations to be processed */ + __u64 __sem_pending; + }; + union { + struct sem_queue **sem_pending_last; /* last pending operation */ + __u64 __sem_pending_last; + }; + union { + struct sem_undo *undo; /* undo requests on this array */ + __u64 __undo; + }; +#else struct sem *sem_base; /* ptr to first semaphore in array */ struct sem_queue *sem_pending; /* pending operations to be processed */ struct sem_queue **sem_pending_last; /* last pending operation */ struct sem_undo *undo; /* undo requests on this array */ +#endif unsigned short sem_nsems; /* no. of semaphores in array */ }; @@ -46,10 +65,29 @@ struct sembuf { /* arg for semctl system calls. */ union semun { int val; /* value for SETVAL */ +#if __riscv_xlen == 64 + union { + struct semid_ds __user *buf; /* buffer for IPC_STAT & IPC_SET */ + __u64 ___buf; + }; + union { + unsigned short __user *array; /* array for GETALL & SETALL */ + __u64 __array; + }; + union { + struct seminfo __user *__buf; /* buffer for IPC_INFO */ + __u64 ____buf; + }; + union { + void __user *__pad; + __u64 ____pad; + }; +#else struct semid_ds __user *buf; /* buffer for IPC_STAT & IPC_SET */ unsigned short __user *array; /* array for GETALL & SETALL */ struct seminfo __user *__buf; /* buffer for IPC_INFO */ void __user *__pad; +#endif }; struct seminfo { diff --git a/include/uapi/linux/socket.h b/include/uapi/linux/socket.h index d3fcd3b5ec53..5f7a83649395 100644 --- a/include/uapi/linux/socket.h +++ b/include/uapi/linux/socket.h @@ -22,7 +22,14 @@ struct __kernel_sockaddr_storage { /* space to achieve desired size, */ /* _SS_MAXSIZE value minus size of ss_family */ }; +#if __riscv_xlen == 64 + union { + void *__align; /* 
implementation specific desired alignment */ + u64 ___align; + }; +#else void *__align; /* implementation specific desired alignment */ +#endif }; }; diff --git a/include/uapi/linux/sysctl.h b/include/uapi/linux/sysctl.h index 8981f00204db..8ed7b29897f9 100644 --- a/include/uapi/linux/sysctl.h +++ b/include/uapi/linux/sysctl.h @@ -33,13 +33,45 @@ member of a struct __sysctl_args to have? */ struct __sysctl_args { +#if __riscv_xlen == 64 + union { + int __user *name; + __u64 __name; + }; +#else int __user *name; +#endif int nlen; +#if __riscv_xlen == 64 + union { + void __user *oldval; + __u64 __oldval; + }; +#else void __user *oldval; +#endif +#if __riscv_xlen == 64 + union { + size_t __user *oldlenp; + __u64 __oldlenp; + }; +#else size_t __user *oldlenp; +#endif +#if __riscv_xlen == 64 + union { + void __user *newval; + __u64 __newval; + }; +#else void __user *newval; +#endif size_t newlen; +#if __riscv_xlen == 64 + unsigned long long __unused[4]; +#else unsigned long __unused[4]; +#endif }; /* Define sysctl names first */ diff --git a/include/uapi/linux/uhid.h b/include/uapi/linux/uhid.h index cef7534d2d19..4a774dbd3de8 100644 --- a/include/uapi/linux/uhid.h +++ b/include/uapi/linux/uhid.h @@ -130,7 +130,14 @@ struct uhid_create_req { __u8 name[128]; __u8 phys[64]; __u8 uniq[64]; +#if __riscv_xlen == 64 + union { + __u8 __user *rd_data; + __u64 __rd_data; + }; +#else __u8 __user *rd_data; +#endif __u16 rd_size; __u16 bus; diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h index 649739e0c404..27dfd6032dc6 100644 --- a/include/uapi/linux/uio.h +++ b/include/uapi/linux/uio.h @@ -16,8 +16,19 @@ struct iovec { +#if __riscv_xlen == 64 + union { + void __user *iov_base; /* BSD uses caddr_t (1003.1g requires void *) */ + __u64 __iov_base; + }; + union { + __kernel_size_t iov_len; /* Must be size_t (1003.1g) */ + __u64 __iov_len; + }; +#else void __user *iov_base; /* BSD uses caddr_t (1003.1g requires void *) */ __kernel_size_t iov_len; /* Must be size_t 
(1003.1g) */ +#endif }; struct dmabuf_cmsg { diff --git a/include/uapi/linux/usb/tmc.h b/include/uapi/linux/usb/tmc.h index d791cc58a7f0..443ec5356caf 100644 --- a/include/uapi/linux/usb/tmc.h +++ b/include/uapi/linux/usb/tmc.h @@ -51,7 +51,14 @@ struct usbtmc_request { struct usbtmc_ctrlrequest { struct usbtmc_request req; +#if __riscv_xlen == 64 + union { + void __user *data; /* pointer to user space */ + __u64 __data; /* pointer to user space */ + }; +#else void __user *data; /* pointer to user space */ +#endif } __attribute__ ((packed)); struct usbtmc_termchar { @@ -70,7 +77,14 @@ struct usbtmc_message { __u32 transfer_size; /* size of bytes to transfer */ __u32 transferred; /* size of received/written bytes */ __u32 flags; /* bit 0: 0 = synchronous; 1 = asynchronous */ +#if __riscv_xlen == 64 + union { + void __user *message; /* pointer to header and data in user space */ + __u64 __message; + }; +#else void __user *message; /* pointer to header and data in user space */ +#endif } __attribute__ ((packed)); /* Request values for USBTMC driver's ioctl entry point */ diff --git a/include/uapi/linux/usbdevice_fs.h b/include/uapi/linux/usbdevice_fs.h index 74a84e02422a..8c8efef74c3c 100644 --- a/include/uapi/linux/usbdevice_fs.h +++ b/include/uapi/linux/usbdevice_fs.h @@ -44,14 +44,28 @@ struct usbdevfs_ctrltransfer { __u16 wIndex; __u16 wLength; __u32 timeout; /* in milliseconds */ +#if __riscv_xlen == 64 + union { + void __user *data; + __u64 __data; + }; +#else void __user *data; +#endif }; struct usbdevfs_bulktransfer { unsigned int ep; unsigned int len; unsigned int timeout; /* in milliseconds */ +#if __riscv_xlen == 64 + union { + void __user *data; + __u64 __data; + }; +#else void __user *data; +#endif }; struct usbdevfs_setinterface { @@ -61,7 +75,14 @@ struct usbdevfs_setinterface { struct usbdevfs_disconnectsignal { unsigned int signr; +#if __riscv_xlen == 64 + union { + void __user *context; + __u64 __context; + }; +#else void __user *context; +#endif }; 
#define USBDEVFS_MAXDRIVERNAME 255 @@ -119,7 +140,14 @@ struct usbdevfs_urb { unsigned char endpoint; int status; unsigned int flags; +#if __riscv_xlen == 64 + union { + void __user *buffer; + __u64 __buffer; + }; +#else void __user *buffer; +#endif int buffer_length; int actual_length; int start_frame; @@ -130,7 +158,14 @@ struct usbdevfs_urb { int error_count; unsigned int signr; /* signal to be sent on completion, or 0 if none should be sent. */ +#if __riscv_xlen == 64 + union { + void __user *usercontext; + __u64 __usercontext; + }; +#else void __user *usercontext; +#endif struct usbdevfs_iso_packet_desc iso_frame_desc[]; }; @@ -139,7 +174,14 @@ struct usbdevfs_ioctl { int ifno; /* interface 0..N ; negative numbers reserved */ int ioctl_code; /* MUST encode size + direction of data so the * macros in give correct values */ +#if __riscv_xlen == 64 + union { + void __user *data; /* param buffer (in, or out) */ + __u64 __pad; + }; +#else void __user *data; /* param buffer (in, or out) */ +#endif }; /* You can do most things with hubs just through control messages, @@ -195,9 +237,17 @@ struct usbdevfs_streams { #define USBDEVFS_SUBMITURB _IOR('U', 10, struct usbdevfs_urb) #define USBDEVFS_SUBMITURB32 _IOR('U', 10, struct usbdevfs_urb32) #define USBDEVFS_DISCARDURB _IO('U', 11) +#if __riscv_xlen == 64 +#define USBDEVFS_REAPURB _IOW('U', 12, __u64) +#else #define USBDEVFS_REAPURB _IOW('U', 12, void *) +#endif #define USBDEVFS_REAPURB32 _IOW('U', 12, __u32) +#if __riscv_xlen == 64 +#define USBDEVFS_REAPURBNDELAY _IOW('U', 13, __u64) +#else #define USBDEVFS_REAPURBNDELAY _IOW('U', 13, void *) +#endif #define USBDEVFS_REAPURBNDELAY32 _IOW('U', 13, __u32) #define USBDEVFS_DISCSIGNAL _IOR('U', 14, struct usbdevfs_disconnectsignal) #define USBDEVFS_DISCSIGNAL32 _IOR('U', 14, struct usbdevfs_disconnectsignal32) diff --git a/include/uapi/linux/uvcvideo.h b/include/uapi/linux/uvcvideo.h index f86185456dc5..3ccb99039a43 100644 --- a/include/uapi/linux/uvcvideo.h +++ 
b/include/uapi/linux/uvcvideo.h @@ -54,7 +54,14 @@ struct uvc_xu_control_mapping { __u32 v4l2_type; __u32 data_type; +#if __riscv_xlen == 64 + union { + struct uvc_menu_info __user *menu_info; + __u64 __menu_info; + }; +#else struct uvc_menu_info __user *menu_info; +#endif __u32 menu_count; __u32 reserved[4]; @@ -66,7 +73,14 @@ struct uvc_xu_control_query { __u8 query; /* Video Class-Specific Request Code, */ /* defined in linux/usb/video.h A.8. */ __u16 size; +#if __riscv_xlen == 64 + union { + __u8 __user *data; + __u64 __data; + }; +#else __u8 __user *data; +#endif }; #define UVCIOC_CTRL_MAP _IOWR('u', 0x20, struct uvc_xu_control_mapping) diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h index c8dbf8219c4f..0a1dc2a780fb 100644 --- a/include/uapi/linux/vfio.h +++ b/include/uapi/linux/vfio.h @@ -1570,7 +1570,14 @@ struct vfio_iommu_type1_dma_map { struct vfio_bitmap { __u64 pgsize; /* page size for bitmap in bytes */ __u64 size; /* in bytes */ + #if __riscv_xlen == 64 + union { + __u64 __user *data; /* one bit per page */ + __u64 __data; + }; + #else __u64 __user *data; /* one bit per page */ + #endif }; /** diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h index e7c4dce39007..8e5391f07626 100644 --- a/include/uapi/linux/videodev2.h +++ b/include/uapi/linux/videodev2.h @@ -1898,7 +1898,14 @@ struct v4l2_ext_controls { __u32 error_idx; __s32 request_fd; __u32 reserved[1]; +#if __riscv_xlen == 64 + union { + struct v4l2_ext_control *controls; + __u64 __controls; + }; +#else struct v4l2_ext_control *controls; +#endif }; #define V4L2_CTRL_ID_MASK (0x0fffffff)

From patchwork Tue Mar 25 12:15:43 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 14028450
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: guoren@kernel.org
To: arnd@arndb.de, gregkh@linuxfoundation.org,
torvalds@linux-foundation.org, paul.walmsley@sifive.com, palmer@dabbelt.com, anup@brainfault.org, atishp@atishpatra.org, oleg@redhat.com, kees@kernel.org, tglx@linutronix.de, will@kernel.org, mark.rutland@arm.com, brauner@kernel.org, akpm@linux-foundation.org, rostedt@goodmis.org, edumazet@google.com, unicorn_wang@outlook.com, inochiama@outlook.com, gaohan@iscas.ac.cn, shihua@iscas.ac.cn, jiawei@iscas.ac.cn, wuwei2016@iscas.ac.cn, drew@pdp7.com, prabhakar.mahadev-lad.rj@bp.renesas.com, ctsai390@andestech.com, wefu@redhat.com, kuba@kernel.org, pabeni@redhat.com, josef@toxicpanda.com, dsterba@suse.com, mingo@redhat.com, peterz@infradead.org, boqun.feng@gmail.com, guoren@kernel.org, xiao.w.wang@intel.com, qingfang.deng@siflower.com.cn, leobras@redhat.com, jszhang@kernel.org, conor.dooley@microchip.com, samuel.holland@sifive.com, yongxuan.wang@sifive.com, luxu.kernel@bytedance.com, david@redhat.com, ruanjinjie@huawei.com, cuiyunhui@bytedance.com, wangkefeng.wang@huawei.com, qiaozhe@iscas.ac.cn Cc: ardb@kernel.org, ast@kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-mm@kvack.org, linux-crypto@vger.kernel.org, bpf@vger.kernel.org, linux-input@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-serial@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, maple-tree@lists.infradead.org, linux-trace-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-atm-general@lists.sourceforge.net, linux-btrfs@vger.kernel.org, netfilter-devel@vger.kernel.org, coreteam@netfilter.org, linux-nfs@vger.kernel.org, linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org, linux-media@vger.kernel.org Subject: [RFC PATCH V3 02/43] rv64ilp32_abi: riscv: Adapt Makefile and Kconfig Date: Tue, 25 Mar 2025 08:15:43 -0400 Message-Id: <20250325121624.523258-3-guoren@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20250325121624.523258-1-guoren@kernel.org> References: 
<20250325121624.523258-1-guoren@kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

From: "Guo Ren (Alibaba DAMO Academy)"

Extend the ARCH_RV64I base with ABI_RV64ILP32 to compile the Linux kernel itself as ILP32 when CONFIG_64BIT=y, minimizing the kernel's memory footprint and cache occupation. The 'cmd_cpp_lds_S' rule in scripts/Makefile.build uses cpp_flags for linker script generation, so add "-mabi=xxx" to KBUILD_CPPFLAGS, just as is already done for KBUILD_CFLAGS and KBUILD_AFLAGS.

cmd_cpp_lds_S = $(CPP) $(cpp_flags) -P -U$(ARCH)

The rv64ilp32 ABI reuses an rv64 toolchain whose default "-mabi=" is lp64, so add "-mabi=ilp32" to correct it. Add a config entry backed by the rv64ilp32.config fragment in the Makefile: - rv64ilp32_defconfig

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/Kconfig | 12 ++++++++++--
 arch/riscv/Makefile | 17 +++++++++++++++++
 arch/riscv/configs/rv64ilp32.config | 1 +
 3 files changed, 28 insertions(+), 2 deletions(-)
 create mode 100644 arch/riscv/configs/rv64ilp32.config

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 7612c52e9b1e..da2111b0111c 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -213,7 +213,7 @@ config RISCV select TRACE_IRQFLAGS_SUPPORT select UACCESS_MEMCPY if !MMU select USER_STACKTRACE_SUPPORT - select ZONE_DMA32 if 64BIT + select ZONE_DMA32 if 64BIT && !ABI_RV64ILP32 config RUSTC_SUPPORTS_RISCV def_bool y @@ -298,6 +298,7 @@ config PAGE_OFFSET config KASAN_SHADOW_OFFSET hex depends on KASAN_GENERIC + default 0x70000000 if ABI_RV64ILP32 default 0xdfffffff00000000 if 64BIT default 0xffffffff if 32BIT @@ -341,7 +342,7 @@ config FIX_EARLYCON_MEM config ILLEGAL_POINTER_VALUE hex - default 0 if 32BIT + default 0 if 32BIT || ABI_RV64ILP32 default 0xdead000000000000 if 64BIT config PGTABLE_LEVELS @@ -418,6 +419,13 @@ config ARCH_RV64I endchoice +config ABI_RV64ILP32 + bool "ABI RV64ILP32" + depends on 64BIT + help + Compile the Linux kernel itself against the RV64ILP32 ABI of the + RISC-V psABI specification. + # We must be able to map all physical memory into the kernel, but the compiler # is still a bit more efficient when generating code if it's setup in a manner # such that it can only map 2GiB of memory.
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile index 13fbc0f94238..76db01020a22 100644 --- a/arch/riscv/Makefile +++ b/arch/riscv/Makefile @@ -30,10 +30,21 @@ ifeq ($(CONFIG_ARCH_RV64I),y) BITS := 64 UTS_MACHINE := riscv64 +ifeq ($(CONFIG_ABI_RV64ILP32),y) + KBUILD_CPPFLAGS += -mabi=ilp32 + + KBUILD_CFLAGS += -mabi=ilp32 + KBUILD_AFLAGS += -mabi=ilp32 + + KBUILD_LDFLAGS += -melf32lriscv +else + KBUILD_CPPFLAGS += -mabi=lp64 + KBUILD_CFLAGS += -mabi=lp64 KBUILD_AFLAGS += -mabi=lp64 KBUILD_LDFLAGS += -melf64lriscv +endif KBUILD_RUSTFLAGS += -Ctarget-cpu=generic-rv64 --target=riscv64imac-unknown-none-elf \ -Cno-redzone @@ -41,6 +52,8 @@ else BITS := 32 UTS_MACHINE := riscv32 + KBUILD_CPPFLAGS += -mabi=ilp32 + KBUILD_CFLAGS += -mabi=ilp32 KBUILD_AFLAGS += -mabi=ilp32 KBUILD_LDFLAGS += -melf32lriscv @@ -224,6 +237,10 @@ PHONY += rv32_nommu_virt_defconfig rv32_nommu_virt_defconfig: $(Q)$(MAKE) -f $(srctree)/Makefile nommu_virt_defconfig 32-bit.config +PHONY += rv64ilp32_defconfig +rv64ilp32_defconfig: + $(Q)$(MAKE) -f $(srctree)/Makefile defconfig rv64ilp32.config + define archhelp echo ' Image - Uncompressed kernel image (arch/riscv/boot/Image)' echo ' Image.gz - Compressed kernel image (arch/riscv/boot/Image.gz)'
diff --git a/arch/riscv/configs/rv64ilp32.config b/arch/riscv/configs/rv64ilp32.config new file mode 100644 index 000000000000..07536586e169 --- /dev/null +++ b/arch/riscv/configs/rv64ilp32.config @@ -0,0 +1 @@ +CONFIG_ABI_RV64ILP32=y
From patchwork Tue Mar 25 12:15:44 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 14028451
Received: from
smtp.kernel.org by smtp.subspace.kernel.org (Postfix); Tue, 25 Mar 2025 12:17:32 +0000 (UTC)
From: guoren@kernel.org
Subject: [RFC PATCH V3 03/43] rv64ilp32_abi: riscv: Adapt ULL & UL definition
Date: Tue, 25 Mar 2025 08:15:44 -0400
Message-Id:
<20250325121624.523258-4-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

On 64-bit systems using the ILP32 ABI, BITS_PER_LONG is 32, so the register width no longer matches the width of UL constants. Correct the affected constants to ULL.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/cmpxchg.h | 4 +-
 arch/riscv/include/asm/csr.h | 212 ++++++++++++++++---------------
 arch/riscv/net/bpf_jit_comp64.c | 6 +-
 3 files changed, 115 insertions(+), 107 deletions(-)

diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h index 4cadc56220fe..938d50194dba 100644 --- a/arch/riscv/include/asm/cmpxchg.h +++ b/arch/riscv/include/asm/cmpxchg.h @@ -29,7 +29,7 @@ } else { \ u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3); \ ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \ - ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \ + ulong __mask = GENMASK_ULL(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \ << __s; \ ulong __newx = (ulong)(n) << __s; \ ulong __retx; \ @@ -145,7 +145,7 @@ } else { \ u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3); \ ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \ - ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \ + ulong __mask = GENMASK_ULL(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \ << __s; \ ulong __newx = (ulong)(n) << __s; \ ulong __oldx = (ulong)(o) << __s; \
diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h index 6fed42e37705..25f7c5afea3a 100644 --- a/arch/riscv/include/asm/csr.h +++ b/arch/riscv/include/asm/csr.h @@ -9,74 +9,82 @@ #include #include +#if __riscv_xlen == 64 +#define UXL ULL +#define GENMASK_UXL GENMASK_ULL +#else +#define UXL UL +#define GENMASK_UXL GENMASK +#endif + /* Status
register flags */ -#define SR_SIE _AC(0x00000002, UL) /* Supervisor Interrupt Enable */ -#define SR_MIE _AC(0x00000008, UL) /* Machine Interrupt Enable */ -#define SR_SPIE _AC(0x00000020, UL) /* Previous Supervisor IE */ -#define SR_MPIE _AC(0x00000080, UL) /* Previous Machine IE */ -#define SR_SPP _AC(0x00000100, UL) /* Previously Supervisor */ -#define SR_MPP _AC(0x00001800, UL) /* Previously Machine */ -#define SR_SUM _AC(0x00040000, UL) /* Supervisor User Memory Access */ - -#define SR_FS _AC(0x00006000, UL) /* Floating-point Status */ -#define SR_FS_OFF _AC(0x00000000, UL) -#define SR_FS_INITIAL _AC(0x00002000, UL) -#define SR_FS_CLEAN _AC(0x00004000, UL) -#define SR_FS_DIRTY _AC(0x00006000, UL) - -#define SR_VS _AC(0x00000600, UL) /* Vector Status */ -#define SR_VS_OFF _AC(0x00000000, UL) -#define SR_VS_INITIAL _AC(0x00000200, UL) -#define SR_VS_CLEAN _AC(0x00000400, UL) -#define SR_VS_DIRTY _AC(0x00000600, UL) - -#define SR_VS_THEAD _AC(0x01800000, UL) /* xtheadvector Status */ -#define SR_VS_OFF_THEAD _AC(0x00000000, UL) -#define SR_VS_INITIAL_THEAD _AC(0x00800000, UL) -#define SR_VS_CLEAN_THEAD _AC(0x01000000, UL) -#define SR_VS_DIRTY_THEAD _AC(0x01800000, UL) - -#define SR_XS _AC(0x00018000, UL) /* Extension Status */ -#define SR_XS_OFF _AC(0x00000000, UL) -#define SR_XS_INITIAL _AC(0x00008000, UL) -#define SR_XS_CLEAN _AC(0x00010000, UL) -#define SR_XS_DIRTY _AC(0x00018000, UL) +#define SR_SIE _AC(0x00000002, UXL) /* Supervisor Interrupt Enable */ +#define SR_MIE _AC(0x00000008, UXL) /* Machine Interrupt Enable */ +#define SR_SPIE _AC(0x00000020, UXL) /* Previous Supervisor IE */ +#define SR_MPIE _AC(0x00000080, UXL) /* Previous Machine IE */ +#define SR_SPP _AC(0x00000100, UXL) /* Previously Supervisor */ +#define SR_MPP _AC(0x00001800, UXL) /* Previously Machine */ +#define SR_SUM _AC(0x00040000, UXL) /* Supervisor User Memory Access */ + +#define SR_FS _AC(0x00006000, UXL) /* Floating-point Status */ +#define SR_FS_OFF _AC(0x00000000, UXL) +#define 
SR_FS_INITIAL _AC(0x00002000, UXL) +#define SR_FS_CLEAN _AC(0x00004000, UXL) +#define SR_FS_DIRTY _AC(0x00006000, UXL) + +#define SR_VS _AC(0x00000600, UXL) /* Vector Status */ +#define SR_VS_OFF _AC(0x00000000, UXL) +#define SR_VS_INITIAL _AC(0x00000200, UXL) +#define SR_VS_CLEAN _AC(0x00000400, UXL) +#define SR_VS_DIRTY _AC(0x00000600, UXL) + +#define SR_VS_THEAD _AC(0x01800000, UXL) /* xtheadvector Status */ +#define SR_VS_OFF_THEAD _AC(0x00000000, UXL) +#define SR_VS_INITIAL_THEAD _AC(0x00800000, UXL) +#define SR_VS_CLEAN_THEAD _AC(0x01000000, UXL) +#define SR_VS_DIRTY_THEAD _AC(0x01800000, UXL) + +#define SR_XS _AC(0x00018000, UXL) /* Extension Status */ +#define SR_XS_OFF _AC(0x00000000, UXL) +#define SR_XS_INITIAL _AC(0x00008000, UXL) +#define SR_XS_CLEAN _AC(0x00010000, UXL) +#define SR_XS_DIRTY _AC(0x00018000, UXL) #define SR_FS_VS (SR_FS | SR_VS) /* Vector and Floating-Point Unit */ -#ifndef CONFIG_64BIT -#define SR_SD _AC(0x80000000, UL) /* FS/VS/XS dirty */ +#if __riscv_xlen == 32 +#define SR_SD _AC(0x80000000, UXL) /* FS/VS/XS dirty */ #else -#define SR_SD _AC(0x8000000000000000, UL) /* FS/VS/XS dirty */ +#define SR_SD _AC(0x8000000000000000, UXL) /* FS/VS/XS dirty */ #endif -#ifdef CONFIG_64BIT -#define SR_UXL _AC(0x300000000, UL) /* XLEN mask for U-mode */ -#define SR_UXL_32 _AC(0x100000000, UL) /* XLEN = 32 for U-mode */ -#define SR_UXL_64 _AC(0x200000000, UL) /* XLEN = 64 for U-mode */ +#if __riscv_xlen == 64 +#define SR_UXL _AC(0x300000000, UXL) /* XLEN mask for U-mode */ +#define SR_UXL_32 _AC(0x100000000, UXL) /* XLEN = 32 for U-mode */ +#define SR_UXL_64 _AC(0x200000000, UXL) /* XLEN = 64 for U-mode */ #endif /* SATP flags */ -#ifndef CONFIG_64BIT -#define SATP_PPN _AC(0x003FFFFF, UL) -#define SATP_MODE_32 _AC(0x80000000, UL) +#if __riscv_xlen == 32 +#define SATP_PPN _AC(0x003FFFFF, UXL) +#define SATP_MODE_32 _AC(0x80000000, UXL) #define SATP_MODE_SHIFT 31 #define SATP_ASID_BITS 9 #define SATP_ASID_SHIFT 22 -#define SATP_ASID_MASK _AC(0x1FF, 
UL) +#define SATP_ASID_MASK _AC(0x1FF, UXL) #else -#define SATP_PPN _AC(0x00000FFFFFFFFFFF, UL) -#define SATP_MODE_39 _AC(0x8000000000000000, UL) -#define SATP_MODE_48 _AC(0x9000000000000000, UL) -#define SATP_MODE_57 _AC(0xa000000000000000, UL) +#define SATP_PPN _AC(0x00000FFFFFFFFFFF, UXL) +#define SATP_MODE_39 _AC(0x8000000000000000, UXL) +#define SATP_MODE_48 _AC(0x9000000000000000, UXL) +#define SATP_MODE_57 _AC(0xa000000000000000, UXL) #define SATP_MODE_SHIFT 60 #define SATP_ASID_BITS 16 #define SATP_ASID_SHIFT 44 -#define SATP_ASID_MASK _AC(0xFFFF, UL) +#define SATP_ASID_MASK _AC(0xFFFF, UXL) #endif /* Exception cause high bit - is an interrupt if set */ -#define CAUSE_IRQ_FLAG (_AC(1, UL) << (__riscv_xlen - 1)) +#define CAUSE_IRQ_FLAG (_AC(1, UXL) << (__riscv_xlen - 1)) /* Interrupt causes (minus the high bit) */ #define IRQ_S_SOFT 1 @@ -91,7 +99,7 @@ #define IRQ_S_GEXT 12 #define IRQ_PMU_OVF 13 #define IRQ_LOCAL_MAX (IRQ_PMU_OVF + 1) -#define IRQ_LOCAL_MASK GENMASK((IRQ_LOCAL_MAX - 1), 0) +#define IRQ_LOCAL_MASK GENMASK_UXL((IRQ_LOCAL_MAX - 1), 0) /* Exception causes */ #define EXC_INST_MISALIGNED 0 @@ -124,45 +132,45 @@ #define PMP_L 0x80 /* HSTATUS flags */ -#ifdef CONFIG_64BIT -#define HSTATUS_HUPMM _AC(0x3000000000000, UL) -#define HSTATUS_HUPMM_PMLEN_0 _AC(0x0000000000000, UL) -#define HSTATUS_HUPMM_PMLEN_7 _AC(0x2000000000000, UL) -#define HSTATUS_HUPMM_PMLEN_16 _AC(0x3000000000000, UL) -#define HSTATUS_VSXL _AC(0x300000000, UL) +#if __riscv_xlen == 64 +#define HSTATUS_HUPMM _AC(0x3000000000000, UXL) +#define HSTATUS_HUPMM_PMLEN_0 _AC(0x0000000000000, UXL) +#define HSTATUS_HUPMM_PMLEN_7 _AC(0x2000000000000, UXL) +#define HSTATUS_HUPMM_PMLEN_16 _AC(0x3000000000000, UXL) +#define HSTATUS_VSXL _AC(0x300000000, UXL) #define HSTATUS_VSXL_SHIFT 32 #endif -#define HSTATUS_VTSR _AC(0x00400000, UL) -#define HSTATUS_VTW _AC(0x00200000, UL) -#define HSTATUS_VTVM _AC(0x00100000, UL) -#define HSTATUS_VGEIN _AC(0x0003f000, UL) +#define HSTATUS_VTSR _AC(0x00400000, 
UXL) +#define HSTATUS_VTW _AC(0x00200000, UXL) +#define HSTATUS_VTVM _AC(0x00100000, UXL) +#define HSTATUS_VGEIN _AC(0x0003f000, UXL) #define HSTATUS_VGEIN_SHIFT 12 -#define HSTATUS_HU _AC(0x00000200, UL) -#define HSTATUS_SPVP _AC(0x00000100, UL) -#define HSTATUS_SPV _AC(0x00000080, UL) -#define HSTATUS_GVA _AC(0x00000040, UL) -#define HSTATUS_VSBE _AC(0x00000020, UL) +#define HSTATUS_HU _AC(0x00000200, UXL) +#define HSTATUS_SPVP _AC(0x00000100, UXL) +#define HSTATUS_SPV _AC(0x00000080, UXL) +#define HSTATUS_GVA _AC(0x00000040, UXL) +#define HSTATUS_VSBE _AC(0x00000020, UXL) /* HGATP flags */ -#define HGATP_MODE_OFF _AC(0, UL) -#define HGATP_MODE_SV32X4 _AC(1, UL) -#define HGATP_MODE_SV39X4 _AC(8, UL) -#define HGATP_MODE_SV48X4 _AC(9, UL) -#define HGATP_MODE_SV57X4 _AC(10, UL) +#define HGATP_MODE_OFF _AC(0, UXL) +#define HGATP_MODE_SV32X4 _AC(1, UXL) +#define HGATP_MODE_SV39X4 _AC(8, UXL) +#define HGATP_MODE_SV48X4 _AC(9, UXL) +#define HGATP_MODE_SV57X4 _AC(10, UXL) #define HGATP32_MODE_SHIFT 31 #define HGATP32_VMID_SHIFT 22 -#define HGATP32_VMID GENMASK(28, 22) -#define HGATP32_PPN GENMASK(21, 0) +#define HGATP32_VMID GENMASK_UXL(28, 22) +#define HGATP32_PPN GENMASK_UXL(21, 0) #define HGATP64_MODE_SHIFT 60 #define HGATP64_VMID_SHIFT 44 -#define HGATP64_VMID GENMASK(57, 44) -#define HGATP64_PPN GENMASK(43, 0) +#define HGATP64_VMID GENMASK_UXL(57, 44) +#define HGATP64_PPN GENMASK_UXL(43, 0) #define HGATP_PAGE_SHIFT 12 -#ifdef CONFIG_64BIT +#if __riscv_xlen == 64 #define HGATP_PPN HGATP64_PPN #define HGATP_VMID_SHIFT HGATP64_VMID_SHIFT #define HGATP_VMID HGATP64_VMID @@ -176,31 +184,31 @@ /* VSIP & HVIP relation */ #define VSIP_TO_HVIP_SHIFT (IRQ_VS_SOFT - IRQ_S_SOFT) -#define VSIP_VALID_MASK ((_AC(1, UL) << IRQ_S_SOFT) | \ - (_AC(1, UL) << IRQ_S_TIMER) | \ - (_AC(1, UL) << IRQ_S_EXT) | \ - (_AC(1, UL) << IRQ_PMU_OVF)) +#define VSIP_VALID_MASK ((_AC(1, UXL) << IRQ_S_SOFT) | \ + (_AC(1, UXL) << IRQ_S_TIMER) | \ + (_AC(1, UXL) << IRQ_S_EXT) | \ + (_AC(1, UXL) << 
IRQ_PMU_OVF)) /* AIA CSR bits */ #define TOPI_IID_SHIFT 16 -#define TOPI_IID_MASK GENMASK(11, 0) -#define TOPI_IPRIO_MASK GENMASK(7, 0) +#define TOPI_IID_MASK GENMASK_UXL(11, 0) +#define TOPI_IPRIO_MASK GENMASK_UXL(7, 0) #define TOPI_IPRIO_BITS 8 #define TOPEI_ID_SHIFT 16 -#define TOPEI_ID_MASK GENMASK(10, 0) -#define TOPEI_PRIO_MASK GENMASK(10, 0) +#define TOPEI_ID_MASK GENMASK_UXL(10, 0) +#define TOPEI_PRIO_MASK GENMASK_UXL(10, 0) #define ISELECT_IPRIO0 0x30 #define ISELECT_IPRIO15 0x3f -#define ISELECT_MASK GENMASK(8, 0) +#define ISELECT_MASK GENMASK_UXL(8, 0) #define HVICTL_VTI BIT(30) -#define HVICTL_IID GENMASK(27, 16) +#define HVICTL_IID GENMASK_UXL(27, 16) #define HVICTL_IID_SHIFT 16 #define HVICTL_DPR BIT(9) #define HVICTL_IPRIOM BIT(8) -#define HVICTL_IPRIO GENMASK(7, 0) +#define HVICTL_IPRIO GENMASK_UXL(7, 0) /* xENVCFG flags */ #define ENVCFG_STCE (_AC(1, ULL) << 63) @@ -210,14 +218,14 @@ #define ENVCFG_PMM_PMLEN_0 (_AC(0x0, ULL) << 32) #define ENVCFG_PMM_PMLEN_7 (_AC(0x2, ULL) << 32) #define ENVCFG_PMM_PMLEN_16 (_AC(0x3, ULL) << 32) -#define ENVCFG_CBZE (_AC(1, UL) << 7) -#define ENVCFG_CBCFE (_AC(1, UL) << 6) +#define ENVCFG_CBZE (_AC(1, UXL) << 7) +#define ENVCFG_CBCFE (_AC(1, UXL) << 6) #define ENVCFG_CBIE_SHIFT 4 -#define ENVCFG_CBIE (_AC(0x3, UL) << ENVCFG_CBIE_SHIFT) -#define ENVCFG_CBIE_ILL _AC(0x0, UL) -#define ENVCFG_CBIE_FLUSH _AC(0x1, UL) -#define ENVCFG_CBIE_INV _AC(0x3, UL) -#define ENVCFG_FIOM _AC(0x1, UL) +#define ENVCFG_CBIE (_AC(0x3, UXL) << ENVCFG_CBIE_SHIFT) +#define ENVCFG_CBIE_ILL _AC(0x0, UXL) +#define ENVCFG_CBIE_FLUSH _AC(0x1, UXL) +#define ENVCFG_CBIE_INV _AC(0x3, UXL) +#define ENVCFG_FIOM _AC(0x1, UXL) /* Smstateen bits */ #define SMSTATEEN0_AIA_IMSIC_SHIFT 58 @@ -446,12 +454,12 @@ /* Scalar Crypto Extension - Entropy */ #define CSR_SEED 0x015 -#define SEED_OPST_MASK _AC(0xC0000000, UL) -#define SEED_OPST_BIST _AC(0x00000000, UL) -#define SEED_OPST_WAIT _AC(0x40000000, UL) -#define SEED_OPST_ES16 _AC(0x80000000, UL) -#define 
SEED_OPST_DEAD _AC(0xC0000000, UL) -#define SEED_ENTROPY_MASK _AC(0xFFFF, UL) +#define SEED_OPST_MASK _AC(0xC0000000, UXL) +#define SEED_OPST_BIST _AC(0x00000000, UXL) +#define SEED_OPST_WAIT _AC(0x40000000, UXL) +#define SEED_OPST_ES16 _AC(0x80000000, UXL) +#define SEED_OPST_DEAD _AC(0xC0000000, UXL) +#define SEED_ENTROPY_MASK _AC(0xFFFF, UXL) #ifdef CONFIG_RISCV_M_MODE # define CSR_STATUS CSR_MSTATUS @@ -504,14 +512,14 @@ # define RV_IRQ_TIMER IRQ_S_TIMER # define RV_IRQ_EXT IRQ_S_EXT # define RV_IRQ_PMU IRQ_PMU_OVF -# define SIP_LCOFIP (_AC(0x1, UL) << IRQ_PMU_OVF) +# define SIP_LCOFIP (_AC(0x1, UXL) << IRQ_PMU_OVF) #endif /* !CONFIG_RISCV_M_MODE */ /* IE/IP (Supervisor/Machine Interrupt Enable/Pending) flags */ -#define IE_SIE (_AC(0x1, UL) << RV_IRQ_SOFT) -#define IE_TIE (_AC(0x1, UL) << RV_IRQ_TIMER) -#define IE_EIE (_AC(0x1, UL) << RV_IRQ_EXT) +#define IE_SIE (_AC(0x1, UXL) << RV_IRQ_SOFT) +#define IE_TIE (_AC(0x1, UXL) << RV_IRQ_TIMER) +#define IE_EIE (_AC(0x1, UXL) << RV_IRQ_EXT) #ifndef __ASSEMBLY__ diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c index ca60db75199d..4f958722ca41 100644 --- a/arch/riscv/net/bpf_jit_comp64.c +++ b/arch/riscv/net/bpf_jit_comp64.c @@ -136,7 +136,7 @@ static u8 rv_tail_call_reg(struct rv_jit_context *ctx) static bool is_32b_int(s64 val) { - return -(1L << 31) <= val && val < (1L << 31); + return -(1LL << 31) <= val && val < (1LL << 31); } static bool in_auipc_jalr_range(s64 val) @@ -145,8 +145,8 @@ static bool in_auipc_jalr_range(s64 val) * auipc+jalr can reach any signed PC-relative offset in the range * [-2^31 - 2^11, 2^31 - 2^11). 
*/ - return (-(1L << 31) - (1L << 11)) <= val && - val < ((1L << 31) - (1L << 11)); + return (-(1LL << 31) - (1LL << 11)) <= val && + val < ((1LL << 31) - (1LL << 11)); } /* Modify rd pointer to alternate reg to avoid corrupting original reg */
From patchwork Tue Mar 25 12:15:45 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 14028452
From: guoren@kernel.org
Subject: [RFC PATCH V3 04/43] rv64ilp32_abi: riscv: Introduce xlen_t to adapt __riscv_xlen != BITS_PER_LONG
Date: Tue, 25 Mar 2025 08:15:45 -0400
Message-Id: <20250325121624.523258-5-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

Under the RV64ILP32 ABI, BITS_PER_LONG cannot be used to determine XLEN, because it is 32 even when CONFIG_64BIT=y. Introduce xlen_t and use CONFIG_64BIT or __riscv_xlen == 64 to determine register width.
Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- arch/riscv/include/asm/checksum.h | 4 ++ arch/riscv/include/asm/csr.h | 15 ++-- arch/riscv/include/asm/processor.h | 10 +-- arch/riscv/include/asm/ptrace.h | 92 ++++++++++++------------ arch/riscv/include/asm/sparsemem.h | 2 +- arch/riscv/include/asm/switch_to.h | 4 +- arch/riscv/include/asm/thread_info.h | 2 +- arch/riscv/include/asm/timex.h | 4 +- arch/riscv/include/uapi/asm/elf.h | 4 +- arch/riscv/include/uapi/asm/ptrace.h | 97 ++++++++++++++------------ arch/riscv/include/uapi/asm/ucontext.h | 7 +- arch/riscv/include/uapi/asm/unistd.h | 2 +- arch/riscv/kernel/compat_signal.c | 4 +- arch/riscv/kernel/process.c | 8 +-- arch/riscv/kernel/signal.c | 4 +- arch/riscv/kernel/traps.c | 4 +- arch/riscv/kernel/vector.c | 2 +- arch/riscv/mm/fault.c | 2 +- 18 files changed, 143 insertions(+), 124 deletions(-) diff --git a/arch/riscv/include/asm/checksum.h b/arch/riscv/include/asm/checksum.h index 88e6f1499e88..e887f0983b69 100644 --- a/arch/riscv/include/asm/checksum.h +++ b/arch/riscv/include/asm/checksum.h @@ -36,7 +36,11 @@ __sum16 csum_ipv6_magic(const struct in6_addr *saddr, */ static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl) { +#if __riscv_xlen == 64 + unsigned long long csum = 0; +#else unsigned long csum = 0; +#endif int pos = 0; do { diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h index 25f7c5afea3a..4339600e3c56 100644 --- a/arch/riscv/include/asm/csr.h +++ b/arch/riscv/include/asm/csr.h @@ -522,10 +522,11 @@ #define IE_EIE (_AC(0x1, UXL) << RV_IRQ_EXT) #ifndef __ASSEMBLY__ +#include #define csr_swap(csr, val) \ ({ \ - unsigned long __v = (unsigned long)(val); \ + xlen_t __v = (xlen_t)(val); \ __asm__ __volatile__ ("csrrw %0, " __ASM_STR(csr) ", %1"\ : "=r" (__v) : "rK" (__v) \ : "memory"); \ @@ -534,7 +535,7 @@ #define csr_read(csr) \ ({ \ - register unsigned long __v; \ + register xlen_t __v; \ __asm__ __volatile__ ("csrr %0, " __ASM_STR(csr) \ : "=r" (__v) : \ : 
"memory"); \ @@ -543,7 +544,7 @@ #define csr_write(csr, val) \ ({ \ - unsigned long __v = (unsigned long)(val); \ + xlen_t __v = (xlen_t)(val); \ __asm__ __volatile__ ("csrw " __ASM_STR(csr) ", %0" \ : : "rK" (__v) \ : "memory"); \ @@ -551,7 +552,7 @@ #define csr_read_set(csr, val) \ ({ \ - unsigned long __v = (unsigned long)(val); \ + xlen_t __v = (xlen_t)(val); \ __asm__ __volatile__ ("csrrs %0, " __ASM_STR(csr) ", %1"\ : "=r" (__v) : "rK" (__v) \ : "memory"); \ @@ -560,7 +561,7 @@ #define csr_set(csr, val) \ ({ \ - unsigned long __v = (unsigned long)(val); \ + xlen_t __v = (xlen_t)(val); \ __asm__ __volatile__ ("csrs " __ASM_STR(csr) ", %0" \ : : "rK" (__v) \ : "memory"); \ @@ -568,7 +569,7 @@ #define csr_read_clear(csr, val) \ ({ \ - unsigned long __v = (unsigned long)(val); \ + xlen_t __v = (xlen_t)(val); \ __asm__ __volatile__ ("csrrc %0, " __ASM_STR(csr) ", %1"\ : "=r" (__v) : "rK" (__v) \ : "memory"); \ @@ -577,7 +578,7 @@ #define csr_clear(csr, val) \ ({ \ - unsigned long __v = (unsigned long)(val); \ + xlen_t __v = (xlen_t)(val); \ __asm__ __volatile__ ("csrc " __ASM_STR(csr) ", %0" \ : : "rK" (__v) \ : "memory"); \ diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h index 5f56eb9d114a..ca57a650c3d2 100644 --- a/arch/riscv/include/asm/processor.h +++ b/arch/riscv/include/asm/processor.h @@ -45,7 +45,7 @@ * This decides where the kernel will search for a free chunk of vm * space during mmap's. 
*/ -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 #define TASK_UNMAPPED_BASE PAGE_ALIGN((UL(1) << MMAP_MIN_VA_BITS) / 3) #else #define TASK_UNMAPPED_BASE PAGE_ALIGN(TASK_SIZE / 3) @@ -99,10 +99,10 @@ struct thread_struct { /* Callee-saved registers */ unsigned long ra; unsigned long sp; /* Kernel mode stack */ - unsigned long s[12]; /* s[0]: frame pointer */ + xlen_t s[12]; /* s[0]: frame pointer */ struct __riscv_d_ext_state fstate; unsigned long bad_cause; - unsigned long envcfg; + xlen_t envcfg; u32 riscv_v_flags; u32 vstate_ctrl; struct __riscv_v_ext_state vstate; @@ -133,8 +133,8 @@ static inline void arch_thread_struct_whitelist(unsigned long *offset, ((struct pt_regs *)(task_stack_page(tsk) + THREAD_SIZE \ - ALIGN(sizeof(struct pt_regs), STACK_ALIGN))) -#define KSTK_EIP(tsk) (task_pt_regs(tsk)->epc) -#define KSTK_ESP(tsk) (task_pt_regs(tsk)->sp) +#define KSTK_EIP(tsk) (ulong)(task_pt_regs(tsk)->epc) +#define KSTK_ESP(tsk) (ulong)(task_pt_regs(tsk)->sp) /* Do necessary setup to start up a newly executed thread. 
*/ diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h index b5b0adcc85c1..a0ed27c2346b 100644 --- a/arch/riscv/include/asm/ptrace.h +++ b/arch/riscv/include/asm/ptrace.h @@ -13,51 +13,51 @@ #ifndef __ASSEMBLY__ struct pt_regs { - unsigned long epc; - unsigned long ra; - unsigned long sp; - unsigned long gp; - unsigned long tp; - unsigned long t0; - unsigned long t1; - unsigned long t2; - unsigned long s0; - unsigned long s1; - unsigned long a0; - unsigned long a1; - unsigned long a2; - unsigned long a3; - unsigned long a4; - unsigned long a5; - unsigned long a6; - unsigned long a7; - unsigned long s2; - unsigned long s3; - unsigned long s4; - unsigned long s5; - unsigned long s6; - unsigned long s7; - unsigned long s8; - unsigned long s9; - unsigned long s10; - unsigned long s11; - unsigned long t3; - unsigned long t4; - unsigned long t5; - unsigned long t6; + xlen_t epc; + xlen_t ra; + xlen_t sp; + xlen_t gp; + xlen_t tp; + xlen_t t0; + xlen_t t1; + xlen_t t2; + xlen_t s0; + xlen_t s1; + xlen_t a0; + xlen_t a1; + xlen_t a2; + xlen_t a3; + xlen_t a4; + xlen_t a5; + xlen_t a6; + xlen_t a7; + xlen_t s2; + xlen_t s3; + xlen_t s4; + xlen_t s5; + xlen_t s6; + xlen_t s7; + xlen_t s8; + xlen_t s9; + xlen_t s10; + xlen_t s11; + xlen_t t3; + xlen_t t4; + xlen_t t5; + xlen_t t6; /* Supervisor/Machine CSRs */ - unsigned long status; - unsigned long badaddr; - unsigned long cause; + xlen_t status; + xlen_t badaddr; + xlen_t cause; /* a0 value before the syscall */ - unsigned long orig_a0; + xlen_t orig_a0; }; #define PTRACE_SYSEMU 0x1f #define PTRACE_SYSEMU_SINGLESTEP 0x20 #ifdef CONFIG_64BIT -#define REG_FMT "%016lx" +#define REG_FMT "%016llx" #else #define REG_FMT "%08lx" #endif @@ -69,12 +69,12 @@ struct pt_regs { /* Helpers for working with the instruction pointer */ static inline unsigned long instruction_pointer(struct pt_regs *regs) { - return regs->epc; + return (unsigned long)regs->epc; } static inline void instruction_pointer_set(struct 
pt_regs *regs, unsigned long val) { - regs->epc = val; + regs->epc = (xlen_t)val; } #define profile_pc(regs) instruction_pointer(regs) @@ -82,40 +82,40 @@ static inline void instruction_pointer_set(struct pt_regs *regs, /* Helpers for working with the user stack pointer */ static inline unsigned long user_stack_pointer(struct pt_regs *regs) { - return regs->sp; + return (unsigned long)regs->sp; } static inline void user_stack_pointer_set(struct pt_regs *regs, unsigned long val) { - regs->sp = val; + regs->sp = (xlen_t)val; } /* Valid only for Kernel mode traps. */ static inline unsigned long kernel_stack_pointer(struct pt_regs *regs) { - return regs->sp; + return (unsigned long)regs->sp; } /* Helpers for working with the frame pointer */ static inline unsigned long frame_pointer(struct pt_regs *regs) { - return regs->s0; + return (unsigned long)regs->s0; } static inline void frame_pointer_set(struct pt_regs *regs, unsigned long val) { - regs->s0 = val; + regs->s0 = (xlen_t)val; } static inline unsigned long regs_return_value(struct pt_regs *regs) { - return regs->a0; + return (unsigned long)regs->a0; } static inline void regs_set_return_value(struct pt_regs *regs, unsigned long val) { - regs->a0 = val; + regs->a0 = (xlen_t)val; } extern int regs_query_register_offset(const char *name); diff --git a/arch/riscv/include/asm/sparsemem.h b/arch/riscv/include/asm/sparsemem.h index 2f901a410586..68907698caa6 100644 --- a/arch/riscv/include/asm/sparsemem.h +++ b/arch/riscv/include/asm/sparsemem.h @@ -4,7 +4,7 @@ #define _ASM_RISCV_SPARSEMEM_H #ifdef CONFIG_SPARSEMEM -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 #define MAX_PHYSMEM_BITS 56 #else #define MAX_PHYSMEM_BITS 32 diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h index 0e71eb82f920..6d01b0fc5a25 100644 --- a/arch/riscv/include/asm/switch_to.h +++ b/arch/riscv/include/asm/switch_to.h @@ -71,9 +71,9 @@ static __always_inline bool has_fpu(void) { return false; } #endif static inline 
void envcfg_update_bits(struct task_struct *task, - unsigned long mask, unsigned long val) + xlen_t mask, xlen_t val) { - unsigned long envcfg; + xlen_t envcfg; envcfg = (task->thread.envcfg & ~mask) | val; task->thread.envcfg = envcfg; diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h index f5916a70879a..637a46fc7ed8 100644 --- a/arch/riscv/include/asm/thread_info.h +++ b/arch/riscv/include/asm/thread_info.h @@ -71,7 +71,7 @@ struct thread_info { * Used in handle_exception() to save a0, a1 and a2 before knowing if we * can access the kernel stack. */ - unsigned long a0, a1, a2; + xlen_t a0, a1, a2; #endif }; diff --git a/arch/riscv/include/asm/timex.h b/arch/riscv/include/asm/timex.h index a06697846e69..b5ca67b30d0b 100644 --- a/arch/riscv/include/asm/timex.h +++ b/arch/riscv/include/asm/timex.h @@ -8,7 +8,7 @@ #include -typedef unsigned long cycles_t; +typedef xlen_t cycles_t; #ifdef CONFIG_RISCV_M_MODE @@ -84,7 +84,7 @@ static inline u64 get_cycles64(void) #define ARCH_HAS_READ_CURRENT_TIMER static inline int read_current_timer(unsigned long *timer_val) { - *timer_val = get_cycles(); + *timer_val = (unsigned long)get_cycles(); return 0; } diff --git a/arch/riscv/include/uapi/asm/elf.h b/arch/riscv/include/uapi/asm/elf.h index 11a71b8533d5..9fc8c2e3556b 100644 --- a/arch/riscv/include/uapi/asm/elf.h +++ b/arch/riscv/include/uapi/asm/elf.h @@ -15,7 +15,7 @@ #include /* ELF register definitions */ -typedef unsigned long elf_greg_t; +typedef xlen_t elf_greg_t; typedef struct user_regs_struct elf_gregset_t; #define ELF_NGREG (sizeof(elf_gregset_t) / sizeof(elf_greg_t)) @@ -24,7 +24,7 @@ typedef __u64 elf_fpreg_t; typedef union __riscv_fp_state elf_fpregset_t; #define ELF_NFPREG (sizeof(struct __riscv_d_ext_state) / sizeof(elf_fpreg_t)) -#if __riscv_xlen == 64 +#if BITS_PER_LONG == 64 #define ELF_RISCV_R_SYM(r_info) ELF64_R_SYM(r_info) #define ELF_RISCV_R_TYPE(r_info) ELF64_R_TYPE(r_info) #else diff --git 
a/arch/riscv/include/uapi/asm/ptrace.h b/arch/riscv/include/uapi/asm/ptrace.h index a38268b19c3d..f040a2ba07b0 100644 --- a/arch/riscv/include/uapi/asm/ptrace.h +++ b/arch/riscv/include/uapi/asm/ptrace.h @@ -15,6 +15,14 @@ #define PTRACE_GETFDPIC_EXEC 0 #define PTRACE_GETFDPIC_INTERP 1 +#if __riscv_xlen == 64 +typedef u64 xlen_t; +#endif + +#if __riscv_xlen == 32 +typedef ulong xlen_t; +#endif + /* * User-mode register state for core dumps, ptrace, sigcontext * @@ -22,38 +30,38 @@ * struct user_regs_struct must form a prefix of struct pt_regs. */ struct user_regs_struct { - unsigned long pc; - unsigned long ra; - unsigned long sp; - unsigned long gp; - unsigned long tp; - unsigned long t0; - unsigned long t1; - unsigned long t2; - unsigned long s0; - unsigned long s1; - unsigned long a0; - unsigned long a1; - unsigned long a2; - unsigned long a3; - unsigned long a4; - unsigned long a5; - unsigned long a6; - unsigned long a7; - unsigned long s2; - unsigned long s3; - unsigned long s4; - unsigned long s5; - unsigned long s6; - unsigned long s7; - unsigned long s8; - unsigned long s9; - unsigned long s10; - unsigned long s11; - unsigned long t3; - unsigned long t4; - unsigned long t5; - unsigned long t6; + xlen_t pc; + xlen_t ra; + xlen_t sp; + xlen_t gp; + xlen_t tp; + xlen_t t0; + xlen_t t1; + xlen_t t2; + xlen_t s0; + xlen_t s1; + xlen_t a0; + xlen_t a1; + xlen_t a2; + xlen_t a3; + xlen_t a4; + xlen_t a5; + xlen_t a6; + xlen_t a7; + xlen_t s2; + xlen_t s3; + xlen_t s4; + xlen_t s5; + xlen_t s6; + xlen_t s7; + xlen_t s8; + xlen_t s9; + xlen_t s10; + xlen_t s11; + xlen_t t3; + xlen_t t4; + xlen_t t5; + xlen_t t6; }; struct __riscv_f_ext_state { @@ -98,12 +106,15 @@ union __riscv_fp_state { }; struct __riscv_v_ext_state { - unsigned long vstart; - unsigned long vl; - unsigned long vtype; - unsigned long vcsr; - unsigned long vlenb; - void *datap; + xlen_t vstart; + xlen_t vl; + xlen_t vtype; + xlen_t vcsr; + xlen_t vlenb; + union { + void *datap; + xlen_t pad; + }; /* 
* In signal handler, datap will be set a correct user stack offset * and vector registers will be copied to the address of datap @@ -112,11 +123,11 @@ struct __riscv_v_ext_state { }; struct __riscv_v_regset_state { - unsigned long vstart; - unsigned long vl; - unsigned long vtype; - unsigned long vcsr; - unsigned long vlenb; + xlen_t vstart; + xlen_t vl; + xlen_t vtype; + xlen_t vcsr; + xlen_t vlenb; char vreg[]; }; diff --git a/arch/riscv/include/uapi/asm/ucontext.h b/arch/riscv/include/uapi/asm/ucontext.h index 516bd0bb0da5..572b96c3ccf4 100644 --- a/arch/riscv/include/uapi/asm/ucontext.h +++ b/arch/riscv/include/uapi/asm/ucontext.h @@ -11,8 +11,11 @@ #include struct ucontext { - unsigned long uc_flags; - struct ucontext *uc_link; + xlen_t uc_flags; + union { + struct ucontext *uc_link; + xlen_t pad; + }; stack_t uc_stack; sigset_t uc_sigmask; /* diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h index 81896bbbf727..e33dd5161b8d 100644 --- a/arch/riscv/include/uapi/asm/unistd.h +++ b/arch/riscv/include/uapi/asm/unistd.h @@ -16,7 +16,7 @@ */ #include -#if __BITS_PER_LONG == 64 +#if __riscv_xlen == 64 #include #else #include diff --git a/arch/riscv/kernel/compat_signal.c b/arch/riscv/kernel/compat_signal.c index 6ec4e34255a9..859104618f34 100644 --- a/arch/riscv/kernel/compat_signal.c +++ b/arch/riscv/kernel/compat_signal.c @@ -126,7 +126,7 @@ COMPAT_SYSCALL_DEFINE0(rt_sigreturn) /* Always make any pending restarted system calls return -EINTR */ current->restart_block.fn = do_no_restart_syscall; - frame = (struct compat_rt_sigframe __user *)regs->sp; + frame = (struct compat_rt_sigframe __user *)(ulong)regs->sp; if (!access_ok(frame, sizeof(*frame))) goto badframe; @@ -150,7 +150,7 @@ COMPAT_SYSCALL_DEFINE0(rt_sigreturn) pr_info_ratelimited( "%s[%d]: bad frame in %s: frame=%p pc=%p sp=%p\n", task->comm, task_pid_nr(task), __func__, - frame, (void *)regs->epc, (void *)regs->sp); + frame, (void *)(ulong)regs->epc, (void 
*)(ulong)regs->sp); } force_sig(SIGSEGV); return 0; diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c index 7c244de77180..5c827761f84b 100644 --- a/arch/riscv/kernel/process.c +++ b/arch/riscv/kernel/process.c @@ -65,8 +65,8 @@ void __show_regs(struct pt_regs *regs) show_regs_print_info(KERN_DEFAULT); if (!user_mode(regs)) { - pr_cont("epc : %pS\n", (void *)regs->epc); - pr_cont(" ra : %pS\n", (void *)regs->ra); + pr_cont("epc : %pS\n", (void *)(ulong)regs->epc); + pr_cont(" ra : %pS\n", (void *)(ulong)regs->ra); } pr_cont("epc : " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n", @@ -272,7 +272,7 @@ long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg) unsigned long valid_mask = PR_PMLEN_MASK | PR_TAGGED_ADDR_ENABLE; struct thread_info *ti = task_thread_info(task); struct mm_struct *mm = task->mm; - unsigned long pmm; + xlen_t pmm; u8 pmlen; if (is_compat_thread(ti)) @@ -352,7 +352,7 @@ long get_tagged_addr_ctrl(struct task_struct *task) return ret; } -static bool try_to_set_pmm(unsigned long value) +static bool try_to_set_pmm(xlen_t value) { csr_set(CSR_ENVCFG, value); return (csr_read_clear(CSR_ENVCFG, ENVCFG_PMM) & ENVCFG_PMM) == value; diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c index 94e905eea1de..b3eb4154faf7 100644 --- a/arch/riscv/kernel/signal.c +++ b/arch/riscv/kernel/signal.c @@ -239,7 +239,7 @@ SYSCALL_DEFINE0(rt_sigreturn) /* Always make any pending restarted system calls return -EINTR */ current->restart_block.fn = do_no_restart_syscall; - frame = (struct rt_sigframe __user *)regs->sp; + frame = (struct rt_sigframe __user *)(ulong)regs->sp; if (!access_ok(frame, frame_size)) goto badframe; @@ -265,7 +265,7 @@ SYSCALL_DEFINE0(rt_sigreturn) pr_info_ratelimited( "%s[%d]: bad frame in %s: frame=%p pc=%p sp=%p\n", task->comm, task_pid_nr(task), __func__, - frame, (void *)regs->epc, (void *)regs->sp); + frame, (void *)(ulong)regs->epc, (void *)(ulong)regs->sp); } force_sig(SIGSEGV); return 0; 
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c index 8ff8e8b36524..1fada4c7ddfa 100644 --- a/arch/riscv/kernel/traps.c +++ b/arch/riscv/kernel/traps.c @@ -118,7 +118,7 @@ void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr) if (show_unhandled_signals && unhandled_signal(tsk, signo) && printk_ratelimit()) { pr_info("%s[%d]: unhandled signal %d code 0x%x at 0x" REG_FMT, - tsk->comm, task_pid_nr(tsk), signo, code, addr); + tsk->comm, task_pid_nr(tsk), signo, code, (xlen_t)addr); print_vma_addr(KERN_CONT " in ", instruction_pointer(regs)); pr_cont("\n"); __show_regs(regs); @@ -281,7 +281,7 @@ void handle_break(struct pt_regs *regs) current->thread.bad_cause = regs->cause; if (user_mode(regs)) - force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)regs->epc); + force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)instruction_pointer(regs)); #ifdef CONFIG_KGDB else if (notify_die(DIE_TRAP, "EBREAK", regs, 0, regs->cause, SIGTRAP) == NOTIFY_STOP) diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c index 184f780c932d..884edd99e6b0 100644 --- a/arch/riscv/kernel/vector.c +++ b/arch/riscv/kernel/vector.c @@ -180,7 +180,7 @@ EXPORT_SYMBOL_GPL(riscv_v_vstate_ctrl_user_allowed); bool riscv_v_first_use_handler(struct pt_regs *regs) { - u32 __user *epc = (u32 __user *)regs->epc; + u32 __user *epc = (u32 __user *)(ulong)regs->epc; u32 insn = (u32)regs->badaddr; if (!(has_vector() || has_xtheadvector())) diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c index 0194324a0c50..fcc23350610e 100644 --- a/arch/riscv/mm/fault.c +++ b/arch/riscv/mm/fault.c @@ -78,7 +78,7 @@ static void die_kernel_fault(const char *msg, unsigned long addr, { bust_spinlocks(1); - pr_alert("Unable to handle kernel %s at virtual address " REG_FMT "\n", msg, + pr_alert("Unable to handle kernel %s at virtual address %08lx\n", msg, addr); bust_spinlocks(0); From patchwork Tue Mar 25 12:15:46 2025 Content-Type: text/plain; charset="utf-8" 
X-Patchwork-Id: 14028453
From: guoren@kernel.org
Subject: [RFC PATCH V3 05/43] rv64ilp32_abi: riscv: crc32: Utilize 64-bit width to improve the performance
Date: Tue, 25 Mar 2025 08:15:46 -0400
Message-Id: <20250325121624.523258-6-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI, derived from a 64-bit ISA, uses a 32-bit BITS_PER_LONG. The crc32 algorithm can therefore use the full 64-bit register width to improve performance.

Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- arch/riscv/lib/crc32-riscv.c | 35 ++++++++++++++++++----------------- 1 file changed, 18 insertions(+), 17 deletions(-) diff --git a/arch/riscv/lib/crc32-riscv.c b/arch/riscv/lib/crc32-riscv.c index 53d56ab422c7..68dfb0565696 100644 --- a/arch/riscv/lib/crc32-riscv.c +++ b/arch/riscv/lib/crc32-riscv.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include @@ -59,12 +60,12 @@ */ # define CRC32_POLY_QT_BE 0x04d101df481b4e5a -static inline u64 crc32_le_prep(u32 crc, unsigned long const *ptr) +static inline u64 crc32_le_prep(u32 crc, u64 const *ptr) { return (u64)crc ^ (__force u64)__cpu_to_le64(*ptr); } -static inline u32 crc32_le_zbc(unsigned long s, u32 poly, unsigned long poly_qt) +static inline u32 crc32_le_zbc(u64 s, u32 poly, u64 poly_qt) { u32 crc; @@ -85,7 +86,7 @@ static inline u32 crc32_le_zbc(unsigned long s, u32 poly, unsigned long poly_qt) return crc; } -static inline u64 crc32_be_prep(u32 crc, unsigned long const *ptr) +static inline u64 crc32_be_prep(u32 crc, u64 const *ptr) { return ((u64)crc << 32) ^ (__force u64)__cpu_to_be64(*ptr); } @@ -131,7 +132,7 @@ static inline u32 crc32_be_prep(u32 crc, unsigned long const *ptr) # error "Unexpected __riscv_xlen" #endif -static inline u32 crc32_be_zbc(unsigned long s) +static inline u32
crc32_be_zbc(xlen_t s) { u32 crc; @@ -156,16 +157,16 @@ typedef u32 (*fallback)(u32 crc, unsigned char const *p, size_t len); static inline u32 crc32_le_unaligned(u32 crc, unsigned char const *p, size_t len, u32 poly, - unsigned long poly_qt) + xlen_t poly_qt) { size_t bits = len * 8; - unsigned long s = 0; + xlen_t s = 0; u32 crc_low = 0; for (int i = 0; i < len; i++) - s = ((unsigned long)*p++ << (__riscv_xlen - 8)) | (s >> 8); + s = ((xlen_t)*p++ << (__riscv_xlen - 8)) | (s >> 8); - s ^= (unsigned long)crc << (__riscv_xlen - bits); + s ^= (xlen_t)crc << (__riscv_xlen - bits); if (__riscv_xlen == 32 || len < sizeof(u32)) crc_low = crc >> bits; @@ -177,12 +178,12 @@ static inline u32 crc32_le_unaligned(u32 crc, unsigned char const *p, static inline u32 __pure crc32_le_generic(u32 crc, unsigned char const *p, size_t len, u32 poly, - unsigned long poly_qt, + xlen_t poly_qt, fallback crc_fb) { size_t offset, head_len, tail_len; - unsigned long const *p_ul; - unsigned long s; + xlen_t const *p_ul; + xlen_t s; asm goto(ALTERNATIVE("j %l[legacy]", "nop", 0, RISCV_ISA_EXT_ZBC, 1) @@ -199,7 +200,7 @@ static inline u32 __pure crc32_le_generic(u32 crc, unsigned char const *p, tail_len = len & OFFSET_MASK; len = len >> STEP_ORDER; - p_ul = (unsigned long const *)p; + p_ul = (xlen_t const *)p; for (int i = 0; i < len; i++) { s = crc32_le_prep(crc, p_ul); @@ -236,7 +237,7 @@ static inline u32 crc32_be_unaligned(u32 crc, unsigned char const *p, size_t len) { size_t bits = len * 8; - unsigned long s = 0; + xlen_t s = 0; u32 crc_low = 0; s = 0; @@ -247,7 +248,7 @@ static inline u32 crc32_be_unaligned(u32 crc, unsigned char const *p, s ^= crc >> (32 - bits); crc_low = crc << bits; } else { - s ^= (unsigned long)crc << (bits - 32); + s ^= (xlen_t)crc << (bits - 32); } crc = crc32_be_zbc(s); @@ -259,8 +260,8 @@ static inline u32 crc32_be_unaligned(u32 crc, unsigned char const *p, u32 __pure crc32_be_arch(u32 crc, const u8 *p, size_t len) { size_t offset, head_len, tail_len; - 
unsigned long const *p_ul; - unsigned long s; + xlen_t const *p_ul; + xlen_t s; asm goto(ALTERNATIVE("j %l[legacy]", "nop", 0, RISCV_ISA_EXT_ZBC, 1) @@ -277,7 +278,7 @@ u32 __pure crc32_be_arch(u32 crc, const u8 *p, size_t len) tail_len = len & OFFSET_MASK; len = len >> STEP_ORDER; - p_ul = (unsigned long const *)p; + p_ul = (xlen_t const *)p; for (int i = 0; i < len; i++) { s = crc32_be_prep(crc, p_ul);
From patchwork Tue Mar 25 12:15:47 2025
X-Patchwork-Id: 14028454
From: guoren@kernel.org
Subject: [RFC PATCH V3 06/43] rv64ilp32_abi: riscv: csum: Utilize 64-bit width to improve the performance
Date: Tue, 25 Mar 2025 08:15:47 -0400
Message-Id: <20250325121624.523258-7-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI, derived from a 64-bit ISA, uses a 32-bit BITS_PER_LONG. The checksum algorithm can therefore use the full 64-bit register width to improve performance.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/lib/csum.c | 48 +++++++++++++++++++++----------------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/riscv/lib/csum.c b/arch/riscv/lib/csum.c
index 7fb12c59e571..7139ab855349 100644
--- a/arch/riscv/lib/csum.c
+++ b/arch/riscv/lib/csum.c
@@ -22,17 +22,17 @@ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
 			__u32 len, __u8 proto, __wsum csum)
 {
 	unsigned int ulen, uproto;
-	unsigned long sum = (__force unsigned long)csum;
+	xlen_t sum = (__force xlen_t)csum;
 
-	sum += (__force unsigned long)saddr->s6_addr32[0];
-	sum += (__force unsigned long)saddr->s6_addr32[1];
-	sum += (__force unsigned long)saddr->s6_addr32[2];
-	sum += (__force unsigned long)saddr->s6_addr32[3];
+	sum += (__force xlen_t)saddr->s6_addr32[0];
+	sum += (__force xlen_t)saddr->s6_addr32[1];
+	sum += (__force xlen_t)saddr->s6_addr32[2];
+	sum += (__force xlen_t)saddr->s6_addr32[3];
 
-	sum += (__force unsigned long)daddr->s6_addr32[0];
-	sum += (__force unsigned long)daddr->s6_addr32[1];
-	sum += (__force unsigned long)daddr->s6_addr32[2];
-	sum += (__force unsigned long)daddr->s6_addr32[3];
+	sum += (__force xlen_t)daddr->s6_addr32[0];
+	sum += (__force xlen_t)daddr->s6_addr32[1];
+	sum += (__force xlen_t)daddr->s6_addr32[2];
+	sum += (__force xlen_t)daddr->s6_addr32[3];
 
 	ulen = (__force unsigned int)htonl((unsigned int)len);
 	sum += ulen;
@@ -46,7 +46,7 @@ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
 	 */
 	if (IS_ENABLED(CONFIG_RISCV_ISA_ZBB) &&
 	    IS_ENABLED(CONFIG_RISCV_ALTERNATIVE)) {
-		unsigned long fold_temp;
+		xlen_t fold_temp;
 
 		/*
 		 * Zbb is likely available when the kernel is compiled with Zbb
@@ -85,12 +85,12 @@ EXPORT_SYMBOL(csum_ipv6_magic);
 #define OFFSET_MASK 7
 #endif
 
-static inline __no_sanitize_address unsigned long
-do_csum_common(const unsigned long *ptr, const unsigned long *end,
-	       unsigned long data)
+static inline __no_sanitize_address xlen_t
+do_csum_common(const xlen_t *ptr, const xlen_t *end,
+	       xlen_t data)
 {
 	unsigned int shift;
-	unsigned long csum = 0, carry = 0;
+	xlen_t csum = 0, carry = 0;
 
 	/*
 	 * Do 32-bit reads on RV32 and 64-bit reads otherwise. This should be
@@ -130,8 +130,8 @@ static inline __no_sanitize_address unsigned int
 do_csum_with_alignment(const unsigned char *buff, int len)
 {
 	unsigned int offset, shift;
-	unsigned long csum, data;
-	const unsigned long *ptr, *end;
+	xlen_t csum, data;
+	const xlen_t *ptr, *end;
 
 	/*
 	 * Align address to closest word (double word on rv64) that comes before
@@ -140,7 +140,7 @@ do_csum_with_alignment(const unsigned char *buff, int len)
 	 */
 	offset = (unsigned long)buff & OFFSET_MASK;
 	kasan_check_read(buff, len);
-	ptr = (const unsigned long *)(buff - offset);
+	ptr = (const xlen_t *)(buff - offset);
 
 	/*
 	 * Clear the most significant bytes that were over-read if buff was not
@@ -153,7 +153,7 @@ do_csum_with_alignment(const unsigned char *buff, int len)
 #else
 	data = (data << shift) >> shift;
 #endif
-	end = (const unsigned long *)(buff + len);
+	end = (const xlen_t *)(buff + len);
 	csum = do_csum_common(ptr, end, data);
 
 #ifdef CC_HAS_ASM_GOTO_TIED_OUTPUT
@@ -163,7 +163,7 @@ do_csum_with_alignment(const unsigned char *buff, int len)
 	 */
 	if (IS_ENABLED(CONFIG_RISCV_ISA_ZBB) &&
 	    IS_ENABLED(CONFIG_RISCV_ALTERNATIVE)) {
-		unsigned long fold_temp;
+		xlen_t fold_temp;
 
 		/*
 		 * Zbb is likely available when the kernel is compiled with Zbb
@@ -233,15 +233,15 @@ do_csum_with_alignment(const unsigned char *buff, int len)
 static inline __no_sanitize_address unsigned int
 do_csum_no_alignment(const unsigned char *buff, int len)
 {
-	unsigned long csum, data;
-	const unsigned long *ptr, *end;
+	xlen_t csum, data;
+	const xlen_t *ptr, *end;
 
-	ptr = (const unsigned long *)(buff);
+	ptr = (const xlen_t *)(buff);
 	data = *(ptr++);
 	kasan_check_read(buff, len);
 
-	end = (const unsigned long *)(buff + len);
+	end = (const xlen_t *)(buff + len);
 	csum = do_csum_common(ptr, end, data);
 
 	/*
@@ -250,7 +250,7 @@ do_csum_no_alignment(const unsigned char *buff, int len)
 	 */
 	if (IS_ENABLED(CONFIG_RISCV_ISA_ZBB) &&
 	    IS_ENABLED(CONFIG_RISCV_ALTERNATIVE)) {
-		unsigned long fold_temp;
+		xlen_t fold_temp;
 
 		/*
 		 * Zbb is likely available when the kernel is compiled with Zbb
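The conversion above works because the running checksum is accumulated in full register-width (`xlen_t`) words and only folded down to 16 bits at the end. As a self-contained illustration (outside the kernel tree, so the type alias and function name here are hypothetical, not the kernel's API), the end-around-carry fold looks roughly like this:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the kernel's xlen_t: the native ISA register
 * width (64-bit here), independent of the ABI's BITS_PER_LONG. */
typedef uint64_t xlen_t;

/* Fold a register-width running sum down to a 16-bit ones'-complement
 * checksum, propagating the end-around carry at each halving step. */
static uint16_t csum_fold64(xlen_t sum)
{
	sum = (sum & 0xffffffffULL) + (sum >> 32);
	sum = (sum & 0xffffffffULL) + (sum >> 32); /* absorb carry-out */
	sum = (sum & 0xffffULL) + (sum >> 16);
	sum = (sum & 0xffffULL) + (sum >> 16);
	return (uint16_t)~sum;
}
```

On rv64ilp32, `unsigned long` is 32-bit while registers are 64-bit, which is why the kernel code above switches the accumulator type from `unsigned long` to `xlen_t` rather than keeping `unsigned long`.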
From patchwork Tue Mar 25 12:15:48 2025
X-Patchwork-Id: 14028455
Subject: [RFC PATCH V3 07/43] rv64ilp32_abi: riscv: arch_hweight: Adapt cpopw & cpop of zbb extension
Date: Tue, 25 Mar 2025 08:15:48 -0400
Message-Id: <20250325121624.523258-8-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI is based on 64-bit ISA, but BITS_PER_LONG is 32. Use
cpopw for u32_weight and cpop for u64_weight.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/arch_hweight.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/arch_hweight.h b/arch/riscv/include/asm/arch_hweight.h
index 613769b9cdc9..42577965f5bb 100644
--- a/arch/riscv/include/asm/arch_hweight.h
+++ b/arch/riscv/include/asm/arch_hweight.h
@@ -12,7 +12,11 @@
 #if (BITS_PER_LONG == 64)
 #define CPOPW	"cpopw "
 #elif (BITS_PER_LONG == 32)
+#ifdef CONFIG_64BIT
+#define CPOPW	"cpopw "
+#else
 #define CPOPW	"cpop "
+#endif
 #else
 #error "Unexpected BITS_PER_LONG"
 #endif
@@ -47,7 +51,7 @@ static inline unsigned int __arch_hweight8(unsigned int w)
 	return __arch_hweight32(w & 0xff);
 }
 
-#if BITS_PER_LONG == 64
+#ifdef CONFIG_64BIT
 static __always_inline unsigned long __arch_hweight64(__u64 w)
 {
 # ifdef CONFIG_RISCV_ISA_ZBB
@@ -61,7 +65,7 @@ static __always_inline unsigned long __arch_hweight64(__u64 w)
 		      ".option pop\n"
 		      : "=r" (w) : "r" (w) :);
 
-	return w;
+	return (unsigned long)w;
 
 legacy:
 # endif
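The `CPOPW` selection in this patch boils down to: count 32-bit quantities with `cpopw` and full 64-bit registers with `cpop`, even when `BITS_PER_LONG == 32`. A rough user-space analogue, using GCC/Clang builtins as stand-ins for the Zbb instructions (the helper names are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Analogue of __arch_hweight32(): on Zbb hardware this is one cpopw. */
static unsigned int hweight32_sketch(uint32_t w)
{
	return (unsigned int)__builtin_popcount(w);
}

/* Analogue of __arch_hweight64(): on rv64ilp32, long is 32-bit but the
 * registers are 64-bit wide, so a 64-bit weight is still a single cpop. */
static unsigned int hweight64_sketch(uint64_t w)
{
	return (unsigned int)__builtin_popcountll(w);
}
```

This is why the patch keys the 64-bit path on `CONFIG_64BIT` (the ISA width) rather than `BITS_PER_LONG` (the ABI width).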
From patchwork Tue Mar 25 12:15:49 2025
X-Patchwork-Id: 14028456
Subject: [RFC PATCH V3 08/43] rv64ilp32_abi: riscv: bitops: Adapt ctzw & clzw of zbb extension
Date: Tue, 25 Mar 2025 08:15:49 -0400
Message-Id: <20250325121624.523258-9-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI is based on 64-bit ISA, but BITS_PER_LONG is 32. Use
ctzw and clzw for int and long types instead of ctz and clz.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/bitops.h | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
index c6bd3d8354a9..d041b9e3ba84 100644
--- a/arch/riscv/include/asm/bitops.h
+++ b/arch/riscv/include/asm/bitops.h
@@ -35,14 +35,27 @@
 #include
 #include
 
-#if (BITS_PER_LONG == 64)
+#if (__riscv_xlen == 64)
 #define CTZW	"ctzw "
 #define CLZW	"clzw "
+
+#if (BITS_PER_LONG == 64)
+#define CTZ	"ctz "
+#define CLZ	"clz "
 #elif (BITS_PER_LONG == 32)
+#define CTZ	"ctzw "
+#define CLZ	"clzw "
+#else
+#error "Unexpected BITS_PER_LONG"
+#endif
+
+#elif (__riscv_xlen == 32)
 #define CTZW	"ctz "
 #define CLZW	"clz "
+#define CTZ	"ctz "
+#define CLZ	"clz "
 #else
-#error "Unexpected BITS_PER_LONG"
+#error "Unexpected __riscv_xlen"
 #endif
 
 static __always_inline unsigned long variable__ffs(unsigned long word)
@@ -53,7 +66,7 @@ static __always_inline unsigned long variable__ffs(unsigned long word)
 
 	asm volatile (".option push\n"
 		      ".option arch,+zbb\n"
-		      "ctz %0, %1\n"
+		      CTZ "%0, %1\n"
 		      ".option pop\n"
 		      : "=r" (word) : "r" (word) :);
 
@@ -82,7 +95,7 @@ static __always_inline unsigned long variable__fls(unsigned long word)
 
 	asm volatile (".option push\n"
 		      ".option arch,+zbb\n"
-		      "clz %0, %1\n"
+		      CLZ "%0, %1\n"
 		      ".option pop\n"
 		      : "=r" (word) : "r" (word) :);

From patchwork Tue Mar 25 12:15:50 2025
X-Patchwork-Id: 14028457
Subject: [RFC PATCH V3 09/43] rv64ilp32_abi: riscv: Reuse LP64 SBI interface
Date: Tue, 25 Mar 2025 08:15:50 -0400
Message-Id: <20250325121624.523258-10-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI leverages the LP64 SBI interface, enabling the
RV64ILP32 Linux kernel to run seamlessly on LP64 OpenSBI or KVM. Using
RV64ILP32 Linux doesn't require changing the bootloader, firmware, or
hypervisor; it can replace the LP64 kernel directly.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/cpu_ops_sbi.h |  4 ++--
 arch/riscv/include/asm/sbi.h         | 22 +++++++++++-----------
 arch/riscv/kernel/cpu_ops_sbi.c      |  4 ++--
 arch/riscv/kernel/sbi_ecall.c        | 22 +++++++++++-----------
 4 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/arch/riscv/include/asm/cpu_ops_sbi.h b/arch/riscv/include/asm/cpu_ops_sbi.h
index d6e4665b3195..d967adad6b48 100644
--- a/arch/riscv/include/asm/cpu_ops_sbi.h
+++ b/arch/riscv/include/asm/cpu_ops_sbi.h
@@ -19,8 +19,8 @@ extern const struct cpu_operations cpu_ops_sbi;
  * @stack_ptr: A pointer to the hart specific sp
  */
 struct sbi_hart_boot_data {
-	void *task_ptr;
-	void *stack_ptr;
+	xlen_t task_ptr;
+	xlen_t stack_ptr;
 };
 #endif
diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index 3d250824178b..fd9a9c723ec6 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -138,16 +138,16 @@ enum sbi_ext_pmu_fid {
 };
 
 union sbi_pmu_ctr_info {
-	unsigned long value;
+	xlen_t value;
 	struct {
-		unsigned long csr:12;
-		unsigned long width:6;
+		xlen_t csr:12;
+		xlen_t width:6;
 #if __riscv_xlen == 32
-		unsigned long reserved:13;
+		xlen_t reserved:13;
 #else
-		unsigned long reserved:45;
+		xlen_t reserved:45;
 #endif
-		unsigned long type:1;
+		xlen_t type:1;
 	};
 };
 
@@ -422,15 +422,15 @@ enum sbi_ext_nacl_feature {
 extern unsigned long sbi_spec_version;
 
 struct sbiret {
-	long error;
-	long value;
+	xlen_t error;
+	xlen_t value;
 };
 
 void sbi_init(void);
 long __sbi_base_ecall(int fid);
-struct sbiret __sbi_ecall(unsigned long arg0, unsigned long arg1,
-			  unsigned long arg2, unsigned long arg3,
-			  unsigned long arg4, unsigned long arg5,
+struct sbiret __sbi_ecall(xlen_t arg0, xlen_t arg1,
+			  xlen_t arg2, xlen_t arg3,
+			  xlen_t arg4, xlen_t arg5,
 			  int fid, int ext);
 #define sbi_ecall(e, f, a0, a1, a2, a3, a4, a5)	\
 		__sbi_ecall(a0, a1, a2, a3, a4, a5, f, e)
diff --git a/arch/riscv/kernel/cpu_ops_sbi.c b/arch/riscv/kernel/cpu_ops_sbi.c
index e6fbaaf54956..f9ef3c0155f4 100644
--- a/arch/riscv/kernel/cpu_ops_sbi.c
+++ b/arch/riscv/kernel/cpu_ops_sbi.c
@@ -71,8 +71,8 @@ static int sbi_cpu_start(unsigned int cpuid, struct task_struct *tidle)
 
 	/* Make sure tidle is updated */
 	smp_mb();
-	bdata->task_ptr = tidle;
-	bdata->stack_ptr = task_pt_regs(tidle);
+	bdata->task_ptr = (ulong)tidle;
+	bdata->stack_ptr = (ulong)task_pt_regs(tidle);
 	/* Make sure boot data is updated */
 	smp_mb();
 	hsm_data = __pa(bdata);
diff --git a/arch/riscv/kernel/sbi_ecall.c b/arch/riscv/kernel/sbi_ecall.c
index 24aabb4fbde3..ee22e69d70da 100644
--- a/arch/riscv/kernel/sbi_ecall.c
+++ b/arch/riscv/kernel/sbi_ecall.c
@@ -17,23 +17,23 @@ long __sbi_base_ecall(int fid)
 }
 EXPORT_SYMBOL(__sbi_base_ecall);
 
-struct sbiret __sbi_ecall(unsigned long arg0, unsigned long arg1,
-			  unsigned long arg2, unsigned long arg3,
-			  unsigned long arg4, unsigned long arg5,
+struct sbiret __sbi_ecall(xlen_t arg0, xlen_t arg1,
+			  xlen_t arg2, xlen_t arg3,
+			  xlen_t arg4, xlen_t arg5,
 			  int fid, int ext)
 {
 	struct sbiret ret;
 
 	trace_sbi_call(ext, fid);
 
-	register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);
-	register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);
-	register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);
-	register uintptr_t a3 asm ("a3") = (uintptr_t)(arg3);
-	register uintptr_t a4 asm ("a4") = (uintptr_t)(arg4);
-	register uintptr_t a5 asm ("a5") = (uintptr_t)(arg5);
-	register uintptr_t a6 asm ("a6") = (uintptr_t)(fid);
-	register uintptr_t a7 asm ("a7") = (uintptr_t)(ext);
+	register xlen_t a0 asm ("a0") = (xlen_t)(arg0);
+	register xlen_t a1 asm ("a1") = (xlen_t)(arg1);
+	register xlen_t a2 asm ("a2") = (xlen_t)(arg2);
+	register xlen_t a3 asm ("a3") = (xlen_t)(arg3);
+	register xlen_t a4 asm ("a4") = (xlen_t)(arg4);
+	register xlen_t a5 asm ("a5") = (xlen_t)(arg5);
+	register xlen_t a6 asm ("a6") = (xlen_t)(fid);
+	register xlen_t a7 asm ("a7") = (xlen_t)(ext);
 	asm volatile ("ecall"
 		      : "+r" (a0), "+r" (a1)
 		      : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)

From patchwork Tue Mar 25 12:15:51 2025
X-Patchwork-Id: 14028458
Subject: [RFC PATCH V3 10/43] rv64ilp32_abi: riscv: Update SATP.MODE.ASID width
Date: Tue, 25 Mar 2025 08:15:51 -0400
Message-Id: <20250325121624.523258-11-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

RV32 provides only 9 ASID bits because the satp CSR is constrained to
xlen=32, whereas the RV64ILP32 ABI, rooted in the RV64 ISA, has a 64-bit
satp CSR. Hence, the rv64ilp32 ABI adopts the same ASID mechanism as the
64-bit architecture.
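The `asids_init()` probe this patch touches writes all-ones into the satp ASID field, reads the CSR back, and counts how many bits the hardware actually retained. A hedged user-space sketch of that computation (the shift/mask values assume the RV64 satp layout with the ASID in bits 59:44; the helper names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define SATP_ASID_SHIFT 44       /* RV64 satp: ASID occupies bits 59:44 */
#define SATP_ASID_MASK  0xffffULL

/* Position of the highest set bit, 1-based; 0 for x == 0 (like fls64()). */
static unsigned int fls64_sketch(uint64_t x)
{
	unsigned int r = 0;

	while (x) {
		r++;
		x >>= 1;
	}
	return r;
}

/* Given the satp value read back after writing all-ones into the ASID
 * field, return how many ASID bits the hardware implements. */
static unsigned int asid_bits_from_readback(uint64_t satp_readback)
{
	uint64_t field = (satp_readback >> SATP_ASID_SHIFT) & SATP_ASID_MASK;

	return fls64_sketch(field);
}
```

In the kernel the readback goes through `csr_read(CSR_SATP)`; only the bit-counting step is reproduced here, which is also where the patch splits `fls64()` (xlen=64) from `fls()` (xlen=32).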
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/mm/context.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 4abe3de23225..c3f9926d9337 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -226,14 +226,18 @@ static inline void set_mm(struct mm_struct *prev,
 
 static int __init asids_init(void)
 {
-	unsigned long asid_bits, old;
+	xlen_t asid_bits, old;
 
 	/* Figure-out number of ASID bits in HW */
 	old = csr_read(CSR_SATP);
 	asid_bits = old | (SATP_ASID_MASK << SATP_ASID_SHIFT);
 	csr_write(CSR_SATP, asid_bits);
 	asid_bits = (csr_read(CSR_SATP) >> SATP_ASID_SHIFT) & SATP_ASID_MASK;
-	asid_bits = fls_long(asid_bits);
+#if __riscv_xlen == 64
+	asid_bits = fls64(asid_bits);
+#else
+	asid_bits = fls(asid_bits);
+#endif
 	csr_write(CSR_SATP, old);
 
 	/*
@@ -265,9 +269,9 @@ static int __init asids_init(void)
 		static_branch_enable(&use_asid_allocator);
 
 		pr_info("ASID allocator using %lu bits (%lu entries)\n",
-			asid_bits, num_asids);
+			(ulong)asid_bits, num_asids);
 	} else {
-		pr_info("ASID allocator disabled (%lu bits)\n", asid_bits);
+		pr_info("ASID allocator disabled (%lu bits)\n", (ulong)asid_bits);
 	}
 
 	return 0;

From patchwork Tue Mar 25 12:15:52 2025
X-Patchwork-Id: 14028459
gaohan@iscas.ac.cn, shihua@iscas.ac.cn, jiawei@iscas.ac.cn, wuwei2016@iscas.ac.cn, drew@pdp7.com, prabhakar.mahadev-lad.rj@bp.renesas.com, ctsai390@andestech.com, wefu@redhat.com, kuba@kernel.org, pabeni@redhat.com, josef@toxicpanda.com, dsterba@suse.com, mingo@redhat.com, peterz@infradead.org, boqun.feng@gmail.com, guoren@kernel.org, xiao.w.wang@intel.com, qingfang.deng@siflower.com.cn, leobras@redhat.com, jszhang@kernel.org, conor.dooley@microchip.com, samuel.holland@sifive.com, yongxuan.wang@sifive.com, luxu.kernel@bytedance.com, david@redhat.com, ruanjinjie@huawei.com, cuiyunhui@bytedance.com, wangkefeng.wang@huawei.com, qiaozhe@iscas.ac.cn Cc: ardb@kernel.org, ast@kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-mm@kvack.org, linux-crypto@vger.kernel.org, bpf@vger.kernel.org, linux-input@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-serial@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, maple-tree@lists.infradead.org, linux-trace-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-atm-general@lists.sourceforge.net, linux-btrfs@vger.kernel.org, netfilter-devel@vger.kernel.org, coreteam@netfilter.org, linux-nfs@vger.kernel.org, linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org, linux-media@vger.kernel.org Subject: [RFC PATCH V3 11/43] rv64ilp32_abi: riscv: Introduce PTR_L and PTR_S Date: Tue, 25 Mar 2025 08:15:52 -0400 Message-Id: <20250325121624.523258-12-guoren@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20250325121624.523258-1-guoren@kernel.org> References: <20250325121624.523258-1-guoren@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Guo Ren (Alibaba DAMO Academy)" REG_L and REG_S can't satisfy rv64ilp32 abi requirements, because BITS_PER_LONG != __riscv_xlen. 
So introduce new PTR_L and PTR_S macros to help head.S and entry.S deal with the pointer data type.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/asm.h | 13 +++++++++----
 arch/riscv/include/asm/scs.h |  4 ++--
 arch/riscv/kernel/entry.S    | 32 ++++++++++++++++----------------
 arch/riscv/kernel/head.S     |  8 ++++----
 4 files changed, 31 insertions(+), 26 deletions(-)

diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
index 776354895b81..e37d73abbedd 100644
--- a/arch/riscv/include/asm/asm.h
+++ b/arch/riscv/include/asm/asm.h
@@ -38,6 +38,7 @@
 #define RISCV_SZPTR "8"
 #define RISCV_LGPTR "3"
 #endif
+#define __PTR_SEL(a, b) __ASM_STR(a)
 #elif __SIZEOF_POINTER__ == 4
 #ifdef __ASSEMBLY__
 #define RISCV_PTR .word
@@ -48,10 +49,14 @@
 #define RISCV_SZPTR "4"
 #define RISCV_LGPTR "2"
 #endif
+#define __PTR_SEL(a, b) __ASM_STR(b)
 #else
 #error "Unexpected __SIZEOF_POINTER__"
 #endif
 
+#define PTR_L __PTR_SEL(ld, lw)
+#define PTR_S __PTR_SEL(sd, sw)
+
 #if (__SIZEOF_INT__ == 4)
 #define RISCV_INT __ASM_STR(.word)
 #define RISCV_SZINT __ASM_STR(4)
@@ -83,18 +88,18 @@
 .endm
 
 #ifdef CONFIG_SMP
-#ifdef CONFIG_32BIT
+#if BITS_PER_LONG == 32
 #define PER_CPU_OFFSET_SHIFT 2
 #else
 #define PER_CPU_OFFSET_SHIFT 3
 #endif
 
 .macro asm_per_cpu dst sym tmp
-	REG_L \tmp, TASK_TI_CPU_NUM(tp)
+	PTR_L \tmp, TASK_TI_CPU_NUM(tp)
 	slli  \tmp, \tmp, PER_CPU_OFFSET_SHIFT
 	la    \dst, __per_cpu_offset
 	add   \dst, \dst, \tmp
-	REG_L \tmp, 0(\dst)
+	PTR_L \tmp, 0(\dst)
 	la    \dst, \sym
 	add   \dst, \dst, \tmp
 .endm
@@ -106,7 +111,7 @@
 .macro load_per_cpu dst ptr tmp
 	asm_per_cpu \dst \ptr \tmp
-	REG_L \dst, 0(\dst)
+	PTR_L \dst, 0(\dst)
 .endm
 
 #ifdef CONFIG_SHADOW_CALL_STACK
diff --git a/arch/riscv/include/asm/scs.h b/arch/riscv/include/asm/scs.h
index 0e45db78b24b..30929afb4e1a 100644
--- a/arch/riscv/include/asm/scs.h
+++ b/arch/riscv/include/asm/scs.h
@@ -20,7 +20,7 @@
 
 /* Load task_scs_sp(current) to gp. */
 .macro scs_load_current
-	REG_L	gp, TASK_TI_SCS_SP(tp)
+	PTR_L	gp, TASK_TI_SCS_SP(tp)
 .endm
 
 /* Load task_scs_sp(current) to gp, but only if tp has changed. */
@@ -32,7 +32,7 @@
 
 /* Save gp to task_scs_sp(current). */
 .macro scs_save_current
-	REG_S	gp, TASK_TI_SCS_SP(tp)
+	PTR_S	gp, TASK_TI_SCS_SP(tp)
 .endm
 
 #else /* CONFIG_SHADOW_CALL_STACK */
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 33a5a9f2a0d4..2cf36e3ab6b9 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -117,19 +117,19 @@ SYM_CODE_START(handle_exception)
 	new_vmalloc_check
 #endif
 
-	REG_S sp, TASK_TI_KERNEL_SP(tp)
+	PTR_S sp, TASK_TI_KERNEL_SP(tp)
 
 #ifdef CONFIG_VMAP_STACK
 	addi sp, sp, -(PT_SIZE_ON_STACK)
 	srli sp, sp, THREAD_SHIFT
 	andi sp, sp, 0x1
 	bnez sp, handle_kernel_stack_overflow
-	REG_L sp, TASK_TI_KERNEL_SP(tp)
+	PTR_L sp, TASK_TI_KERNEL_SP(tp)
 #endif
 
 .Lsave_context:
-	REG_S sp, TASK_TI_USER_SP(tp)
-	REG_L sp, TASK_TI_KERNEL_SP(tp)
+	PTR_S sp, TASK_TI_USER_SP(tp)
+	PTR_L sp, TASK_TI_KERNEL_SP(tp)
 	addi sp, sp, -(PT_SIZE_ON_STACK)
 	REG_S x1, PT_RA(sp)
 	REG_S x3, PT_GP(sp)
@@ -145,7 +145,7 @@ SYM_CODE_START(handle_exception)
 	 */
 	li t0, SR_SUM | SR_FS_VS
 
-	REG_L s0, TASK_TI_USER_SP(tp)
+	PTR_L s0, TASK_TI_USER_SP(tp)
 	csrrc s1, CSR_STATUS, t0
 	csrr s2, CSR_EPC
 	csrr s3, CSR_TVAL
@@ -193,7 +193,7 @@ SYM_CODE_START(handle_exception)
 	add t0, t1, t0
 	/* Check if exception code lies within bounds */
 	bgeu t0, t2, 3f
-	REG_L t1, 0(t0)
+	PTR_L t1, 0(t0)
2:	jalr t1
	j ret_from_exception
3:
@@ -226,7 +226,7 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
 
 	/* Save unwound kernel stack pointer in thread_info */
 	addi s0, sp, PT_SIZE_ON_STACK
-	REG_S s0, TASK_TI_KERNEL_SP(tp)
+	PTR_S s0, TASK_TI_KERNEL_SP(tp)
 
 	/* Save the kernel shadow call stack pointer */
 	scs_save_current
@@ -301,7 +301,7 @@ SYM_CODE_START_LOCAL(handle_kernel_stack_overflow)
 	REG_S x5, PT_T0(sp)
 	save_from_x6_to_x31
 
-	REG_L s0, TASK_TI_KERNEL_SP(tp)
+	PTR_L s0, TASK_TI_KERNEL_SP(tp)
 	csrr s1, CSR_STATUS
 	csrr s2, CSR_EPC
 	csrr s3, CSR_TVAL
@@ -341,8 +341,8 @@ SYM_CODE_END(ret_from_fork)
 SYM_FUNC_START(call_on_irq_stack)
 	/* Create a frame record to save ra and s0 (fp) */
 	addi sp, sp, -STACKFRAME_SIZE_ON_STACK
-	REG_S ra, STACKFRAME_RA(sp)
-	REG_S s0, STACKFRAME_FP(sp)
+	PTR_S ra, STACKFRAME_RA(sp)
+	PTR_S s0, STACKFRAME_FP(sp)
 	addi s0, sp, STACKFRAME_SIZE_ON_STACK
 
 	/* Switch to the per-CPU shadow call stack */
@@ -360,8 +360,8 @@ SYM_FUNC_START(call_on_irq_stack)
 	/* Switch back to the thread stack and restore ra and s0 */
 	addi sp, s0, -STACKFRAME_SIZE_ON_STACK
-	REG_L ra, STACKFRAME_RA(sp)
-	REG_L s0, STACKFRAME_FP(sp)
+	PTR_L ra, STACKFRAME_RA(sp)
+	PTR_L s0, STACKFRAME_FP(sp)
 	addi sp, sp, STACKFRAME_SIZE_ON_STACK
 	ret
@@ -383,8 +383,8 @@ SYM_FUNC_START(__switch_to)
 	li    a4, TASK_THREAD_RA
 	add   a3, a0, a4
 	add   a4, a1, a4
-	REG_S ra, TASK_THREAD_RA_RA(a3)
-	REG_S sp, TASK_THREAD_SP_RA(a3)
+	PTR_S ra, TASK_THREAD_RA_RA(a3)
+	PTR_S sp, TASK_THREAD_SP_RA(a3)
 	REG_S s0, TASK_THREAD_S0_RA(a3)
 	REG_S s1, TASK_THREAD_S1_RA(a3)
 	REG_S s2, TASK_THREAD_S2_RA(a3)
@@ -400,8 +400,8 @@ SYM_FUNC_START(__switch_to)
 	/* Save the kernel shadow call stack pointer */
 	scs_save_current
 	/* Restore context from next->thread */
-	REG_L ra, TASK_THREAD_RA_RA(a4)
-	REG_L sp, TASK_THREAD_SP_RA(a4)
+	PTR_L ra, TASK_THREAD_RA_RA(a4)
+	PTR_L sp, TASK_THREAD_SP_RA(a4)
 	REG_L s0, TASK_THREAD_S0_RA(a4)
 	REG_L s1, TASK_THREAD_S1_RA(a4)
 	REG_L s2, TASK_THREAD_S2_RA(a4)
diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
index 356d5397b2a2..e55a92be12b1 100644
--- a/arch/riscv/kernel/head.S
+++ b/arch/riscv/kernel/head.S
@@ -42,7 +42,7 @@ SYM_CODE_START(_start)
 	/* Image load offset (0MB) from start of RAM for M-mode */
 	.dword 0
 #else
-#if __riscv_xlen == 64
+#ifdef CONFIG_64BIT
 	/* Image load offset(2MB) from start of RAM */
 	.dword 0x200000
 #else
@@ -75,7 +75,7 @@ relocate_enable_mmu:
 	/* Relocate return address */
 	la a1, kernel_map
 	XIP_FIXUP_OFFSET a1
-	REG_L a1, KERNEL_MAP_VIRT_ADDR(a1)
+	PTR_L a1, KERNEL_MAP_VIRT_ADDR(a1)
 	la a2, _start
 	sub a1, a1, a2
 	add ra, ra, a1
@@ -349,8 +349,8 @@ SYM_CODE_START(_start_kernel)
 	 */
.Lwait_for_cpu_up:
 	/* FIXME: We should WFI to save some energy here. */
-	REG_L sp, (a1)
-	REG_L tp, (a2)
+	PTR_L sp, (a1)
+	PTR_L tp, (a2)
 	beqz sp, .Lwait_for_cpu_up
 	beqz tp, .Lwait_for_cpu_up
 	fence

From patchwork Tue Mar 25 12:15:53 2025
Subject: [RFC PATCH V3 12/43] rv64ilp32_abi: riscv: Introduce cmpxchg_double
Date: Tue, 25 Mar 2025 08:15:53 -0400
Message-Id: <20250325121624.523258-13-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The rv64ilp32 ABI has the ability to exclusively load and store (ld/sd) a pair of words from an address. The SLUB can then take advantage of a cmpxchg_double implementation to avoid taking some locks.

This patch provides an implementation of cmpxchg_double for 32-bit pairs, and activates the logic required for the SLUB to use these functions (HAVE_ALIGNED_STRUCT_PAGE and HAVE_CMPXCHG_DOUBLE).
Inspired by commit 5284e1b4bc8a ("arm64: xchg: Implement cmpxchg_double").

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/Kconfig               |  1 +
 arch/riscv/include/asm/cmpxchg.h | 53 ++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index da2111b0111c..884235cf4092 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -141,6 +141,7 @@ config RISCV
 	select HAVE_ARCH_USERFAULTFD_MINOR if 64BIT && USERFAULTFD
 	select HAVE_ARCH_VMAP_STACK if MMU && 64BIT
 	select HAVE_ASM_MODVERSIONS
+	select HAVE_CMPXCHG_DOUBLE if ABI_RV64ILP32
 	select HAVE_CONTEXT_TRACKING_USER
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_CONTIGUOUS if MMU
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 938d50194dba..944f6d825f78 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -7,6 +7,7 @@
 #define _ASM_RISCV_CMPXCHG_H
 
 #include
+#include
 #include
 #include
 
@@ -409,6 +410,58 @@ static __always_inline void __cmpwait(volatile void *ptr,
 #define __cmpwait_relaxed(ptr, val) \
 	__cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))
 
+#ifdef CONFIG_HAVE_CMPXCHG_DOUBLE
+#define system_has_cmpxchg_double()	1
+
+#define __cmpxchg_double_check(ptr1, ptr2)				\
+({									\
+	if (sizeof(*(ptr1)) != 4)					\
+		BUILD_BUG();						\
+	if (sizeof(*(ptr2)) != 4)					\
+		BUILD_BUG();						\
+	VM_BUG_ON((ulong *)(ptr2) - (ulong *)(ptr1) != 1);		\
+	VM_BUG_ON(((ulong)ptr1 & 0x7) != 0);				\
+})
+
+#define __cmpxchg_double(old1, old2, new1, new2, ptr)			\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	register unsigned int __ret;					\
+	u64 __old;							\
+	u64 __new;							\
+	u64 __tmp;							\
+	switch (sizeof(*(ptr))) {					\
+	case 4:								\
+		__old = ((u64)old2 << 32) | (u64)old1;			\
+		__new = ((u64)new2 << 32) | (u64)new1;			\
+		__asm__ __volatile__ (					\
+			"0:	lr.d %0, %2\n"				\
+			"	bne  %0, %z3, 1f\n"			\
+			"	sc.d %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"1:\n"						\
+			: "=&r" (__tmp), "=&r" (__ret), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		__ret = (__old == __tmp);				\
+		break;							\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+	__ret;								\
+})
+
+#define arch_cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2)			\
+({									\
+	int __ret;							\
+	__cmpxchg_double_check(ptr1, ptr2);				\
+	__ret = __cmpxchg_double((ulong)(o1), (ulong)(o2),		\
+				 (ulong)(n1), (ulong)(n2),		\
+				 ptr1);					\
+	__ret;								\
+})
+#endif
 #endif
 
 #endif /* _ASM_RISCV_CMPXCHG_H */

From patchwork Tue Mar 25 12:15:54 2025

Subject: [RFC PATCH V3 13/43] rv64ilp32_abi: riscv: Correct stackframe layout
Date: Tue, 25 Mar 2025 08:15:54 -0400
Message-Id: <20250325121624.523258-14-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

In the RV64ILP32 ABI, the callee-saved fp & ra are 64 bits wide, not long-sized. This patch corrects the layout of struct stackframe.

echo c > /proc/sysrq-trigger

Before the patch:

sysrq: Trigger a crash
Kernel panic - not syncing: sysrq triggered crash
CPU: 0 PID: 102 Comm: sh Not tainted ...
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
---[ end Kernel panic - not syncing: sysrq triggered crash ]---

After the patch:

sysrq: Trigger a crash
Kernel panic - not syncing: sysrq triggered crash
CPU: 0 PID: 102 Comm: sh Not tainted ...
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
[] dump_backtrace+0x1e/0x26
[] show_stack+0x2e/0x3c
[] dump_stack_lvl+0x40/0x5a
[] dump_stack+0x16/0x1e
[] panic+0x10c/0x2a8
[] sysrq_reset_seq_param_set+0x0/0x76
[] __handle_sysrq+0x9c/0x19c
[] write_sysrq_trigger+0x64/0x78
[] proc_reg_write+0x4a/0xa2
[] vfs_write+0xac/0x308
[] ksys_write+0x62/0xda
[] sys_write+0xe/0x16
[] do_trap_ecall_u+0xd8/0xda
[] ret_from_exception+0x0/0x66
---[ end Kernel panic - not syncing: sysrq triggered crash ]---

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/stacktrace.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/riscv/include/asm/stacktrace.h b/arch/riscv/include/asm/stacktrace.h
index b1495a7e06ce..556655cab09d 100644
--- a/arch/riscv/include/asm/stacktrace.h
+++ b/arch/riscv/include/asm/stacktrace.h
@@ -8,7 +8,13 @@
 
 struct stackframe {
 	unsigned long fp;
+#if IS_ENABLED(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+	unsigned long __fp;
+#endif
 	unsigned long ra;
+#if IS_ENABLED(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+	unsigned long __ra;
+#endif
 };
 
 extern void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,

From patchwork Tue Mar 25 12:15:55 2025

Subject: [RFC PATCH V3 14/43] rv64ilp32_abi: riscv: Adapt kernel module code
Date: Tue, 25 Mar 2025 08:15:55 -0400
Message-Id: <20250325121624.523258-15-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

Because riscv_insn_valid_32bit_offset() is always true for ILP32, use BITS_PER_LONG instead of CONFIG_64BIT.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/kernel/module.c   | 2 +-
 include/asm-generic/module.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
index 47d0ebeec93c..d7360878e618 100644
--- a/arch/riscv/kernel/module.c
+++ b/arch/riscv/kernel/module.c
@@ -45,7 +45,7 @@ struct relocation_handlers {
  */
 static bool riscv_insn_valid_32bit_offset(ptrdiff_t val)
 {
-#ifdef CONFIG_32BIT
+#if BITS_PER_LONG == 32
 	return true;
 #else
 	return (-(1L << 31) - (1L << 11)) <= val && val < ((1L << 31) - (1L << 11));
diff --git a/include/asm-generic/module.h b/include/asm-generic/module.h
index 98e1541b72b7..f870171b14a8 100644
--- a/include/asm-generic/module.h
+++ b/include/asm-generic/module.h
@@ -12,7 +12,7 @@ struct mod_arch_specific
 };
 #endif
 
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 #define Elf_Shdr Elf64_Shdr
 #define Elf_Phdr Elf64_Phdr
 #define Elf_Sym Elf64_Sym

From patchwork Tue Mar 25 12:15:56 2025

Subject: [RFC PATCH V3 15/43] rv64ilp32_abi: riscv: mm: Adapt MMU_SV39 for 2GiB address space
Date: Tue, 25 Mar 2025 08:15:56 -0400
Message-Id: <20250325121624.523258-16-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI has two independent 2GiB address spaces for kernel and user. There is no Sv32 MMU mode support in the xlen=64 ISA, so this commit enables MMU_SV39 for RV64ILP32 to satisfy the user & kernel 2GiB mapping requirements. Sv39 is the mandatory MMU mode when rv64 satp != bare, so we needn't care about Sv48 & Sv57.
2GiB virtual userspace memory layout (u64lp64 ABI):

55555000-5560c000 r-xp 00000000 fe:00 17    /bin/busybox
5560c000-5560f000 r--p 000b7000 fe:00 17    /bin/busybox
5560f000-55610000 rw-p 000ba000 fe:00 17    /bin/busybox
55610000-55631000 rw-p 00000000 00:00 0     [heap]
77e69000-77e6b000 rw-p 00000000 00:00 0
77e6b000-77fba000 r-xp 00000000 fe:00 140   /lib/libc.so.6
77fba000-77fbd000 r--p 0014f000 fe:00 140   /lib/libc.so.6
77fbd000-77fbf000 rw-p 00152000 fe:00 140   /lib/libc.so.6
77fbf000-77fcb000 rw-p 00000000 00:00 0
77fcb000-77fd5000 r-xp 00000000 fe:00 148   /lib/libresolv.so.2
77fd5000-77fd6000 r--p 0000a000 fe:00 148   /lib/libresolv.so.2
77fd6000-77fd7000 rw-p 0000b000 fe:00 148   /lib/libresolv.so.2
77fd7000-77fd9000 rw-p 00000000 00:00 0
77fd9000-77fdb000 r--p 00000000 00:00 0     [vvar]
77fdb000-77fdc000 r-xp 00000000 00:00 0     [vdso]
77fdc000-77ffc000 r-xp 00000000 fe:00 135   /lib/ld-linux-riscv64-lp64d.so.1
77ffc000-77ffe000 r--p 0001f000 fe:00 135   /lib/ld-linux-riscv64-lp64d.so.1
77ffe000-78000000 rw-p 00021000 fe:00 135   /lib/ld-linux-riscv64-lp64d.so.1
7ffdf000-80000000 rw-p 00000000 00:00 0     [stack]

2GiB virtual kernel memory layout:

 fixmap  : 0x90a00000 - 0x90ffffff (6144 kB)
 pci io  : 0x91000000 - 0x91ffffff (  16 MB)
 vmemmap : 0x92000000 - 0x93ffffff (  32 MB)
 vmalloc : 0x94000000 - 0xb3ffffff ( 512 MB)
 modules : 0xb4000000 - 0xb7ffffff (  64 MB)
 lowmem  : 0xc0000000 - 0xc7ffffff ( 128 MB)
 kasan   : 0x80000000 - 0x8fffffff ( 256 MB)
 kernel  : 0xb8000000 - 0xbfffffff ( 128 MB)

For satp=sv39, introduce a double mapping to make the sign-extended virtual address identical to the zero-extended virtual address:

+--------+      +---------+      +--------+
|        |  +---| 511:PUD1|      |        |
|        |  |   +---------+      |        |
|        |  |   | 510:PUD0|---+  |        |
|        |  |   +---------+   |  |        |
|        |  |   |         |   |  |        |
|        |  |   |         |   |  |        |
|        |  |   | INVALID |   |  |        |
|        |  |   |         |   |  |        |
|  ....  |  |   |         |   |  |  ....  |
|        |  |   |         |   |  |        |
|        |  |   +---------+   |  |        |
|        |  +---| 3:PUD1  |   |  |        |
|        |      +---------+   |  |        |
|        |      | 2:PUD0 |----+  |        |
|        |      +---------+      |        |
|        |      |1:USR_PUD|      |        |
|        |      +---------+      |        |
|        |      |0:USR_PUD|      |        |
+--------+<--+  +---------+  +-->+--------+
   PUD1      ^      PGD            PUD0
   1GB       |      4GB            1GB
       +----------+
       | Sv39 PGDP|
       +----------+
           SATP

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/Kconfig                  |  2 +-
 arch/riscv/include/asm/page.h       | 23 ++++++-----
 arch/riscv/include/asm/pgtable-64.h | 55 ++++++++++++++------------
 arch/riscv/include/asm/pgtable.h    | 60 ++++++++++++++++++++++++-----
 arch/riscv/include/asm/processor.h  |  2 +-
 arch/riscv/kernel/cpu.c             |  4 +-
 arch/riscv/mm/fault.c               | 10 ++---
 arch/riscv/mm/init.c                | 55 ++++++++++++++++++--------
 arch/riscv/mm/pageattr.c            |  4 +-
 arch/riscv/mm/pgtable.c             |  2 +-
 10 files changed, 145 insertions(+), 72 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 884235cf4092..9469cdc51ba4 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -293,7 +293,7 @@ config PAGE_OFFSET
 	hex
 	default 0x80000000 if !MMU && RISCV_M_MODE
 	default 0x80200000 if !MMU
-	default 0xc0000000 if 32BIT
+	default 0xc0000000 if 32BIT || ABI_RV64ILP32
 	default 0xff60000000000000 if 64BIT
 
 config KASAN_SHADOW_OFFSET
diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
index 125f5ecd9565..45091a9de0d4 100644
--- a/arch/riscv/include/asm/page.h
+++ b/arch/riscv/include/asm/page.h
@@ -24,7 +24,7 @@
  * When not using MMU this corresponds to the first free page in
  * physical memory (aligned on a page boundary).
  */
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 #ifdef CONFIG_MMU
 #define PAGE_OFFSET kernel_map.page_offset
 #else
@@ -38,7 +38,7 @@
 #define PAGE_OFFSET_L3 _AC(0xffffffd600000000, UL)
 #else
 #define PAGE_OFFSET _AC(CONFIG_PAGE_OFFSET, UL)
-#endif /* CONFIG_64BIT */
+#endif /* BITS_PER_LONG == 64 */
 
 #ifndef __ASSEMBLY__
 
@@ -56,19 +56,24 @@ void clear_page(void *page);
 /*
  * Use struct definitions to apply C type checking
  */
+#if CONFIG_PGTABLE_LEVELS > 2
+typedef u64 ptval_t;
+#else
+typedef ulong ptval_t;
+#endif
 
 /* Page Global Directory entry */
 typedef struct {
-	unsigned long pgd;
+	ptval_t pgd;
 } pgd_t;
 
 /* Page Table entry */
 typedef struct {
-	unsigned long pte;
+	ptval_t pte;
 } pte_t;
 
 typedef struct {
-	unsigned long pgprot;
+	ptval_t pgprot;
 } pgprot_t;
 
 typedef struct page *pgtable_t;
@@ -81,13 +86,13 @@
 #define __pgd(x) ((pgd_t) { (x) })
 #define __pgprot(x) ((pgprot_t) { (x) })
 
-#ifdef CONFIG_64BIT
-#define PTE_FMT "%016lx"
+#if CONFIG_PGTABLE_LEVELS > 2
+#define PTE_FMT "%016llx"
 #else
 #define PTE_FMT "%08lx"
 #endif
 
-#if defined(CONFIG_64BIT) && defined(CONFIG_MMU)
+#if (CONFIG_PGTABLE_LEVELS > 2) && defined(CONFIG_MMU)
 /*
  * We override this value as its generic definition uses __pa too early in
  * the boot process (before kernel_map.va_pa_offset is set.
@@ -128,7 +133,7 @@ extern unsigned long vmemmap_start_pfn;
 	((x) >= kernel_map.virt_addr && (x) < (kernel_map.virt_addr + kernel_map.size))
 
 #define is_linear_mapping(x) \
-	((x) >= PAGE_OFFSET && (!IS_ENABLED(CONFIG_64BIT) || (x) < PAGE_OFFSET + KERN_VIRT_SIZE))
+	((x) >= PAGE_OFFSET && ((BITS_PER_LONG == 32) || (x) < PAGE_OFFSET + KERN_VIRT_SIZE))
 
 #ifndef CONFIG_DEBUG_VIRTUAL
 #define linear_mapping_pa_to_va(x) ((void *)((unsigned long)(x) + kernel_map.va_pa_offset))
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 0897dd99ab8d..401c012d0b66 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -19,7 +19,12 @@ extern bool pgtable_l5_enabled;
 #define PGDIR_SHIFT (pgtable_l5_enabled ? PGDIR_SHIFT_L5 : \
 		(pgtable_l4_enabled ? PGDIR_SHIFT_L4 : PGDIR_SHIFT_L3))
 /* Size of region mapped by a page global directory */
+#if BITS_PER_LONG == 64
 #define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT)
+#else
+#define PGDIR_SIZE (_AC(1, ULL) << PGDIR_SHIFT)
+#endif
+
 #define PGDIR_MASK (~(PGDIR_SIZE - 1))
 
 /* p4d is folded into pgd in case of 4-level page table */
@@ -28,7 +33,7 @@ extern bool pgtable_l5_enabled;
 #define P4D_SHIFT_L5 39
 #define P4D_SHIFT (pgtable_l5_enabled ? P4D_SHIFT_L5 : \
 		(pgtable_l4_enabled ? P4D_SHIFT_L4 : P4D_SHIFT_L3))
-#define P4D_SIZE (_AC(1, UL) << P4D_SHIFT)
+#define P4D_SIZE (_AC(1, ULL) << P4D_SHIFT)
 #define P4D_MASK (~(P4D_SIZE - 1))
 
 /* pud is folded into pgd in case of 3-level page table */
@@ -43,7 +48,7 @@ extern bool pgtable_l5_enabled;
 
 /* Page 4th Directory entry */
 typedef struct {
-	unsigned long p4d;
+	u64 p4d;
 } p4d_t;
 
 #define p4d_val(x) ((x).p4d)
@@ -52,7 +57,7 @@ typedef struct {
 
 /* Page Upper Directory entry */
 typedef struct {
-	unsigned long pud;
+	u64 pud;
 } pud_t;
 
 #define pud_val(x) ((x).pud)
@@ -61,7 +66,7 @@ typedef struct {
 
 /* Page Middle Directory entry */
 typedef struct {
-	unsigned long pmd;
+	u64 pmd;
 } pmd_t;
 
 #define pmd_val(x) ((x).pmd)
@@ -74,7 +79,7 @@ typedef struct {
 * | 63 | 62 61 | 60 54 | 53 10 | 9 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
 *   N     MT     RSV     PFN    reserved for SW   D A G U X W R V
 */
-#define _PAGE_PFN_MASK  GENMASK(53, 10)
+#define _PAGE_PFN_MASK  GENMASK_ULL(53, 10)
 
 /*
  * [63] Svnapot definitions:
@@ -82,7 +87,7 @@ typedef struct {
  * 1 Svnapot enabled
  */
 #define _PAGE_NAPOT_SHIFT	63
-#define _PAGE_NAPOT		BIT(_PAGE_NAPOT_SHIFT)
+#define _PAGE_NAPOT		BIT_ULL(_PAGE_NAPOT_SHIFT)
 
 /*
  * Only 64KB (order 4) napot ptes supported.
*/ @@ -100,9 +105,9 @@ enum napot_cont_order { #define napot_cont_order(val) (__builtin_ctzl((val.pte >> _PAGE_PFN_SHIFT) << 1)) #define napot_cont_shift(order) ((order) + PAGE_SHIFT) -#define napot_cont_size(order) BIT(napot_cont_shift(order)) +#define napot_cont_size(order) BIT_ULL(napot_cont_shift(order)) #define napot_cont_mask(order) (~(napot_cont_size(order) - 1UL)) -#define napot_pte_num(order) BIT(order) +#define napot_pte_num(order) BIT_ULL(order) #ifdef CONFIG_RISCV_ISA_SVNAPOT #define HUGE_MAX_HSTATE (2 + (NAPOT_ORDER_MAX - NAPOT_CONT_ORDER_BASE)) @@ -118,8 +123,8 @@ enum napot_cont_order { * 10 - IO Non-cacheable, non-idempotent, strongly-ordered I/O memory * 11 - Rsvd Reserved for future standard use */ -#define _PAGE_NOCACHE_SVPBMT (1UL << 61) -#define _PAGE_IO_SVPBMT (1UL << 62) +#define _PAGE_NOCACHE_SVPBMT (1ULL << 61) +#define _PAGE_IO_SVPBMT (1ULL << 62) #define _PAGE_MTMASK_SVPBMT (_PAGE_NOCACHE_SVPBMT | _PAGE_IO_SVPBMT) /* @@ -133,10 +138,10 @@ enum napot_cont_order { * 01110 - PMA Weakly-ordered, Cacheable, Bufferable, Shareable, Non-trustable * 10010 - IO Strongly-ordered, Non-cacheable, Non-bufferable, Shareable, Non-trustable */ -#define _PAGE_PMA_THEAD ((1UL << 62) | (1UL << 61) | (1UL << 60)) -#define _PAGE_NOCACHE_THEAD ((1UL << 61) | (1UL << 60)) -#define _PAGE_IO_THEAD ((1UL << 63) | (1UL << 60)) -#define _PAGE_MTMASK_THEAD (_PAGE_PMA_THEAD | _PAGE_IO_THEAD | (1UL << 59)) +#define _PAGE_PMA_THEAD ((1ULL << 62) | (1ULL << 61) | (1ULL << 60)) +#define _PAGE_NOCACHE_THEAD ((1ULL << 61) | (1ULL << 60)) +#define _PAGE_IO_THEAD ((1ULL << 63) | (1ULL << 60)) +#define _PAGE_MTMASK_THEAD (_PAGE_PMA_THEAD | _PAGE_IO_THEAD | (1ULL << 59)) static inline u64 riscv_page_mtmask(void) { @@ -167,7 +172,7 @@ static inline u64 riscv_page_io(void) #define _PAGE_MTMASK riscv_page_mtmask() /* Set of bits to preserve across pte_modify() */ -#define _PAGE_CHG_MASK (~(unsigned long)(_PAGE_PRESENT | _PAGE_READ | \ +#define _PAGE_CHG_MASK (~(u64)(_PAGE_PRESENT | 
_PAGE_READ | \ _PAGE_WRITE | _PAGE_EXEC | \ _PAGE_USER | _PAGE_GLOBAL | \ _PAGE_MTMASK)) @@ -208,12 +213,12 @@ static inline void pud_clear(pud_t *pudp) set_pud(pudp, __pud(0)); } -static inline pud_t pfn_pud(unsigned long pfn, pgprot_t prot) +static inline pud_t pfn_pud(u64 pfn, pgprot_t prot) { return __pud((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot)); } -static inline unsigned long _pud_pfn(pud_t pud) +static inline u64 _pud_pfn(pud_t pud) { return __page_val_to_pfn(pud_val(pud)); } @@ -248,16 +253,16 @@ static inline bool mm_pud_folded(struct mm_struct *mm) #define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1)) -static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot) +static inline pmd_t pfn_pmd(u64 pfn, pgprot_t prot) { - unsigned long prot_val = pgprot_val(prot); + u64 prot_val = pgprot_val(prot); ALT_THEAD_PMA(prot_val); return __pmd((pfn << _PAGE_PFN_SHIFT) | prot_val); } -static inline unsigned long _pmd_pfn(pmd_t pmd) +static inline u64 _pmd_pfn(pmd_t pmd) { return __page_val_to_pfn(pmd_val(pmd)); } @@ -265,13 +270,13 @@ static inline unsigned long _pmd_pfn(pmd_t pmd) #define mk_pmd(page, prot) pfn_pmd(page_to_pfn(page), prot) #define pmd_ERROR(e) \ - pr_err("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e)) + pr_err("%s:%d: bad pmd " PTE_FMT ".\n", __FILE__, __LINE__, pmd_val(e)) #define pud_ERROR(e) \ - pr_err("%s:%d: bad pud %016lx.\n", __FILE__, __LINE__, pud_val(e)) + pr_err("%s:%d: bad pud " PTE_FMT ".\n", __FILE__, __LINE__, pud_val(e)) #define p4d_ERROR(e) \ - pr_err("%s:%d: bad p4d %016lx.\n", __FILE__, __LINE__, p4d_val(e)) + pr_err("%s:%d: bad p4d " PTE_FMT ".\n", __FILE__, __LINE__, p4d_val(e)) static inline void set_p4d(p4d_t *p4dp, p4d_t p4d) { @@ -311,12 +316,12 @@ static inline void p4d_clear(p4d_t *p4d) set_p4d(p4d, __p4d(0)); } -static inline p4d_t pfn_p4d(unsigned long pfn, pgprot_t prot) +static inline p4d_t pfn_p4d(u64 pfn, pgprot_t prot) { return __p4d((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot)); } 
-static inline unsigned long _p4d_pfn(p4d_t p4d) +static inline u64 _p4d_pfn(p4d_t p4d) { return __page_val_to_pfn(p4d_val(p4d)); } diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index 050fdc49b5ad..5f1b48cb3311 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -9,6 +9,7 @@ #include #include +#include #include #ifndef CONFIG_MMU @@ -19,8 +20,13 @@ #define ADDRESS_SPACE_END (UL(-1)) #ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 /* Leave 2GB for kernel and BPF at the end of the address space */ #define KERNEL_LINK_ADDR (ADDRESS_SPACE_END - SZ_2G + 1) +#elif BITS_PER_LONG == 32 +/* Leave 64MB for kernel and BPF below PAGE_OFFSET */ +#define KERNEL_LINK_ADDR (PAGE_OFFSET - SZ_64M) +#endif #else #define KERNEL_LINK_ADDR PAGE_OFFSET #endif @@ -34,31 +40,45 @@ * Half of the kernel address space (1/4 of the entries of the page global * directory) is for the direct mapping. */ +#if (BITS_PER_LONG == 32) && (CONFIG_PGTABLE_LEVELS > 2) +#define KERN_VIRT_SIZE (PTRS_PER_PGD * PMD_SIZE) +#else #define KERN_VIRT_SIZE ((PTRS_PER_PGD / 2 * PGDIR_SIZE) / 2) +#endif #define VMALLOC_SIZE (KERN_VIRT_SIZE >> 1) +#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32) +#define VMALLOC_END MODULES_LOWEST_VADDR +#else #define VMALLOC_END PAGE_OFFSET -#define VMALLOC_START (PAGE_OFFSET - VMALLOC_SIZE) +#endif +#define VMALLOC_START (VMALLOC_END - VMALLOC_SIZE) #define BPF_JIT_REGION_SIZE (SZ_128M) -#ifdef CONFIG_64BIT #define BPF_JIT_REGION_START (BPF_JIT_REGION_END - BPF_JIT_REGION_SIZE) +#if BITS_PER_LONG == 64 #define BPF_JIT_REGION_END (MODULES_END) #else -#define BPF_JIT_REGION_START (PAGE_OFFSET - BPF_JIT_REGION_SIZE) #define BPF_JIT_REGION_END (VMALLOC_END) #endif /* Modules always live before the kernel */ -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 /* This is used to define the end of the KASAN shadow region */ #define MODULES_LOWEST_VADDR (KERNEL_LINK_ADDR - SZ_2G) #define MODULES_VADDR (PFN_ALIGN((unsigned 
long)&_end) - SZ_2G) #define MODULES_END (PFN_ALIGN((unsigned long)&_start)) #else +#ifdef CONFIG_64BIT +#define MODULES_LOWEST_VADDR (KERNEL_LINK_ADDR - SZ_64M) +#define MODULES_VADDR MODULES_LOWEST_VADDR +#define MODULES_END KERNEL_LINK_ADDR +#else +#define MODULES_LOWEST_VADDR VMALLOC_START #define MODULES_VADDR VMALLOC_START #define MODULES_END VMALLOC_END #endif +#endif /* * Roughly size the vmemmap space to be large enough to fit enough @@ -66,7 +86,7 @@ * position vmemmap directly below the VMALLOC region. */ #define VA_BITS_SV32 32 -#ifdef CONFIG_64BIT +#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 64) #define VA_BITS_SV39 39 #define VA_BITS_SV48 48 #define VA_BITS_SV57 57 @@ -126,9 +146,14 @@ #define MMAP_VA_BITS_64 ((VA_BITS >= VA_BITS_SV48) ? VA_BITS_SV48 : VA_BITS) #define MMAP_MIN_VA_BITS_64 (VA_BITS_SV39) +#if BITS_PER_LONG == 64 #define MMAP_VA_BITS (is_compat_task() ? VA_BITS_SV32 : MMAP_VA_BITS_64) #define MMAP_MIN_VA_BITS (is_compat_task() ? VA_BITS_SV32 : MMAP_MIN_VA_BITS_64) #else +#define MMAP_VA_BITS VA_BITS_SV32 +#define MMAP_MIN_VA_BITS VA_BITS_SV32 +#endif +#else #include #endif /* CONFIG_64BIT */ @@ -252,7 +277,7 @@ static inline void pmd_clear(pmd_t *pmdp) static inline pgd_t pfn_pgd(unsigned long pfn, pgprot_t prot) { - unsigned long prot_val = pgprot_val(prot); + ptval_t prot_val = pgprot_val(prot); ALT_THEAD_PMA(prot_val); @@ -591,7 +616,11 @@ extern int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long a static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long address, pte_t *ptep) { +#if CONFIG_PGTABLE_LEVELS > 2 + pte_t pte = __pte(atomic64_xchg((atomic64_t *)ptep, 0)); +#else pte_t pte = __pte(atomic_long_xchg((atomic_long_t *)ptep, 0)); +#endif page_table_check_pte_clear(mm, pte); @@ -602,7 +631,11 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long address, pte_t *ptep) { +#if CONFIG_PGTABLE_LEVELS > 2 + 
atomic64_and(~(u64)_PAGE_WRITE, (atomic64_t *)ptep); +#else atomic_long_and(~(unsigned long)_PAGE_WRITE, (atomic_long_t *)ptep); +#endif } #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH @@ -636,7 +669,7 @@ static inline pgprot_t pgprot_nx(pgprot_t _prot) #define pgprot_noncached pgprot_noncached static inline pgprot_t pgprot_noncached(pgprot_t _prot) { - unsigned long prot = pgprot_val(_prot); + ptval_t prot = pgprot_val(_prot); prot &= ~_PAGE_MTMASK; prot |= _PAGE_IO; @@ -647,7 +680,7 @@ static inline pgprot_t pgprot_noncached(pgprot_t _prot) #define pgprot_writecombine pgprot_writecombine static inline pgprot_t pgprot_writecombine(pgprot_t _prot) { - unsigned long prot = pgprot_val(_prot); + ptval_t prot = pgprot_val(_prot); prot &= ~_PAGE_MTMASK; prot |= _PAGE_NOCACHE; @@ -905,8 +938,12 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) * and give the kernel the other (upper) half. */ #ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 #define KERN_VIRT_START (-(BIT(VA_BITS)) + TASK_SIZE) #else +#define KERN_VIRT_START TASK_SIZE_32 +#endif +#else #define KERN_VIRT_START FIXADDR_START #endif @@ -915,6 +952,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) * Note that PGDIR_SIZE must evenly divide TASK_SIZE. * Task size is: * - 0x9fc00000 (~2.5GB) for RV32. + * - 0x80000000 ( 2GB) for RV32_COMPAT & RV64ILP32 * - 0x4000000000 ( 256GB) for RV64 using SV39 mmu * - 0x800000000000 ( 128TB) for RV64 using SV48 mmu * - 0x100000000000000 ( 64PB) for RV64 using SV57 mmu @@ -928,15 +966,19 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) #ifdef CONFIG_64BIT #define TASK_SIZE_64 (PGDIR_SIZE * PTRS_PER_PGD / 2) #define TASK_SIZE_MAX LONG_MAX +#define TASK_SIZE_32 _AC(0x80000000, UL) +#if BITS_PER_LONG == 64 #ifdef CONFIG_COMPAT -#define TASK_SIZE_32 (_AC(0x80000000, UL) - PAGE_SIZE) #define TASK_SIZE (is_compat_task() ? 
\ TASK_SIZE_32 : TASK_SIZE_64) #else #define TASK_SIZE TASK_SIZE_64 #endif +#else +#define TASK_SIZE TASK_SIZE_32 +#endif #else #define TASK_SIZE FIXADDR_START #endif diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h index ca57a650c3d2..9f4e0be595fd 100644 --- a/arch/riscv/include/asm/processor.h +++ b/arch/riscv/include/asm/processor.h @@ -24,7 +24,7 @@ base; \ }) -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 #define DEFAULT_MAP_WINDOW (UL(1) << (MMAP_VA_BITS - 1)) #define STACK_TOP_MAX TASK_SIZE_64 #else diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c index f6b13e9f5e6c..ce1440c63606 100644 --- a/arch/riscv/kernel/cpu.c +++ b/arch/riscv/kernel/cpu.c @@ -291,9 +291,9 @@ static void print_mmu(struct seq_file *f) const char *sv_type; #ifdef CONFIG_MMU -#if defined(CONFIG_32BIT) +#if CONFIG_PGTABLE_LEVELS == 2 sv_type = "sv32"; -#elif defined(CONFIG_64BIT) +#else if (pgtable_l5_enabled) sv_type = "sv57"; else if (pgtable_l4_enabled) diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c index fcc23350610e..1e854e9633b3 100644 --- a/arch/riscv/mm/fault.c +++ b/arch/riscv/mm/fault.c @@ -40,25 +40,25 @@ static void show_pte(unsigned long addr) pgdp = pgd_offset(mm, addr); pgd = pgdp_get(pgdp); - pr_alert("[%016lx] pgd=%016lx", addr, pgd_val(pgd)); + pr_alert("[%016lx] pgd=" REG_FMT, addr, pgd_val(pgd)); if (pgd_none(pgd) || pgd_bad(pgd) || pgd_leaf(pgd)) goto out; p4dp = p4d_offset(pgdp, addr); p4d = p4dp_get(p4dp); - pr_cont(", p4d=%016lx", p4d_val(p4d)); + pr_cont(", p4d=" REG_FMT, p4d_val(p4d)); if (p4d_none(p4d) || p4d_bad(p4d) || p4d_leaf(p4d)) goto out; pudp = pud_offset(p4dp, addr); pud = pudp_get(pudp); - pr_cont(", pud=%016lx", pud_val(pud)); + pr_cont(", pud=" REG_FMT, pud_val(pud)); if (pud_none(pud) || pud_bad(pud) || pud_leaf(pud)) goto out; pmdp = pmd_offset(pudp, addr); pmd = pmdp_get(pmdp); - pr_cont(", pmd=%016lx", pmd_val(pmd)); + pr_cont(", pmd=" REG_FMT, pmd_val(pmd)); if (pmd_none(pmd) || 
pmd_bad(pmd) || pmd_leaf(pmd)) goto out; @@ -67,7 +67,7 @@ static void show_pte(unsigned long addr) goto out; pte = ptep_get(ptep); - pr_cont(", pte=%016lx", pte_val(pte)); + pr_cont(", pte=" REG_FMT, pte_val(pte)); pte_unmap(ptep); out: pr_cont("\n"); diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index 15b2eda4c364..3cdbb033860e 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -46,16 +46,20 @@ EXPORT_SYMBOL(kernel_map); #define kernel_map (*(struct kernel_mapping *)XIP_FIXUP(&kernel_map)) #endif -#ifdef CONFIG_64BIT +#if CONFIG_PGTABLE_LEVELS > 2 +#if BITS_PER_LONG == 64 u64 satp_mode __ro_after_init = !IS_ENABLED(CONFIG_XIP_KERNEL) ? SATP_MODE_57 : SATP_MODE_39; #else +u64 satp_mode __ro_after_init = SATP_MODE_39; +#endif +#else u64 satp_mode __ro_after_init = SATP_MODE_32; #endif EXPORT_SYMBOL(satp_mode); #ifdef CONFIG_64BIT -bool pgtable_l4_enabled __ro_after_init = !IS_ENABLED(CONFIG_XIP_KERNEL); -bool pgtable_l5_enabled __ro_after_init = !IS_ENABLED(CONFIG_XIP_KERNEL); +bool pgtable_l4_enabled __ro_after_init = !IS_ENABLED(CONFIG_XIP_KERNEL) && (BITS_PER_LONG == 64); +bool pgtable_l5_enabled __ro_after_init = !IS_ENABLED(CONFIG_XIP_KERNEL) && (BITS_PER_LONG == 64); EXPORT_SYMBOL(pgtable_l4_enabled); EXPORT_SYMBOL(pgtable_l5_enabled); #endif @@ -117,7 +121,7 @@ static inline void print_mlg(char *name, unsigned long b, unsigned long t) (((t) - (b)) >> LOG2_SZ_1G)); } -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 static inline void print_mlt(char *name, unsigned long b, unsigned long t) { pr_notice("%12s : 0x%08lx - 0x%08lx (%4ld TB)\n", name, b, t, @@ -131,7 +135,7 @@ static inline void print_ml(char *name, unsigned long b, unsigned long t) { unsigned long diff = t - b; - if (IS_ENABLED(CONFIG_64BIT) && (diff >> LOG2_SZ_1T) >= 10) + if ((BITS_PER_LONG == 64) && (diff >> LOG2_SZ_1T) >= 10) print_mlt(name, b, t); else if ((diff >> LOG2_SZ_1G) >= 10) print_mlg(name, b, t); @@ -164,7 +168,9 @@ static void __init print_vm_layout(void) 
#endif print_ml("kernel", (unsigned long)kernel_map.virt_addr, - (unsigned long)ADDRESS_SPACE_END); + (BITS_PER_LONG == 64) ? + (unsigned long)ADDRESS_SPACE_END : + (unsigned long)PAGE_OFFSET); } } #else @@ -173,7 +179,8 @@ static void print_vm_layout(void) { } void __init mem_init(void) { - bool swiotlb = max_pfn > PFN_DOWN(dma32_phys_limit); + bool swiotlb = (BITS_PER_LONG == 32) ? false: + (max_pfn > PFN_DOWN(dma32_phys_limit)); #ifdef CONFIG_FLATMEM BUG_ON(!mem_map); #endif /* CONFIG_FLATMEM */ @@ -319,7 +326,7 @@ static void __init setup_bootmem(void) memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va)); dma_contiguous_reserve(dma32_phys_limit); - if (IS_ENABLED(CONFIG_64BIT)) + if (BITS_PER_LONG == 64) hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT); } @@ -685,16 +692,26 @@ void __meminit create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, phy pgd_next_t *nextp; phys_addr_t next_phys; uintptr_t pgd_idx = pgd_index(va); +#if (CONFIG_PGTABLE_LEVELS > 2) && (BITS_PER_LONG == 32) + uintptr_t pgd_idh = pgd_index(sign_extend64((u64)va, 31)); +#endif if (sz == PGDIR_SIZE) { - if (pgd_val(pgdp[pgd_idx]) == 0) + if (pgd_val(pgdp[pgd_idx]) == 0) { pgdp[pgd_idx] = pfn_pgd(PFN_DOWN(pa), prot); +#if (CONFIG_PGTABLE_LEVELS > 2) && (BITS_PER_LONG == 32) + pgdp[pgd_idh] = pfn_pgd(PFN_DOWN(pa), prot); +#endif + } return; } if (pgd_val(pgdp[pgd_idx]) == 0) { next_phys = alloc_pgd_next(va); pgdp[pgd_idx] = pfn_pgd(PFN_DOWN(next_phys), PAGE_TABLE); +#if (CONFIG_PGTABLE_LEVELS > 2) && (BITS_PER_LONG == 32) + pgdp[pgd_idh] = pfn_pgd(PFN_DOWN(next_phys), PAGE_TABLE); +#endif nextp = get_pgd_next_virt(next_phys); memset(nextp, 0, PAGE_SIZE); } else { @@ -775,7 +792,7 @@ static __meminit pgprot_t pgprot_from_va(uintptr_t va) } #endif /* CONFIG_STRICT_KERNEL_RWX */ -#if defined(CONFIG_64BIT) && !defined(CONFIG_XIP_KERNEL) +#if (BITS_PER_LONG == 64) && !defined(CONFIG_XIP_KERNEL) u64 __pi_set_satp_mode_from_cmdline(uintptr_t dtb_pa); static void __init 
disable_pgtable_l5(void) @@ -981,8 +998,8 @@ static void __init create_fdt_early_page_table(uintptr_t fix_fdt_va, /* Make sure the fdt fixmap address is always aligned on PMD size */ BUILD_BUG_ON(FIX_FDT % (PMD_SIZE / PAGE_SIZE)); - /* In 32-bit only, the fdt lies in its own PGD */ - if (!IS_ENABLED(CONFIG_64BIT)) { + /* In Sv32 only, the fdt lies in its own PGD */ + if (CONFIG_PGTABLE_LEVELS == 2) { create_pgd_mapping(early_pg_dir, fix_fdt_va, pa, MAX_FDT_SIZE, PAGE_KERNEL); } else { @@ -1108,7 +1125,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa) kernel_map.virt_addr = KERNEL_LINK_ADDR + kernel_map.virt_offset; #ifdef CONFIG_XIP_KERNEL -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 kernel_map.page_offset = PAGE_OFFSET_L3; #else kernel_map.page_offset = _AC(CONFIG_PAGE_OFFSET, UL); @@ -1133,7 +1150,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa) kernel_map.va_kernel_pa_offset = kernel_map.virt_addr - kernel_map.phys_addr; #endif -#if defined(CONFIG_64BIT) && !defined(CONFIG_XIP_KERNEL) +#if (BITS_PER_LONG == 64) && !defined(CONFIG_XIP_KERNEL) set_satp_mode(dtb_pa); set_mmap_rnd_bits_max(); #endif @@ -1164,7 +1181,11 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa) * The last 4K bytes of the addressable memory can not be mapped because * of IS_ERR_VALUE macro. 
*/ +#if BITS_PER_LONG == 64 BUG_ON((kernel_map.virt_addr + kernel_map.size) > ADDRESS_SPACE_END - SZ_4K); +#else + BUG_ON((kernel_map.virt_addr + kernel_map.size) > PAGE_OFFSET - SZ_4K); +#endif #endif #ifdef CONFIG_RELOCATABLE @@ -1246,7 +1267,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa) fix_bmap_epmd = fixmap_pmd[pmd_index(__fix_to_virt(FIX_BTMAP_END))]; if (pmd_val(fix_bmap_spmd) != pmd_val(fix_bmap_epmd)) { WARN_ON(1); - pr_warn("fixmap btmap start [%08lx] != end [%08lx]\n", + pr_warn("fixmap btmap start [" PTE_FMT "] != end [" PTE_FMT "]\n", pmd_val(fix_bmap_spmd), pmd_val(fix_bmap_epmd)); pr_warn("fix_to_virt(FIX_BTMAP_BEGIN): %08lx\n", fix_to_virt(FIX_BTMAP_BEGIN)); @@ -1336,7 +1357,7 @@ static void __init create_linear_mapping_page_table(void) static void __init setup_vm_final(void) { /* Setup swapper PGD for fixmap */ -#if !defined(CONFIG_64BIT) +#if CONFIG_PGTABLE_LEVELS == 2 /* * In 32-bit, the device tree lies in a pgd entry, so it must be copied * directly in swapper_pg_dir in addition to the pgd entry that points @@ -1354,7 +1375,7 @@ static void __init setup_vm_final(void) create_linear_mapping_page_table(); /* Map the kernel */ - if (IS_ENABLED(CONFIG_64BIT)) + if (CONFIG_PGTABLE_LEVELS > 2) create_kernel_page_table(swapper_pg_dir, false); #ifdef CONFIG_KASAN diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c index d815448758a1..45927f713cb9 100644 --- a/arch/riscv/mm/pageattr.c +++ b/arch/riscv/mm/pageattr.c @@ -15,10 +15,10 @@ struct pageattr_masks { pgprot_t clear_mask; }; -static unsigned long set_pageattr_masks(unsigned long val, struct mm_walk *walk) +static unsigned long set_pageattr_masks(ptval_t val, struct mm_walk *walk) { struct pageattr_masks *masks = walk->private; - unsigned long new_val = val; + ptval_t new_val = val; new_val &= ~(pgprot_val(masks->clear_mask)); new_val |= (pgprot_val(masks->set_mask)); diff --git a/arch/riscv/mm/pgtable.c b/arch/riscv/mm/pgtable.c index 4ae67324f992..564679b4c48e 100644 --- 
a/arch/riscv/mm/pgtable.c
+++ b/arch/riscv/mm/pgtable.c
@@ -37,7 +37,7 @@ int ptep_test_and_clear_young(struct vm_area_struct *vma,
 {
 	if (!pte_young(ptep_get(ptep)))
 		return 0;
-	return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, &pte_val(*ptep));
+	return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, (unsigned long *)&pte_val(*ptep));
 }
 EXPORT_SYMBOL_GPL(ptep_test_and_clear_young);

From patchwork Tue Mar 25 12:15:57 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 14028464
From: guoren@kernel.org
Subject: [RFC PATCH V3 16/43] rv64ilp32_abi: riscv: Support physical addresses >= 0x80000000
Date: Tue, 25 Mar 2025 08:15:57 -0400
Message-Id: <20250325121624.523258-17-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI has two independent 2GiB address spaces:

- 0 ~ 0x7fffffff
- 0xffffffff80000000 ~ 0xffffffffffffffff

In the rv64ilp32 ABI, the address 0x80000000 is illegal; hence, a temporary trap handler is used to zero-extend the address for jalr, load, and store operations.
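The address rule and the zero-extension performed by the temporary trap handler can be sketched in C (the helper names below are mine, not the kernel's; the actual fix-up is done in assembly with a `slli`/`srli` instruction pair):

```c
#include <assert.h>
#include <stdint.h>

/* What the handler's "slli rd, rs, 32; srli rd, rd, 32" pair computes:
 * zero-extension of the low 32 bits, i.e. (uint64_t)(uint32_t)x. */
static uint64_t zext_w(uint64_t x)
{
	return (x << 32) >> 32;
}

/* Legal rv64ilp32 addresses fall in 0 ~ 0x7fffffff or
 * 0xffffffff80000000 ~ 0xffffffffffffffff; 0x80000000 itself is illegal. */
static int in_rv64ilp32_space(uint64_t va)
{
	return va < 0x80000000ull || va >= 0xffffffff80000000ull;
}
```

A sign-extended ilp32 pointer such as 0xffffffff80200000 zero-extends to the 0x80200000 alias that the access fault handler then repairs.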
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/kernel/head.S | 112 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 112 insertions(+)

diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
index e55a92be12b1..bd2f30aa6d01 100644
--- a/arch/riscv/kernel/head.S
+++ b/arch/riscv/kernel/head.S
@@ -170,6 +170,118 @@ secondary_start_sbi:
 	.align 2
 .Lsecondary_park:
+#ifdef CONFIG_ABI_RV64ILP32
+.option push
+.option norelax
+.option norvc
+	addiw	sp, sp, -32
+
+	/* zext.w sp */
+	slli	sp, sp, 32
+	srli	sp, sp, 32
+
+	/* zext.w ra */
+	slli	ra, ra, 32
+	srli	ra, ra, 32
+
+	/* zext.w fp */
+	slli	fp, fp, 32
+	srli	fp, fp, 32
+
+	/* zext.w tp */
+	slli	tp, tp, 32
+	srli	tp, tp, 32
+
+	/* save tmp reg */
+	REG_S	ra, 24(sp)
+	REG_S	fp, 16(sp)
+	REG_S	tp, 8(sp)
+	REG_S	gp, 0(sp)
+
+	/* zext.w epc */
+	csrr	ra, CSR_EPC
+	slli	ra, ra, 32
+	srli	ra, ra, 32
+	csrw	CSR_SEPC, ra
+
+	csrr	gp, CSR_CAUSE
+
+	/* EXC_INST_ACCESS */
+	addiw	fp, gp, -1
+	beqz	fp, 6f
+
+	/* EXC_LOAD_ACCESS */
+	addiw	fp, gp, -5
+	beqz	fp, 1f
+
+	/* EXC_STORE_ACCESS */
+	addiw	fp, gp, -7
+	beqz	fp, 1f
+
+	j	7f
+1:
+	/* get inst */
+	lw	ra, 0(ra)
+	andi	gp, ra, 0x3
+
+	/* c.(lw/sw/ld/sd)sp */
+	addiw	fp, gp, -2
+	beqz	fp, 6f
+
+	/* lw/sw/ld/sd */
+	addiw	fp, gp, -3
+	beqz	fp, 2f
+
+	/* c.(lw/sw/ld/sd) */
+	li	fp, 0x7
+	slli	fp, fp, 7
+	and	ra, fp, ra
+	slli	ra, ra, 8
+	j	3f
+
+2:
+	/* get rs1 */
+	li	fp, 0x1f
+	slli	fp, fp, 15
+	and	ra, fp, ra
+
+3:
+	/* copy rs1 to rd */
+	mv	fp, ra
+	srli	fp, fp, 8
+	or	ra, fp, ra
+
+	/* modify slli */
+	la	fp, 4f
+	lw	tp, 0(fp)
+	mv	gp, tp
+	or	tp, ra, tp
+	sw	tp, 0(fp)
+	fence.i
+4:	slli	x0, x0, 32
+	sw	gp, 0(fp)
+
+	/* modify srli */
+	la	fp, 5f
+	lw	tp, 0(fp)
+	mv	gp, tp
+	or	tp, ra, tp
+	sw	tp, 0(fp)
+	fence.i
+5:	srli	x0, x0, 32
+	sw	gp, 0(fp)
+
+6:
+	/* restore tmp reg */
+	REG_L	ra, 24(sp)
+	REG_L	fp, 16(sp)
+	REG_L	tp, 8(sp)
+	REG_L	gp, 0(sp)
+	addi	sp, sp, 32
+	sret
+.option pop
+7:
+#endif
 	/*
 	 * Park this hart if we:
 	 * - have too many
harts on CONFIG_RISCV_BOOT_SPINWAIT

From patchwork Tue Mar 25 12:15:58 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 14028465
From: guoren@kernel.org
coreteam@netfilter.org, linux-nfs@vger.kernel.org, linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org, linux-media@vger.kernel.org Subject: [RFC PATCH V3 17/43] rv64ilp32_abi: riscv: Adapt kasan memory layout Date: Tue, 25 Mar 2025 08:15:58 -0400 Message-Id: <20250325121624.523258-18-guoren@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20250325121624.523258-1-guoren@kernel.org> References: <20250325121624.523258-1-guoren@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Guo Ren (Alibaba DAMO Academy)" For generic KASAN, the size of each memory granule is 8, which needs 1/8 address space. The kernel space is 2GiB in rv64ilp32, so we need 256MiB range (0x80000000 ~ 0x90000000), and the offset is 0x7000000 for the whole 4GiB address space. Virtual kernel memory layout: fixmap : 0x90a00000 - 0x90ffffff (6144 kB) pci io : 0x91000000 - 0x91ffffff ( 16 MB) vmemmap : 0x92000000 - 0x93ffffff ( 32 MB) vmalloc : 0x94000000 - 0xb3ffffff ( 512 MB) modules : 0xb4000000 - 0xb7ffffff ( 64 MB) lowmem : 0xc0000000 - 0xc7ffffff ( 128 MB) kasan : 0x80000000 - 0x8fffffff ( 256 MB) <= kernel : 0xb8000000 - 0xbfffffff ( 128 MB) Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- arch/riscv/include/asm/kasan.h | 6 +++++- arch/riscv/mm/kasan_init.c | 2 +- 2 files changed, 6 insertions(+), 2 deletions(-) diff --git a/arch/riscv/include/asm/kasan.h b/arch/riscv/include/asm/kasan.h index e6a0071bdb56..dd3a211bc5d0 100644 --- a/arch/riscv/include/asm/kasan.h +++ b/arch/riscv/include/asm/kasan.h @@ -21,7 +21,7 @@ * [KASAN_SHADOW_OFFSET, KASAN_SHADOW_END) cover all 64-bits of virtual * addresses. 
So KASAN_SHADOW_OFFSET should satisfy the following equation: * KASAN_SHADOW_OFFSET = KASAN_SHADOW_END - - * (1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT)) + * (1ULL << (BITS_PER_LONG - KASAN_SHADOW_SCALE_SHIFT)) */ #define KASAN_SHADOW_SCALE_SHIFT 3 @@ -31,7 +31,11 @@ * aligned on PGDIR_SIZE, so force its alignment to ease its population. */ #define KASAN_SHADOW_START ((KASAN_SHADOW_END - KASAN_SHADOW_SIZE) & PGDIR_MASK) +#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32) +#define KASAN_SHADOW_END 0x90000000UL +#else #define KASAN_SHADOW_END MODULES_LOWEST_VADDR +#endif #ifdef CONFIG_KASAN #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL) diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c index 41c635d6aca4..1e864598779a 100644 --- a/arch/riscv/mm/kasan_init.c +++ b/arch/riscv/mm/kasan_init.c @@ -324,7 +324,7 @@ asmlinkage void __init kasan_early_init(void) uintptr_t i; BUILD_BUG_ON(KASAN_SHADOW_OFFSET != - KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT))); + KASAN_SHADOW_END - (1UL << (BITS_PER_LONG - KASAN_SHADOW_SCALE_SHIFT))); for (i = 0; i < PTRS_PER_PTE; ++i) set_pte(kasan_early_shadow_pte + i, From patchwork Tue Mar 25 12:15:59 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Guo Ren X-Patchwork-Id: 14028466 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E309F1531C5; Tue, 25 Mar 2025 12:21:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742905264; cv=none; 
b=FwBOLV8lP8jXfLLxv3UvraGAzCH+SrtWELEImE5GDoQcOB30M2kROianVwV/HrUC3SySkolcA7aWzfixpxIe2QEBpiBAdkVH1RymaZs35fiKbpmawrs5wThIlExQGmCWJK0BqIBBo/MlbcnRxIlcFpo+Ub+TMNIW7WmhLurpZjs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742905264; c=relaxed/simple; bh=2/Hvnv87DWEjAWPCh6GhOyHG+/U7r4fp8VuTvBXrb80=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=AZssfn1w7xo8wPL82OVd02euWkul286LTTsuOgj5d/ChI+lmUbyZge+lC8VyIuiDrhjsA3gDVNOHKlvjsLVDWrVgUzqFU1USoreLNLMCQYFuvkXJrUFbzfgFJ12pSu7m7yHyRVYlExWIU+vuA1Td73SUo7jjbWZ2M9QgiO6zhZw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=rYM8f4EK; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="rYM8f4EK" Received: by smtp.kernel.org (Postfix) with ESMTPSA id C9042C4CEFD; Tue, 25 Mar 2025 12:20:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1742905263; bh=2/Hvnv87DWEjAWPCh6GhOyHG+/U7r4fp8VuTvBXrb80=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rYM8f4EKTmolkQsSqeHhqrRRc3FcctqKKOZUK+7zLtCEH3VM0e8fUdlOcn4DGvPM6 sA2pUVnmhSHVcMDQIL98SQbwImuMRIIDsxMRiLWROTy0BmE0ExR/VAPNCKuWOimaGs PWxcAUwlqO3pvHtJpeAD/KxJdIAriTpRmI1p5EiOeJ8xRi3osKsRlgc6VYHZmTyiPv ht9k2IJ6o5cqmoT7o20SIbbEa0RqMpoF0x4xB6f3agHa1rKTcz2ztlavsM7uZSGfHP w5/mT/Eeu9Kishhy6edSd2s7CYDDsb7nrj2/nMWTZ1fd0yh7FM8IGkpZDAZQMvQHhe geBTTiA2fBnxw== From: guoren@kernel.org To: arnd@arndb.de, gregkh@linuxfoundation.org, torvalds@linux-foundation.org, paul.walmsley@sifive.com, palmer@dabbelt.com, anup@brainfault.org, atishp@atishpatra.org, oleg@redhat.com, kees@kernel.org, tglx@linutronix.de, will@kernel.org, mark.rutland@arm.com, brauner@kernel.org, akpm@linux-foundation.org, rostedt@goodmis.org, edumazet@google.com, unicorn_wang@outlook.com, inochiama@outlook.com, 
gaohan@iscas.ac.cn, shihua@iscas.ac.cn, jiawei@iscas.ac.cn, wuwei2016@iscas.ac.cn, drew@pdp7.com, prabhakar.mahadev-lad.rj@bp.renesas.com, ctsai390@andestech.com, wefu@redhat.com, kuba@kernel.org, pabeni@redhat.com, josef@toxicpanda.com, dsterba@suse.com, mingo@redhat.com, peterz@infradead.org, boqun.feng@gmail.com, guoren@kernel.org, xiao.w.wang@intel.com, qingfang.deng@siflower.com.cn, leobras@redhat.com, jszhang@kernel.org, conor.dooley@microchip.com, samuel.holland@sifive.com, yongxuan.wang@sifive.com, luxu.kernel@bytedance.com, david@redhat.com, ruanjinjie@huawei.com, cuiyunhui@bytedance.com, wangkefeng.wang@huawei.com, qiaozhe@iscas.ac.cn Cc: ardb@kernel.org, ast@kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-mm@kvack.org, linux-crypto@vger.kernel.org, bpf@vger.kernel.org, linux-input@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-serial@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, maple-tree@lists.infradead.org, linux-trace-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-atm-general@lists.sourceforge.net, linux-btrfs@vger.kernel.org, netfilter-devel@vger.kernel.org, coreteam@netfilter.org, linux-nfs@vger.kernel.org, linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org, linux-media@vger.kernel.org Subject: [RFC PATCH V3 18/43] rv64ilp32_abi: riscv: kvm: Initial support Date: Tue, 25 Mar 2025 08:15:59 -0400 Message-Id: <20250325121624.523258-19-guoren@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20250325121624.523258-1-guoren@kernel.org> References: <20250325121624.523258-1-guoren@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Guo Ren (Alibaba DAMO Academy)" This is the initial support for rv64ilp32 abi, and haven't passed the kvm self test. It could support rv64ilp32 & rv64lp64 linux guest kernels. 
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/kvm_aia.h       |  32 ++---
 arch/riscv/include/asm/kvm_host.h      | 192 ++++++++++++-------------
 arch/riscv/include/asm/kvm_nacl.h      |  26 ++--
 arch/riscv/include/asm/kvm_vcpu_insn.h |   4 +-
 arch/riscv/include/asm/kvm_vcpu_pmu.h  |   8 +-
 arch/riscv/include/asm/kvm_vcpu_sbi.h  |   4 +-
 arch/riscv/include/asm/sbi.h           |  10 +-
 arch/riscv/include/uapi/asm/kvm.h      |  56 ++++----
 arch/riscv/kvm/aia.c                   |  26 ++--
 arch/riscv/kvm/aia_imsic.c             |   6 +-
 arch/riscv/kvm/main.c                  |   2 +-
 arch/riscv/kvm/mmu.c                   |  10 +-
 arch/riscv/kvm/tlb.c                   |  76 +++++-----
 arch/riscv/kvm/vcpu.c                  |  10 +-
 arch/riscv/kvm/vcpu_exit.c             |   4 +-
 arch/riscv/kvm/vcpu_insn.c             |  12 +-
 arch/riscv/kvm/vcpu_onereg.c           |  18 +--
 arch/riscv/kvm/vcpu_pmu.c              |   8 +-
 arch/riscv/kvm/vcpu_sbi_base.c         |   2 +-
 arch/riscv/kvm/vmid.c                  |   4 +-
 20 files changed, 256 insertions(+), 254 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
index 1f37b600ca47..d7dae9128b5e 100644
--- a/arch/riscv/include/asm/kvm_aia.h
+++ b/arch/riscv/include/asm/kvm_aia.h
@@ -50,13 +50,13 @@ struct kvm_aia {
 };
 
 struct kvm_vcpu_aia_csr {
-	unsigned long vsiselect;
-	unsigned long hviprio1;
-	unsigned long hviprio2;
-	unsigned long vsieh;
-	unsigned long hviph;
-	unsigned long hviprio1h;
-	unsigned long hviprio2h;
+	xlen_t vsiselect;
+	xlen_t hviprio1;
+	xlen_t hviprio2;
+	xlen_t vsieh;
+	xlen_t hviph;
+	xlen_t hviprio1h;
+	xlen_t hviprio2h;
 };
 
 struct kvm_vcpu_aia {
@@ -95,8 +95,8 @@ int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu);
 #define KVM_RISCV_AIA_IMSIC_TOPEI	(ISELECT_MASK + 1)
 int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, unsigned long isel,
-				 unsigned long *val, unsigned long new_val,
-				 unsigned long wr_mask);
+				 xlen_t *val, xlen_t new_val,
+				 xlen_t wr_mask);
 int kvm_riscv_aia_imsic_rw_attr(struct kvm *kvm, unsigned long type,
 				bool write, unsigned long *val);
 int kvm_riscv_aia_imsic_has_attr(struct kvm *kvm, unsigned long type);
@@ -131,19 +131,19 @@ void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu);
 void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu);
 int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu, unsigned long reg_num,
-			       unsigned long *out_val);
+			       xlen_t *out_val);
 int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu, unsigned long reg_num,
-			       unsigned long val);
+			       xlen_t val);
 int kvm_riscv_vcpu_aia_rmw_topei(struct kvm_vcpu *vcpu,
 				 unsigned int csr_num,
-				 unsigned long *val,
-				 unsigned long new_val,
-				 unsigned long wr_mask);
+				 xlen_t *val,
+				 xlen_t new_val,
+				 xlen_t wr_mask);
 int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
-				unsigned long *val, unsigned long new_val,
-				unsigned long wr_mask);
+				xlen_t *val, xlen_t new_val,
+				xlen_t wr_mask);
 
 #define KVM_RISCV_VCPU_AIA_CSR_FUNCS \
 { .base = CSR_SIREG, .count = 1, .func = kvm_riscv_vcpu_aia_rmw_ireg }, \
 { .base = CSR_STOPEI, .count = 1, .func = kvm_riscv_vcpu_aia_rmw_topei },
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index cc33e35cd628..166cae2c74cf 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -64,8 +64,8 @@ enum kvm_riscv_hfence_type {
 struct kvm_riscv_hfence {
 	enum kvm_riscv_hfence_type type;
-	unsigned long asid;
-	unsigned long order;
+	xlen_t asid;
+	xlen_t order;
 	gpa_t addr;
 	gpa_t size;
 };
@@ -102,8 +102,8 @@ struct kvm_vmid {
 	 * Writes to vmid_version and vmid happen with vmid_lock held
 	 * whereas reads happen without any lock held.
 	 */
-	unsigned long vmid_version;
-	unsigned long vmid;
+	xlen_t vmid_version;
+	xlen_t vmid;
 };
 
 struct kvm_arch {
@@ -122,75 +122,75 @@ struct kvm_arch {
 };
 
 struct kvm_cpu_trap {
-	unsigned long sepc;
-	unsigned long scause;
-	unsigned long stval;
-	unsigned long htval;
-	unsigned long htinst;
+	xlen_t sepc;
+	xlen_t scause;
+	xlen_t stval;
+	xlen_t htval;
+	xlen_t htinst;
 };
 
 struct kvm_cpu_context {
-	unsigned long zero;
-	unsigned long ra;
-	unsigned long sp;
-	unsigned long gp;
-	unsigned long tp;
-	unsigned long t0;
-	unsigned long t1;
-	unsigned long t2;
-	unsigned long s0;
-	unsigned long s1;
-	unsigned long a0;
-	unsigned long a1;
-	unsigned long a2;
-	unsigned long a3;
-	unsigned long a4;
-	unsigned long a5;
-	unsigned long a6;
-	unsigned long a7;
-	unsigned long s2;
-	unsigned long s3;
-	unsigned long s4;
-	unsigned long s5;
-	unsigned long s6;
-	unsigned long s7;
-	unsigned long s8;
-	unsigned long s9;
-	unsigned long s10;
-	unsigned long s11;
-	unsigned long t3;
-	unsigned long t4;
-	unsigned long t5;
-	unsigned long t6;
-	unsigned long sepc;
-	unsigned long sstatus;
-	unsigned long hstatus;
+	xlen_t zero;
+	xlen_t ra;
+	xlen_t sp;
+	xlen_t gp;
+	xlen_t tp;
+	xlen_t t0;
+	xlen_t t1;
+	xlen_t t2;
+	xlen_t s0;
+	xlen_t s1;
+	xlen_t a0;
+	xlen_t a1;
+	xlen_t a2;
+	xlen_t a3;
+	xlen_t a4;
+	xlen_t a5;
+	xlen_t a6;
+	xlen_t a7;
+	xlen_t s2;
+	xlen_t s3;
+	xlen_t s4;
+	xlen_t s5;
+	xlen_t s6;
+	xlen_t s7;
+	xlen_t s8;
+	xlen_t s9;
+	xlen_t s10;
+	xlen_t s11;
+	xlen_t t3;
+	xlen_t t4;
+	xlen_t t5;
+	xlen_t t6;
+	xlen_t sepc;
+	xlen_t sstatus;
+	xlen_t hstatus;
 	union __riscv_fp_state fp;
 	struct __riscv_v_ext_state vector;
 };
 
 struct kvm_vcpu_csr {
-	unsigned long vsstatus;
-	unsigned long vsie;
-	unsigned long vstvec;
-	unsigned long vsscratch;
-	unsigned long vsepc;
-	unsigned long vscause;
-	unsigned long vstval;
-	unsigned long hvip;
-	unsigned long vsatp;
-	unsigned long scounteren;
-	unsigned long senvcfg;
+	xlen_t vsstatus;
+	xlen_t vsie;
+	xlen_t vstvec;
+	xlen_t vsscratch;
+	xlen_t vsepc;
+	xlen_t vscause;
+	xlen_t vstval;
+	xlen_t hvip;
+	xlen_t vsatp;
+	xlen_t scounteren;
+	xlen_t senvcfg;
 };
 
 struct kvm_vcpu_config {
 	u64 henvcfg;
 	u64 hstateen0;
-	unsigned long hedeleg;
+	xlen_t hedeleg;
 };
 
 struct kvm_vcpu_smstateen_csr {
-	unsigned long sstateen0;
+	xlen_t sstateen0;
 };
 
 struct kvm_vcpu_arch {
@@ -204,16 +204,16 @@ struct kvm_vcpu_arch {
 	DECLARE_BITMAP(isa, RISCV_ISA_EXT_MAX);
 
 	/* Vendor, Arch, and Implementation details */
-	unsigned long mvendorid;
-	unsigned long marchid;
-	unsigned long mimpid;
+	xlen_t mvendorid;
+	xlen_t marchid;
+	xlen_t mimpid;
 
 	/* SSCRATCH, STVEC, and SCOUNTEREN of Host */
-	unsigned long host_sscratch;
-	unsigned long host_stvec;
-	unsigned long host_scounteren;
-	unsigned long host_senvcfg;
-	unsigned long host_sstateen0;
+	xlen_t host_sscratch;
+	xlen_t host_stvec;
+	xlen_t host_scounteren;
+	xlen_t host_senvcfg;
+	xlen_t host_sstateen0;
 
 	/* CPU context of Host */
 	struct kvm_cpu_context host_context;
@@ -252,8 +252,8 @@ struct kvm_vcpu_arch {
 	/* HFENCE request queue */
 	spinlock_t hfence_lock;
-	unsigned long hfence_head;
-	unsigned long hfence_tail;
+	xlen_t hfence_head;
+	xlen_t hfence_tail;
 	struct kvm_riscv_hfence hfence_queue[KVM_RISCV_VCPU_MAX_HFENCE];
 
 	/* MMIO instruction details */
@@ -305,24 +305,24 @@ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 
 #define KVM_RISCV_GSTAGE_TLB_MIN_ORDER		12
 
-void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
+void kvm_riscv_local_hfence_gvma_vmid_gpa(xlen_t vmid,
 					  gpa_t gpa, gpa_t gpsz,
-					  unsigned long order);
-void kvm_riscv_local_hfence_gvma_vmid_all(unsigned long vmid);
+					  xlen_t order);
+void kvm_riscv_local_hfence_gvma_vmid_all(xlen_t vmid);
 void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
-				     unsigned long order);
+				     xlen_t order);
 void kvm_riscv_local_hfence_gvma_all(void);
-void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
-					  unsigned long asid,
-					  unsigned long gva,
-					  unsigned long gvsz,
-					  unsigned long order);
-void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
-					  unsigned long asid);
-void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
-				     unsigned long gva, unsigned long gvsz,
-				     unsigned long order);
-void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
+void kvm_riscv_local_hfence_vvma_asid_gva(xlen_t vmid,
+					  xlen_t asid,
+					  xlen_t gva,
+					  xlen_t gvsz,
+					  xlen_t order);
+void kvm_riscv_local_hfence_vvma_asid_all(xlen_t vmid,
+					  xlen_t asid);
+void kvm_riscv_local_hfence_vvma_gva(xlen_t vmid,
+				     xlen_t gva, xlen_t gvsz,
+				     xlen_t order);
+void kvm_riscv_local_hfence_vvma_all(xlen_t vmid);
 
 void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu);
 
@@ -332,26 +332,26 @@ void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu);
 
 void kvm_riscv_fence_i(struct kvm *kvm,
-		       unsigned long hbase, unsigned long hmask);
+		       xlen_t hbase, xlen_t hmask);
 void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask,
+				    xlen_t hbase, xlen_t hmask,
 				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order);
+				    xlen_t order);
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask);
+				    xlen_t hbase, xlen_t hmask);
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask,
-				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid);
+				    xlen_t hbase, xlen_t hmask,
+				    xlen_t gva, xlen_t gvsz,
+				    xlen_t order, xlen_t asid);
 void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid);
+				    xlen_t hbase, xlen_t hmask,
+				    xlen_t asid);
 void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask,
-			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order);
+			       xlen_t hbase, xlen_t hmask,
+			       xlen_t gva, xlen_t gvsz,
+			       xlen_t order);
 void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask);
+			       xlen_t hbase, xlen_t hmask);
 
 int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 			     phys_addr_t hpa, unsigned long size,
@@ -369,7 +369,7 @@ unsigned long __init kvm_riscv_gstage_mode(void);
 int kvm_riscv_gstage_gpa_bits(void);
 
 void __init kvm_riscv_gstage_vmid_detect(void);
-unsigned long kvm_riscv_gstage_vmid_bits(void);
+xlen_t kvm_riscv_gstage_vmid_bits(void);
 int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
 bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
 void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);
diff --git a/arch/riscv/include/asm/kvm_nacl.h b/arch/riscv/include/asm/kvm_nacl.h
index 4124d5e06a0f..59be64c068fc 100644
--- a/arch/riscv/include/asm/kvm_nacl.h
+++ b/arch/riscv/include/asm/kvm_nacl.h
@@ -68,26 +68,26 @@ int kvm_riscv_nacl_init(void);
 #define nacl_shmem()					\
 	this_cpu_ptr(&kvm_riscv_nacl)->shmem
 
-#define nacl_scratch_read_long(__shmem, __offset)	\
+#define nacl_scratch_read_csr(__shmem, __offset)	\
 ({							\
-	unsigned long *__p = (__shmem) +		\
+	xlen_t *__p = (__shmem) +			\
 			     SBI_NACL_SHMEM_SCRATCH_OFFSET +	\
 			     (__offset);		\
 	lelong_to_cpu(*__p);				\
 })
 
-#define nacl_scratch_write_long(__shmem, __offset, __val)	\
+#define nacl_scratch_write_csr(__shmem, __offset, __val)	\
 do {							\
-	unsigned long *__p = (__shmem) +		\
+	xlen_t *__p = (__shmem) +			\
 			     SBI_NACL_SHMEM_SCRATCH_OFFSET +	\
 			     (__offset);		\
 	*__p = cpu_to_lelong(__val);			\
 } while (0)
 
-#define nacl_scratch_write_longs(__shmem, __offset, __array, __count)	\
+#define nacl_scratch_write_csrs(__shmem, __offset, __array, __count)	\
 do {							\
 	unsigned int __i;				\
-	unsigned long *__p = (__shmem) +		\
+	xlen_t *__p = (__shmem) +			\
 			     SBI_NACL_SHMEM_SCRATCH_OFFSET +	\
 			     (__offset);		\
 	for (__i = 0; __i < (__count); __i++)		\
@@ -100,7 +100,7 @@ do {						\
 
 #define nacl_hfence_mkconfig(__type, __order, __vmid, __asid)	\
 ({							\
-	unsigned long __c = SBI_NACL_SHMEM_HFENCE_CONFIG_PEND;	\
+	xlen_t __c = SBI_NACL_SHMEM_HFENCE_CONFIG_PEND;	\
 	__c |= ((__type) & SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_MASK)	\
 		<< SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_SHIFT;	\
 	__c |= (((__order) - SBI_NACL_SHMEM_HFENCE_ORDER_BASE) &	\
@@ -168,7 +168,7 @@ __kvm_riscv_nacl_hfence(__shmem,		\
 
 #define nacl_csr_read(__shmem, __csr)			\
 ({							\
-	unsigned long *__a = (__shmem) + SBI_NACL_SHMEM_CSR_OFFSET;	\
+	xlen_t *__a = (__shmem) + SBI_NACL_SHMEM_CSR_OFFSET;	\
 	lelong_to_cpu(__a[SBI_NACL_SHMEM_CSR_INDEX(__csr)]);	\
 })
 
@@ -176,7 +176,7 @@ __kvm_riscv_nacl_hfence(__shmem,		\
 do {							\
 	void *__s = (__shmem);				\
 	unsigned int __i = SBI_NACL_SHMEM_CSR_INDEX(__csr);	\
-	unsigned long *__a = (__s) + SBI_NACL_SHMEM_CSR_OFFSET;	\
+	xlen_t *__a = (__s) + SBI_NACL_SHMEM_CSR_OFFSET;	\
 	u8 *__b = (__s) + SBI_NACL_SHMEM_DBITMAP_OFFSET;	\
 	__a[__i] = cpu_to_lelong(__val);		\
 	__b[__i >> 3] |= 1U << (__i & 0x7);		\
@@ -186,9 +186,9 @@ do {						\
 ({							\
 	void *__s = (__shmem);				\
 	unsigned int __i = SBI_NACL_SHMEM_CSR_INDEX(__csr);	\
-	unsigned long *__a = (__s) + SBI_NACL_SHMEM_CSR_OFFSET;	\
+	xlen_t *__a = (__s) + SBI_NACL_SHMEM_CSR_OFFSET;	\
 	u8 *__b = (__s) + SBI_NACL_SHMEM_DBITMAP_OFFSET;	\
-	unsigned long __r = lelong_to_cpu(__a[__i]);	\
+	xlen_t __r = lelong_to_cpu(__a[__i]);		\
 	__a[__i] = cpu_to_lelong(__val);		\
 	__b[__i >> 3] |= 1U << (__i & 0x7);		\
 	__r;						\
@@ -210,7 +210,7 @@ do {						\
 
 #define ncsr_read(__csr)				\
 ({							\
-	unsigned long __r;				\
+	xlen_t __r;					\
 	if (kvm_riscv_nacl_available())			\
 		__r = nacl_csr_read(nacl_shmem(), __csr);	\
 	else						\
@@ -228,7 +228,7 @@ do {						\
 
 #define ncsr_swap(__csr, __val)				\
 ({							\
-	unsigned long __r;				\
+	xlen_t __r;					\
 	if (kvm_riscv_nacl_sync_csr_available())	\
 		__r = nacl_csr_swap(nacl_shmem(), __csr, __val);	\
 	else						\
diff --git a/arch/riscv/include/asm/kvm_vcpu_insn.h b/arch/riscv/include/asm/kvm_vcpu_insn.h
index 350011c83581..a0da75683894 100644
--- a/arch/riscv/include/asm/kvm_vcpu_insn.h
+++ b/arch/riscv/include/asm/kvm_vcpu_insn.h
@@ -11,7 +11,7 @@ struct kvm_run;
 struct kvm_cpu_trap;
 
 struct kvm_mmio_decode {
-	unsigned long insn;
+	xlen_t insn;
 	int insn_len;
 	int len;
 	int shift;
@@ -19,7 +19,7 @@ struct kvm_mmio_decode {
 };
 
 struct kvm_csr_decode {
-	unsigned long insn;
+	xlen_t insn;
 	int return_handled;
 };
diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
index 1d85b6617508..e69b102bde49 100644
--- a/arch/riscv/include/asm/kvm_vcpu_pmu.h
+++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
@@ -74,8 +74,8 @@ struct kvm_pmu {
 
 int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid);
 int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
-				unsigned long *val, unsigned long new_val,
-				unsigned long wr_mask);
+				xlen_t *val, xlen_t new_val,
+				xlen_t wr_mask);
 int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu,
 				struct kvm_vcpu_sbi_return *retdata);
 int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx,
@@ -106,8 +106,8 @@ struct kvm_pmu {
 };
 
 static inline int kvm_riscv_vcpu_pmu_read_legacy(struct kvm_vcpu *vcpu, unsigned int csr_num,
-						 unsigned long *val, unsigned long new_val,
-						 unsigned long wr_mask)
+						 xlen_t *val, xlen_t new_val,
+						 xlen_t wr_mask)
 {
 	if (csr_num == CSR_CYCLE || csr_num == CSR_INSTRET) {
 		*val = 0;
diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h
index 4ed6203cdd30..83d786111450 100644
--- a/arch/riscv/include/asm/kvm_vcpu_sbi.h
+++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h
@@ -27,8 +27,8 @@ struct kvm_vcpu_sbi_context {
 };
 
 struct kvm_vcpu_sbi_return {
-	unsigned long out_val;
-	unsigned long err_val;
+	xlen_t out_val;
+	xlen_t err_val;
 	struct kvm_cpu_trap *utrap;
 	bool uexit;
 };
diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index fd9a9c723ec6..df73a0eb231b 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -343,7 +343,7 @@ enum sbi_ext_nacl_feature {
 #define SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_SHIFT	\
 		(__riscv_xlen - SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_BITS)
 #define SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_MASK	\
-		((1UL << SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_BITS) - 1)
+		((_AC(1, UXL) << SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_BITS) - 1)
 #define SBI_NACL_SHMEM_HFENCE_CONFIG_PEND	\
 		(SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_MASK <<	\
 		 SBI_NACL_SHMEM_HFENCE_CONFIG_PEND_SHIFT)
@@ -358,7 +358,7 @@ enum sbi_ext_nacl_feature {
 		(SBI_NACL_SHMEM_HFENCE_CONFIG_RSVD1_SHIFT -	\
 		 SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_BITS)
 #define SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_MASK	\
-		((1UL << SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_BITS) - 1)
+		((_AC(1, UXL) << SBI_NACL_SHMEM_HFENCE_CONFIG_TYPE_BITS) - 1)
 
 #define SBI_NACL_SHMEM_HFENCE_TYPE_GVMA		0x0
 #define SBI_NACL_SHMEM_HFENCE_TYPE_GVMA_ALL	0x1
@@ -379,7 +379,7 @@ enum sbi_ext_nacl_feature {
 		(SBI_NACL_SHMEM_HFENCE_CONFIG_RSVD2_SHIFT -	\
 		 SBI_NACL_SHMEM_HFENCE_CONFIG_ORDER_BITS)
 #define SBI_NACL_SHMEM_HFENCE_CONFIG_ORDER_MASK	\
-		((1UL << SBI_NACL_SHMEM_HFENCE_CONFIG_ORDER_BITS) - 1)
+		((_AC(1, UXL) << SBI_NACL_SHMEM_HFENCE_CONFIG_ORDER_BITS) - 1)
 #define SBI_NACL_SHMEM_HFENCE_ORDER_BASE	12
 
 #if __riscv_xlen == 32
@@ -392,9 +392,9 @@ enum sbi_ext_nacl_feature {
 #define SBI_NACL_SHMEM_HFENCE_CONFIG_VMID_SHIFT	\
 		SBI_NACL_SHMEM_HFENCE_CONFIG_ASID_BITS
 #define SBI_NACL_SHMEM_HFENCE_CONFIG_ASID_MASK	\
-		((1UL << SBI_NACL_SHMEM_HFENCE_CONFIG_ASID_BITS) - 1)
+		((_AC(1, UXL) << SBI_NACL_SHMEM_HFENCE_CONFIG_ASID_BITS) - 1)
 #define SBI_NACL_SHMEM_HFENCE_CONFIG_VMID_MASK	\
-		((1UL << SBI_NACL_SHMEM_HFENCE_CONFIG_VMID_BITS) - 1)
+		((_AC(1, UXL) << SBI_NACL_SHMEM_HFENCE_CONFIG_VMID_BITS) - 1)
 
 #define SBI_NACL_SHMEM_AUTOSWAP_FLAG_HSTATUS	BIT(0)
 #define SBI_NACL_SHMEM_AUTOSWAP_HSTATUS		((__riscv_xlen / 8) * 1)
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index f06bc5efcd79..9001e8081ce2 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -48,13 +48,13 @@ struct kvm_sregs {
 
 /* CONFIG registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
 struct kvm_riscv_config {
-	unsigned long isa;
-	unsigned long zicbom_block_size;
-	unsigned long mvendorid;
-	unsigned long marchid;
-	unsigned long mimpid;
-	unsigned long zicboz_block_size;
-	unsigned long satp_mode;
+	xlen_t isa;
+	xlen_t zicbom_block_size;
+	xlen_t mvendorid;
+	xlen_t marchid;
+	xlen_t mimpid;
+	xlen_t zicboz_block_size;
+	xlen_t satp_mode;
 };
 
 /* CORE registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
@@ -69,33 +69,33 @@ struct kvm_riscv_core {
 
 /* General CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
 struct kvm_riscv_csr {
-	unsigned long sstatus;
-	unsigned long sie;
-	unsigned long stvec;
-	unsigned long sscratch;
-	unsigned long sepc;
-	unsigned long scause;
-	unsigned long stval;
-	unsigned long sip;
-	unsigned long satp;
-	unsigned long scounteren;
-	unsigned long senvcfg;
+	xlen_t sstatus;
+	xlen_t sie;
+	xlen_t stvec;
+	xlen_t sscratch;
+	xlen_t sepc;
+	xlen_t scause;
+	xlen_t stval;
+	xlen_t sip;
+	xlen_t satp;
+	xlen_t scounteren;
+	xlen_t senvcfg;
 };
 
 /* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
 struct kvm_riscv_aia_csr {
-	unsigned long siselect;
-	unsigned long iprio1;
-	unsigned long iprio2;
-	unsigned long sieh;
-	unsigned long siph;
-	unsigned long iprio1h;
-	unsigned long iprio2h;
+	xlen_t siselect;
+	xlen_t iprio1;
+	xlen_t iprio2;
+	xlen_t sieh;
+	xlen_t siph;
+	xlen_t iprio1h;
+	xlen_t iprio2h;
 };
 
 /* Smstateen CSR for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
 struct kvm_riscv_smstateen_csr {
-	unsigned long sstateen0;
+	xlen_t sstateen0;
 };
 
 /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
@@ -207,8 +207,8 @@ enum KVM_RISCV_SBI_EXT_ID {
 
 /* SBI STA extension registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
 struct kvm_riscv_sbi_sta {
-	unsigned long shmem_lo;
-	unsigned long shmem_hi;
+	xlen_t shmem_lo;
+	xlen_t shmem_hi;
 };
 
 /* Possible states for kvm_riscv_timer */
diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index 19afd1f23537..77f6943292a3 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -200,31 +200,31 @@ void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
 
 int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu, unsigned long reg_num,
-			       unsigned long *out_val)
+			       xlen_t *out_val)
 {
 	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
 
-	if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
+	if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(xlen_t))
 		return -ENOENT;
 
 	*out_val = 0;
 	if (kvm_riscv_aia_available())
-		*out_val = ((unsigned long *)csr)[reg_num];
+		*out_val = ((xlen_t *)csr)[reg_num];
 
 	return 0;
 }
 
 int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu, unsigned long reg_num,
-			       unsigned long val)
+			       xlen_t val)
 {
 	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
 
-	if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
+	if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(xlen_t))
 		return -ENOENT;
 
 	if (kvm_riscv_aia_available()) {
-		((unsigned long *)csr)[reg_num] = val;
+		((xlen_t *)csr)[reg_num] = val;
 
 #ifdef CONFIG_32BIT
 		if (reg_num == KVM_REG_RISCV_CSR_AIA_REG(siph))
@@ -237,9 +237,9 @@ int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
 
 int kvm_riscv_vcpu_aia_rmw_topei(struct kvm_vcpu *vcpu,
 				 unsigned int csr_num,
-				 unsigned long *val,
-				 unsigned long new_val,
-				 unsigned long wr_mask)
+				 xlen_t *val,
+				 xlen_t new_val,
+				 xlen_t wr_mask)
 {
 	/* If AIA not available then redirect trap */
 	if (!kvm_riscv_aia_available())
@@ -271,7 +271,7 @@ static int aia_irq2bitpos[] = {
 
 static u8 aia_get_iprio8(struct kvm_vcpu *vcpu, unsigned int irq)
 {
-	unsigned long hviprio;
+	xlen_t hviprio;
 	int bitpos = aia_irq2bitpos[irq];
 
 	if (bitpos < 0)
@@ -396,8 +396,8 @@ static int aia_rmw_iprio(struct kvm_vcpu *vcpu, unsigned int isel,
 }
 
 int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
-				unsigned long *val, unsigned long new_val,
-				unsigned long wr_mask)
+				xlen_t *val, xlen_t new_val,
+				xlen_t wr_mask)
 {
 	unsigned int isel;
 
@@ -408,7 +408,7 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
 	/* First try to emulate in kernel space */
 	isel = ncsr_read(CSR_VSISELECT) & ISELECT_MASK;
 	if (isel >= ISELECT_IPRIO0 && isel <= ISELECT_IPRIO15)
-		return aia_rmw_iprio(vcpu, isel, val, new_val, wr_mask);
+		return aia_rmw_iprio(vcpu, isel, (ulong *)val, new_val, wr_mask);
 	else if (isel >= IMSIC_FIRST && isel <= IMSIC_LAST &&
 		 kvm_riscv_aia_initialized(vcpu->kvm))
 		return kvm_riscv_vcpu_aia_imsic_rmw(vcpu, isel, val, new_val,
diff --git a/arch/riscv/kvm/aia_imsic.c b/arch/riscv/kvm/aia_imsic.c
index a8085cd8215e..3c7f13b7a2ba 100644
--- a/arch/riscv/kvm/aia_imsic.c
+++ b/arch/riscv/kvm/aia_imsic.c
@@ -839,8 +839,8 @@ int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu)
 }
 
 int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, unsigned long isel,
-				 unsigned long *val, unsigned long new_val,
-				 unsigned long wr_mask)
+				 xlen_t *val, xlen_t new_val,
+				 xlen_t wr_mask)
 {
 	u32 topei;
 	struct imsic_mrif_eix *eix;
@@ -866,7 +866,7 @@ int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, unsigned long isel,
 		}
 	} else {
 		r = imsic_mrif_rmw(imsic->swfile, imsic->nr_eix, isel,
-				   val, new_val, wr_mask);
+				   (ulong *)val, (ulong)new_val, (ulong)wr_mask);
 		/* Forward unknown IMSIC register to user-space */
 		if (r)
 			rc = (r == -ENOENT) ? 0 : KVM_INSN_ILLEGAL_TRAP;
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index 1fa8be5ee509..34d053ae09a9 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -152,7 +152,7 @@ static int __init riscv_kvm_init(void)
 	}
 	kvm_info("using %s G-stage page table format\n", str);
 
-	kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits());
+	kvm_info("VMID %ld bits available\n", (ulong)kvm_riscv_gstage_vmid_bits());
 
 	if (kvm_riscv_aia_available())
 		kvm_info("AIA available with %d guest external interrupts\n",
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 1087ea74567b..a89e5701076d 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -20,7 +20,7 @@
 #include
 
 #ifdef CONFIG_64BIT
-static unsigned long gstage_mode __ro_after_init = (HGATP_MODE_SV39X4 << HGATP_MODE_SHIFT);
+static xlen_t gstage_mode __ro_after_init = (HGATP_MODE_SV39X4 << HGATP_MODE_SHIFT);
 static unsigned long gstage_pgd_levels __ro_after_init = 3;
 #define gstage_index_bits	9
 #else
@@ -30,11 +30,11 @@ static unsigned long gstage_pgd_levels __ro_after_init = 2;
 #endif
 
 #define gstage_pgd_xbits	2
-#define gstage_pgd_size	(1UL << (HGATP_PAGE_SHIFT + gstage_pgd_xbits))
+#define gstage_pgd_size	(_AC(1, UXL) << (HGATP_PAGE_SHIFT + gstage_pgd_xbits))
 #define gstage_gpa_bits	(HGATP_PAGE_SHIFT + \
 			 (gstage_pgd_levels * gstage_index_bits) + \
 			 gstage_pgd_xbits)
-#define gstage_gpa_size	((gpa_t)(1ULL << gstage_gpa_bits))
+#define gstage_gpa_size	((gpa_t)(_AC(1, UXL) << gstage_gpa_bits))
 
 #define gstage_pte_leaf(__ptep)	\
 	(pte_val(*(__ptep)) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC))
@@ -623,7 +623,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 		vma_pageshift = huge_page_shift(hstate_vma(vma));
 	else
 		vma_pageshift = PAGE_SHIFT;
-	vma_pagesize = 1ULL << vma_pageshift;
+	vma_pagesize = _AC(1, UXL) << vma_pageshift;
 	if (logging || (vma->vm_flags & VM_PFNMAP))
 		vma_pagesize = PAGE_SIZE;
@@ -725,7 +725,7 @@ void kvm_riscv_gstage_free_pgd(struct kvm *kvm)
 
 void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu)
 {
-	unsigned long hgatp = gstage_mode;
+	xlen_t hgatp = gstage_mode;
 	struct kvm_arch *k = &vcpu->kvm->arch;
 
 	hgatp |= (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) & HGATP_VMID;
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index 2f91ea5f8493..01d581763849 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -18,9 +18,9 @@
 #define has_svinval()	riscv_has_extension_unlikely(RISCV_ISA_EXT_SVINVAL)
 
-void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
+void kvm_riscv_local_hfence_gvma_vmid_gpa(xlen_t vmid,
 					  gpa_t gpa, gpa_t gpsz,
-					  unsigned long order)
+					  xlen_t order)
 {
 	gpa_t pos;
 
@@ -42,13 +42,13 @@ void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
 	}
 }
 
-void kvm_riscv_local_hfence_gvma_vmid_all(unsigned long vmid)
+void kvm_riscv_local_hfence_gvma_vmid_all(xlen_t vmid)
 {
 	asm volatile(HFENCE_GVMA(zero, %0) : : "r" (vmid) : "memory");
 }
 
 void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
-				     unsigned long order)
+				     xlen_t order)
 {
 	gpa_t pos;
 
@@ -75,13 +75,14 @@ void kvm_riscv_local_hfence_gvma_all(void)
 	asm volatile(HFENCE_GVMA(zero, zero) : : : "memory");
 }
 
-void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
-					  unsigned long asid,
-					  unsigned long gva,
-					  unsigned long gvsz,
-					  unsigned long order)
+void kvm_riscv_local_hfence_vvma_asid_gva(xlen_t vmid,
+					  xlen_t asid,
+					  xlen_t gva,
+					  xlen_t gvsz,
+					  xlen_t order)
 {
-	unsigned long pos, hgatp;
+	xlen_t pos;
+	xlen_t hgatp;
 
 	if (PTRS_PER_PTE < (gvsz >> order)) {
 		kvm_riscv_local_hfence_vvma_asid_all(vmid, asid);
@@ -105,10 +106,10 @@ void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
 	csr_write(CSR_HGATP, hgatp);
 }
 
-void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
-					  unsigned long asid)
+void kvm_riscv_local_hfence_vvma_asid_all(xlen_t vmid,
+					  xlen_t asid)
 {
-	unsigned long hgatp;
+	xlen_t hgatp;
 
 	hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
 
@@ -117,11 +118,12 @@ void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
 	csr_write(CSR_HGATP, hgatp);
 }
 
-void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
-				     unsigned long gva, unsigned long gvsz,
-				     unsigned long order)
+void kvm_riscv_local_hfence_vvma_gva(xlen_t vmid,
+				     xlen_t gva, xlen_t gvsz,
+				     xlen_t order)
 {
-	unsigned long pos, hgatp;
+	xlen_t pos;
+	xlen_t hgatp;
 
 	if (PTRS_PER_PTE < (gvsz >> order)) {
 		kvm_riscv_local_hfence_vvma_all(vmid);
@@ -145,9 +147,9 @@ void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
 	csr_write(CSR_HGATP, hgatp);
 }
 
-void kvm_riscv_local_hfence_vvma_all(unsigned long vmid)
+void kvm_riscv_local_hfence_vvma_all(xlen_t vmid)
 {
-	unsigned long hgatp;
+	xlen_t hgatp;
 
 	hgatp = csr_swap(CSR_HGATP, vmid << HGATP_VMID_SHIFT);
 
@@ -158,7 +160,7 @@ void kvm_riscv_local_hfence_vvma_all(unsigned long vmid)
 
 void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu)
 {
-	unsigned long vmid;
+	xlen_t vmid;
 
 	if (!kvm_riscv_gstage_vmid_bits() ||
 	    vcpu->arch.last_exit_cpu == vcpu->cpu)
@@ -188,7 +190,7 @@ void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
 
 void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
-	unsigned long vmid = READ_ONCE(v->vmid);
+	xlen_t vmid = READ_ONCE(v->vmid);
 
 	if (kvm_riscv_nacl_available())
 		nacl_hfence_gvma_vmid_all(nacl_shmem(), vmid);
@@ -199,7 +201,7 @@ void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu)
 
 void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
-	unsigned long vmid = READ_ONCE(v->vmid);
+	xlen_t vmid = READ_ONCE(v->vmid);
 
 	if (kvm_riscv_nacl_available())
 		nacl_hfence_vvma_all(nacl_shmem(), vmid);
@@ -258,7 +260,7 @@ static bool vcpu_hfence_enqueue(struct kvm_vcpu *vcpu,
 
 void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 {
-	unsigned long vmid;
+	xlen_t vmid;
 	struct kvm_riscv_hfence d = { 0 };
 	struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
@@ -310,7 +312,7 @@ void kvm_riscv_hfence_process(struct
kvm_vcpu *vcpu) } static void make_xfence_request(struct kvm *kvm, - unsigned long hbase, unsigned long hmask, + xlen_t hbase, xlen_t hmask, unsigned int req, unsigned int fallback_req, const struct kvm_riscv_hfence *data) { @@ -346,16 +348,16 @@ static void make_xfence_request(struct kvm *kvm, } void kvm_riscv_fence_i(struct kvm *kvm, - unsigned long hbase, unsigned long hmask) + xlen_t hbase, xlen_t hmask) { make_xfence_request(kvm, hbase, hmask, KVM_REQ_FENCE_I, KVM_REQ_FENCE_I, NULL); } void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm, - unsigned long hbase, unsigned long hmask, + xlen_t hbase, xlen_t hmask, gpa_t gpa, gpa_t gpsz, - unsigned long order) + xlen_t order) { struct kvm_riscv_hfence data; @@ -369,16 +371,16 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm, } void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm, - unsigned long hbase, unsigned long hmask) + xlen_t hbase, xlen_t hmask) { make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_GVMA_VMID_ALL, KVM_REQ_HFENCE_GVMA_VMID_ALL, NULL); } void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm, - unsigned long hbase, unsigned long hmask, - unsigned long gva, unsigned long gvsz, - unsigned long order, unsigned long asid) + xlen_t hbase, xlen_t hmask, + xlen_t gva, xlen_t gvsz, + xlen_t order, xlen_t asid) { struct kvm_riscv_hfence data; @@ -392,8 +394,8 @@ void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm, } void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm, - unsigned long hbase, unsigned long hmask, - unsigned long asid) + xlen_t hbase, xlen_t hmask, + xlen_t asid) { struct kvm_riscv_hfence data; @@ -405,9 +407,9 @@ void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm, } void kvm_riscv_hfence_vvma_gva(struct kvm *kvm, - unsigned long hbase, unsigned long hmask, - unsigned long gva, unsigned long gvsz, - unsigned long order) + xlen_t hbase, xlen_t hmask, + xlen_t gva, xlen_t gvsz, + xlen_t order) { struct kvm_riscv_hfence data; @@ -421,7 +423,7 @@ void kvm_riscv_hfence_vvma_gva(struct kvm 
*kvm, } void kvm_riscv_hfence_vvma_all(struct kvm *kvm, - unsigned long hbase, unsigned long hmask) + xlen_t hbase, xlen_t hmask) { make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_VVMA_ALL, KVM_REQ_HFENCE_VVMA_ALL, NULL); diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 60d684c76c58..144e25ead287 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -797,11 +797,11 @@ static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu, if (kvm_riscv_nacl_autoswap_csr_available()) { hcntx->hstatus = nacl_csr_read(nsh, CSR_HSTATUS); - nacl_scratch_write_long(nsh, + nacl_scratch_write_csr(nsh, SBI_NACL_SHMEM_AUTOSWAP_OFFSET + SBI_NACL_SHMEM_AUTOSWAP_HSTATUS, gcntx->hstatus); - nacl_scratch_write_long(nsh, + nacl_scratch_write_csr(nsh, SBI_NACL_SHMEM_AUTOSWAP_OFFSET, SBI_NACL_SHMEM_AUTOSWAP_FLAG_HSTATUS); } else if (kvm_riscv_nacl_sync_csr_available()) { @@ -811,7 +811,7 @@ static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu, hcntx->hstatus = csr_swap(CSR_HSTATUS, gcntx->hstatus); } - nacl_scratch_write_longs(nsh, + nacl_scratch_write_csrs(nsh, SBI_NACL_SHMEM_SRET_OFFSET + SBI_NACL_SHMEM_SRET_X(1), &gcntx->ra, @@ -821,10 +821,10 @@ static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu, SBI_EXT_NACL_SYNC_SRET); if (kvm_riscv_nacl_autoswap_csr_available()) { - nacl_scratch_write_long(nsh, + nacl_scratch_write_csr(nsh, SBI_NACL_SHMEM_AUTOSWAP_OFFSET, 0); - gcntx->hstatus = nacl_scratch_read_long(nsh, + gcntx->hstatus = nacl_scratch_read_csr(nsh, SBI_NACL_SHMEM_AUTOSWAP_OFFSET + SBI_NACL_SHMEM_AUTOSWAP_HSTATUS); } else { diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index 6e0c18412795..0f6b80d87825 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -246,11 +246,11 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, /* Print details in-case of error */ if (ret < 0) { kvm_err("VCPU exit error %d\n", ret); - kvm_err("SEPC=0x%lx SSTATUS=0x%lx 
HSTATUS=0x%lx\n", + kvm_err("SEPC=0x" REG_FMT " SSTATUS=0x" REG_FMT " HSTATUS=0x" REG_FMT "\n", vcpu->arch.guest_context.sepc, vcpu->arch.guest_context.sstatus, vcpu->arch.guest_context.hstatus); - kvm_err("SCAUSE=0x%lx STVAL=0x%lx HTVAL=0x%lx HTINST=0x%lx\n", + kvm_err("SCAUSE=0x" REG_FMT " STVAL=0x" REG_FMT " HTVAL=0x" REG_FMT " HTINST=0x" REG_FMT "\n", trap->scause, trap->stval, trap->htval, trap->htinst); } diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c index 97dec18e6989..c25415d63d96 100644 --- a/arch/riscv/kvm/vcpu_insn.c +++ b/arch/riscv/kvm/vcpu_insn.c @@ -221,13 +221,13 @@ struct csr_func { * "struct insn_func". */ int (*func)(struct kvm_vcpu *vcpu, unsigned int csr_num, - unsigned long *val, unsigned long new_val, - unsigned long wr_mask); + xlen_t *val, xlen_t new_val, + xlen_t wr_mask); }; static int seed_csr_rmw(struct kvm_vcpu *vcpu, unsigned int csr_num, - unsigned long *val, unsigned long new_val, - unsigned long wr_mask) + xlen_t *val, xlen_t new_val, + xlen_t wr_mask) { if (!riscv_isa_extension_available(vcpu->arch.isa, ZKR)) return KVM_INSN_ILLEGAL_TRAP; @@ -275,9 +275,9 @@ static int csr_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn) int i, rc = KVM_INSN_ILLEGAL_TRAP; unsigned int csr_num = insn >> SH_RS2; unsigned int rs1_num = (insn >> SH_RS1) & MASK_RX; - ulong rs1_val = GET_RS1(insn, &vcpu->arch.guest_context); + xlen_t rs1_val = GET_RS1(insn, &vcpu->arch.guest_context); const struct csr_func *tcfn, *cfn = NULL; - ulong val = 0, wr_mask = 0, new_val = 0; + xlen_t val = 0, wr_mask = 0, new_val = 0; /* Decode the CSR instruction */ switch (GET_FUNCT3(insn)) { diff --git a/arch/riscv/kvm/vcpu_onereg.c b/arch/riscv/kvm/vcpu_onereg.c index f6d27b59c641..34e11fbe27e8 100644 --- a/arch/riscv/kvm/vcpu_onereg.c +++ b/arch/riscv/kvm/vcpu_onereg.c @@ -448,7 +448,7 @@ static int kvm_riscv_vcpu_set_reg_core(struct kvm_vcpu *vcpu, static int kvm_riscv_vcpu_general_get_csr(struct kvm_vcpu *vcpu, unsigned long reg_num, -
unsigned long *out_val) + xlen_t *out_val) { struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; @@ -494,24 +494,24 @@ static inline int kvm_riscv_vcpu_smstateen_set_csr(struct kvm_vcpu *vcpu, struct kvm_vcpu_smstateen_csr *csr = &vcpu->arch.smstateen_csr; if (reg_num >= sizeof(struct kvm_riscv_smstateen_csr) / - sizeof(unsigned long)) + sizeof(xlen_t)) return -EINVAL; - ((unsigned long *)csr)[reg_num] = reg_val; + ((xlen_t *)csr)[reg_num] = reg_val; return 0; } static int kvm_riscv_vcpu_smstateen_get_csr(struct kvm_vcpu *vcpu, unsigned long reg_num, - unsigned long *out_val) + xlen_t *out_val) { struct kvm_vcpu_smstateen_csr *csr = &vcpu->arch.smstateen_csr; if (reg_num >= sizeof(struct kvm_riscv_smstateen_csr) / - sizeof(unsigned long)) + sizeof(xlen_t)) return -EINVAL; - *out_val = ((unsigned long *)csr)[reg_num]; + *out_val = ((xlen_t *)csr)[reg_num]; return 0; } @@ -519,12 +519,12 @@ static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { int rc; - unsigned long __user *uaddr = - (unsigned long __user *)(unsigned long)reg->addr; + xlen_t __user *uaddr = + (xlen_t __user *)(unsigned long)reg->addr; unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_RISCV_CSR); - unsigned long reg_val, reg_subtype; + xlen_t reg_val, reg_subtype; if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long)) return -EINVAL; diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c index 2707a51b082c..3bfecda72150 100644 --- a/arch/riscv/kvm/vcpu_pmu.c +++ b/arch/riscv/kvm/vcpu_pmu.c @@ -198,7 +198,7 @@ static int pmu_get_pmc_index(struct kvm_pmu *pmu, unsigned long eidx, } static int pmu_fw_ctr_read_hi(struct kvm_vcpu *vcpu, unsigned long cidx, - unsigned long *out_val) + xlen_t *out_val) { struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); struct kvm_pmc *pmc; @@ -228,7 +228,7 @@ static int pmu_fw_ctr_read_hi(struct kvm_vcpu *vcpu, unsigned long cidx, } static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, - 
unsigned long *out_val) + xlen_t *out_val) { struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); struct kvm_pmc *pmc; @@ -354,8 +354,8 @@ int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid) } int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num, - unsigned long *val, unsigned long new_val, - unsigned long wr_mask) + xlen_t *val, xlen_t new_val, + xlen_t wr_mask) { struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); int cidx, ret = KVM_INSN_CONTINUE_NEXT_SEPC; diff --git a/arch/riscv/kvm/vcpu_sbi_base.c b/arch/riscv/kvm/vcpu_sbi_base.c index 5bc570b984f4..a243339a73fd 100644 --- a/arch/riscv/kvm/vcpu_sbi_base.c +++ b/arch/riscv/kvm/vcpu_sbi_base.c @@ -18,7 +18,7 @@ static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, { struct kvm_cpu_context *cp = &vcpu->arch.guest_context; const struct kvm_vcpu_sbi_extension *sbi_ext; - unsigned long *out_val = &retdata->out_val; + xlen_t *out_val = &retdata->out_val; switch (cp->a6) { case SBI_EXT_BASE_GET_SPEC_VERSION: diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c index ddc98714ce8e..17744dfaf008 100644 --- a/arch/riscv/kvm/vmid.c +++ b/arch/riscv/kvm/vmid.c @@ -17,7 +17,7 @@ static unsigned long vmid_version = 1; static unsigned long vmid_next; -static unsigned long vmid_bits __ro_after_init; +static xlen_t vmid_bits __ro_after_init; static DEFINE_SPINLOCK(vmid_lock); void __init kvm_riscv_gstage_vmid_detect(void) @@ -40,7 +40,7 @@ void __init kvm_riscv_gstage_vmid_detect(void) vmid_bits = 0; } -unsigned long kvm_riscv_gstage_vmid_bits(void) +xlen_t kvm_riscv_gstage_vmid_bits(void) { return vmid_bits; } From patchwork Tue Mar 25 12:16:00 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Guo Ren X-Patchwork-Id: 14028467 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher 
From: guoren@kernel.org
Subject: [RFC PATCH V3 19/43] rv64ilp32_abi: irqchip: irq-riscv-intc: Use xlen_t instead of ulong
Date: Tue, 25 Mar 2025 08:16:00 -0400
Message-Id: <20250325121624.523258-20-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI is based on CONFIG_64BIT, so use xlen/xlen_t instead of BITS_PER_LONG/ulong.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 drivers/irqchip/irq-riscv-intc.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c index f653c13de62b..4fc7d5704acf 100644 --- a/drivers/irqchip/irq-riscv-intc.c +++ b/drivers/irqchip/irq-riscv-intc.c @@ -20,18 +20,19 @@ #include #include +#include static struct irq_domain *intc_domain; -static unsigned int riscv_intc_nr_irqs __ro_after_init = BITS_PER_LONG; -static unsigned int riscv_intc_custom_base __ro_after_init = BITS_PER_LONG; +static unsigned int riscv_intc_nr_irqs __ro_after_init = __riscv_xlen; +static unsigned int riscv_intc_custom_base __ro_after_init = __riscv_xlen; static unsigned int riscv_intc_custom_nr_irqs __ro_after_init; static void riscv_intc_irq(struct pt_regs *regs) { - unsigned long cause = regs->cause & ~CAUSE_IRQ_FLAG; + xlen_t cause = regs->cause & ~CAUSE_IRQ_FLAG; if (generic_handle_domain_irq(intc_domain, cause)) - pr_warn_ratelimited("Failed to handle interrupt (cause: %ld)\n", cause); + pr_warn_ratelimited("Failed to handle interrupt (cause: " REG_FMT ")\n", cause); } static void riscv_intc_aia_irq(struct pt_regs *regs)
From patchwork Tue Mar 25 12:16:01 2025
From: guoren@kernel.org
Subject: [RFC PATCH V3 20/43] rv64ilp32_abi: drivers/perf: Adapt xlen_t of sbiret
Date: Tue, 25 Mar 2025 08:16:01 -0400
Message-Id: <20250325121624.523258-21-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The rv64ilp32 ABI changes the sbiret struct from:

struct sbiret { long error; long value; };

to:

struct sbiret { xlen_t error; xlen_t value; };

so cast the members to long to prevent a compile warning.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 drivers/perf/riscv_pmu_sbi.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c index 698de8ddf895..e2c4d1bcbc7c 100644 --- a/drivers/perf/riscv_pmu_sbi.c +++ b/drivers/perf/riscv_pmu_sbi.c @@ -638,7 +638,7 @@ static int pmu_sbi_snapshot_setup(struct riscv_pmu *pmu, int cpu) /* Free up the snapshot area memory and fall back to SBI PMU calls without snapshot */ if (ret.error) { if (ret.error != SBI_ERR_NOT_SUPPORTED) - pr_warn("pmu snapshot setup failed with error %ld\n", ret.error); + pr_warn("pmu snapshot setup failed with error %ld\n", (long)ret.error); return sbi_err_map_linux_errno(ret.error); } @@ -679,7 +679,7 @@ static u64 pmu_sbi_ctr_read(struct perf_event *event) val |= ((u64)ret.value << 32); else WARN_ONCE(1, "Unable to read upper 32 bits of firmware counter error: %ld\n", - ret.error); + (long)ret.error); } } else { val = riscv_pmu_ctr_read_csr(info.csr);
From patchwork Tue Mar 25 12:16:02 2025
From: guoren@kernel.org
Subject: [RFC PATCH V3 21/43] rv64ilp32_abi: asm-generic: Add custom BITS_PER_LONG definition
Date: Tue, 25 Mar 2025 08:16:02 -0400
Message-Id: <20250325121624.523258-22-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI kernel is built with CONFIG_64BIT, but its BITS_PER_LONG is 32.
Give the architecture its own BITS_PER_LONG definition so the generic header picks up the correct value.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/uapi/asm/bitsperlong.h | 6 ++++++
 include/asm-generic/bitsperlong.h | 2 ++
 2 files changed, 8 insertions(+)

diff --git a/arch/riscv/include/uapi/asm/bitsperlong.h b/arch/riscv/include/uapi/asm/bitsperlong.h index 7d0b32e3b701..fec2ad91597c 100644 --- a/arch/riscv/include/uapi/asm/bitsperlong.h +++ b/arch/riscv/include/uapi/asm/bitsperlong.h @@ -9,6 +9,12 @@ #define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8) +#if __BITS_PER_LONG == 64 +#define BITS_PER_LONG 64 +#else +#define BITS_PER_LONG 32 +#endif + #include #endif /* _UAPI_ASM_RISCV_BITSPERLONG_H */ diff --git a/include/asm-generic/bitsperlong.h b/include/asm-generic/bitsperlong.h index 1023e2a4bd37..7ccbb7ce6610 100644 --- a/include/asm-generic/bitsperlong.h +++ b/include/asm-generic/bitsperlong.h @@ -6,7 +6,9 @@ #ifdef CONFIG_64BIT +#ifndef BITS_PER_LONG #define BITS_PER_LONG 64 +#endif #else #define BITS_PER_LONG 32 #endif /* CONFIG_64BIT */
From patchwork Tue Mar 25 12:16:03 2025
From: guoren@kernel.org
josef@toxicpanda.com, dsterba@suse.com, mingo@redhat.com, peterz@infradead.org, boqun.feng@gmail.com, guoren@kernel.org, xiao.w.wang@intel.com, qingfang.deng@siflower.com.cn, leobras@redhat.com, jszhang@kernel.org, conor.dooley@microchip.com, samuel.holland@sifive.com, yongxuan.wang@sifive.com, luxu.kernel@bytedance.com, david@redhat.com, ruanjinjie@huawei.com, cuiyunhui@bytedance.com, wangkefeng.wang@huawei.com, qiaozhe@iscas.ac.cn Cc: ardb@kernel.org, ast@kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-mm@kvack.org, linux-crypto@vger.kernel.org, bpf@vger.kernel.org, linux-input@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-serial@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, maple-tree@lists.infradead.org, linux-trace-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-atm-general@lists.sourceforge.net, linux-btrfs@vger.kernel.org, netfilter-devel@vger.kernel.org, coreteam@netfilter.org, linux-nfs@vger.kernel.org, linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org, linux-media@vger.kernel.org Subject: [RFC PATCH V3 22/43] rv64ilp32_abi: bpf: Change KERN_ARENA_SZ to 256MiB Date: Tue, 25 Mar 2025 08:16:03 -0400 Message-Id: <20250325121624.523258-23-guoren@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20250325121624.523258-1-guoren@kernel.org> References: <20250325121624.523258-1-guoren@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Guo Ren (Alibaba DAMO Academy)" The RV64ILP32 ABI limits the vmalloc range to 512MB, hence the arena kernel range is set to 256MiB instead of 4GiB. 
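The size selection and the max_entries check can be sketched in plain userspace C. This is illustrative only, not kernel code: the SZ_* values mirror the patch, and check_vm_range is a hypothetical stand-in for the vm_range check in arena_map_alloc().

```c
#include <stdint.h>

/* Mirrors the patch's arena cap: 4 GiB when BITS_PER_LONG is 64,
 * 256 MiB under the RV64ILP32 ABI where BITS_PER_LONG is 32. */
#define SZ_256M (256ULL << 20)
#define SZ_4G   (4ULL << 30)

static uint64_t kern_arena_sz(int bits_per_long)
{
    return bits_per_long == 64 ? SZ_4G : SZ_256M;
}

/* Returns 0 on success, -1 (standing in for -E2BIG) when the
 * requested range exceeds the cap, as the patched vm_range check
 * does for attr->max_entries * PAGE_SIZE. */
static int check_vm_range(uint64_t max_entries, uint64_t page_size,
                          int bits_per_long)
{
    uint64_t vm_range = max_entries * page_size;
    return vm_range > kern_arena_sz(bits_per_long) ? -1 : 0;
}
```

With 4 KiB pages, 2^20 entries fill exactly 4 GiB and 2^16 entries fill exactly 256 MiB, so the cap is inclusive in both cases.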
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 kernel/bpf/arena.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
index 870aeb51d70a..4eb99f83d4a1 100644
--- a/kernel/bpf/arena.c
+++ b/kernel/bpf/arena.c
@@ -40,7 +40,14 @@
 /* number of bytes addressable by LDX/STX insn with 16-bit 'off' field */
 #define GUARD_SZ (1ull << sizeof_field(struct bpf_insn, off) * 8)
-#define KERN_VM_SZ (SZ_4G + GUARD_SZ)
+
+#if BITS_PER_LONG == 64
+#define KERN_ARENA_SZ SZ_4G
+#else
+#define KERN_ARENA_SZ SZ_256M
+#endif
+
+#define KERN_VM_SZ (KERN_ARENA_SZ + GUARD_SZ)
 
 struct bpf_arena {
 	struct bpf_map map;
@@ -115,7 +122,7 @@ static struct bpf_map *arena_map_alloc(union bpf_attr *attr)
 		return ERR_PTR(-EINVAL);
 
 	vm_range = (u64)attr->max_entries * PAGE_SIZE;
-	if (vm_range > SZ_4G)
+	if (vm_range > KERN_ARENA_SZ)
 		return ERR_PTR(-E2BIG);
 
 	if ((attr->map_extra >> 32) != ((attr->map_extra + vm_range - 1) >> 32))
@@ -321,7 +328,7 @@ static unsigned long arena_get_unmapped_area(struct file *filp, unsigned long ad
 	if (pgoff)
 		return -EINVAL;
-	if (len > SZ_4G)
+	if (len > KERN_ARENA_SZ)
 		return -E2BIG;
 
 	/* if user_vm_start was specified at arena creation time */
@@ -337,12 +344,14 @@ static unsigned long arena_get_unmapped_area(struct file *filp, unsigned long ad
 	ret = mm_get_unmapped_area(current->mm, filp, addr, len * 2, 0, flags);
 	if (IS_ERR_VALUE(ret))
 		return ret;
+#if BITS_PER_LONG == 64
 	if ((ret >> 32) == ((ret + len - 1) >> 32))
 		return ret;
+#endif
 	if (WARN_ON_ONCE(arena->user_vm_start))
 		/* checks at map creation time should prevent this */
 		return -EFAULT;
-	return round_up(ret, SZ_4G);
+	return round_up(ret, KERN_ARENA_SZ);
 }
 
 static int arena_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
@@ -366,7 +375,7 @@ static int arena_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
 		return -EBUSY;
 
 	/* Earlier checks should prevent this */
-	if (WARN_ON_ONCE(vma->vm_end - vma->vm_start > SZ_4G || vma->vm_pgoff))
+	if (WARN_ON_ONCE(vma->vm_end - vma->vm_start > KERN_ARENA_SZ || vma->vm_pgoff))
 		return -EFAULT;
 
 	if (remember_vma(arena, vma))

From patchwork Tue Mar 25 12:16:04 2025
From: guoren@kernel.org
Subject: [RFC PATCH V3 23/43] rv64ilp32_abi: compat: Correct compat_ulong_t cast
Date: Tue, 25 Mar 2025 08:16:04 -0400
Message-Id: <20250325121624.523258-24-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

On RV64ILP32 ABI systems, BITS_PER_LONG is 32, so unsigned long has the same width as compat_ulong_t. Adjust the code that casts and widens compat_ulong_t accordingly.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/uapi/linux/auto_fs.h |  6 ++++++
 kernel/compat.c              | 15 ++++++++++++---
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/include/uapi/linux/auto_fs.h b/include/uapi/linux/auto_fs.h
index 8081df849743..7d925ee810b6 100644
--- a/include/uapi/linux/auto_fs.h
+++ b/include/uapi/linux/auto_fs.h
@@ -80,9 +80,15 @@ enum {
 #define AUTOFS_IOC_SETTIMEOUT32 _IOWR(AUTOFS_IOCTL, \
 					AUTOFS_IOC_SETTIMEOUT_CMD, \
 					compat_ulong_t)
+#if __riscv_xlen == 64
+#define AUTOFS_IOC_SETTIMEOUT _IOWR(AUTOFS_IOCTL, \
+					AUTOFS_IOC_SETTIMEOUT_CMD, \
+					unsigned long long)
+#else
 #define AUTOFS_IOC_SETTIMEOUT _IOWR(AUTOFS_IOCTL, \
 					AUTOFS_IOC_SETTIMEOUT_CMD, \
 					unsigned long)
+#endif
 #define AUTOFS_IOC_EXPIRE _IOR(AUTOFS_IOCTL, \
 				AUTOFS_IOC_EXPIRE_CMD, \
 				struct autofs_packet_expire)
diff --git a/kernel/compat.c b/kernel/compat.c
index fb50f29d9b36..46ffdc5e7cc4 100644
--- a/kernel/compat.c
+++ b/kernel/compat.c
@@ -203,11 +203,17 @@ long compat_get_bitmap(unsigned long *mask, const compat_ulong_t __user *umask,
 		return -EFAULT;
 
 	while (nr_compat_longs > 1) {
-		compat_ulong_t l1, l2;
+		compat_ulong_t l1;
 
 		unsafe_get_user(l1, umask++, Efault);
+		nr_compat_longs -= 1;
+#if BITS_PER_LONG == 64
+		compat_ulong_t l2;
 		unsafe_get_user(l2, umask++, Efault);
 		*mask++ = ((unsigned long)l2 << BITS_PER_COMPAT_LONG) | l1;
-		nr_compat_longs -= 2;
+		nr_compat_longs -= 1;
+#else
+		*mask++ = l1;
+#endif
 	}
 	if (nr_compat_longs)
 		unsafe_get_user(*mask, umask++, Efault);
@@ -234,8 +240,11 @@ long compat_put_bitmap(compat_ulong_t __user *umask, unsigned long *mask,
 	while (nr_compat_longs > 1) {
 		unsigned long m = *mask++;
 
 		unsafe_put_user((compat_ulong_t)m, umask++, Efault);
+		nr_compat_longs -= 1;
+#if BITS_PER_LONG == 64
 		unsafe_put_user(m >> BITS_PER_COMPAT_LONG, umask++, Efault);
-		nr_compat_longs -= 2;
+		nr_compat_longs -= 1;
+#endif
 	}
 	if (nr_compat_longs)
 		unsafe_put_user((compat_ulong_t)*mask, umask++, Efault);

From patchwork Tue Mar 25 12:16:05 2025
From: guoren@kernel.org
Subject: [RFC PATCH V3 24/43] rv64ilp32_abi: compiler_types: Add "long long" into __native_word()
Date: Tue, 25 Mar 2025 08:16:05 -0400
Message-Id: <20250325121624.523258-25-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The rv64ilp32 ABI supports native atomic64 operations. Because atomic64_t is defined with a "long long" counter, add "long long" to __native_word().
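A minimal userspace model of the patched predicate: the macro body below matches the CONFIG_64BIT variant this patch introduces, and atomic64_like_t is a stand-in for the kernel's atomic64_t (which wraps a long long counter).

```c
#include <stddef.h>

/* The patched predicate for CONFIG_64BIT builds: a type is a "native
 * word" if its size matches one of the machine's scalar widths, now
 * including long long so a long long counter still qualifies on
 * rv64ilp32, where long is only 32 bits wide. */
#define __native_word(t) \
    (sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) || \
     sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long) || \
     sizeof(t) == sizeof(long long))

/* Stand-in for the kernel's atomic64_t. */
typedef struct { long long counter; } atomic64_like_t;

static int native_word_longlong(void) { return __native_word(long long); }
static int native_word_atomic64(void) { return __native_word(atomic64_like_t); }
```

Without the long long arm, __native_word(atomic64_like_t) would be false on a target whose long is 32 bits, and compile-time assertions built on the predicate would reject atomic64_t.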
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/linux/compiler_types.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 981cc3d7e3aa..6cf36a8e9570 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -505,9 +505,16 @@ struct ftrace_likely_data {
 		default: (x)))
 
 /* Is this type a native word size -- useful for atomic operations */
+#ifdef CONFIG_64BIT
+#define __native_word(t) \
+	(sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) || \
+	 sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long) || \
+	 sizeof(t) == sizeof(long long))
+#else
 #define __native_word(t) \
 	(sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) || \
 	 sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long))
+#endif
 
 #ifdef __OPTIMIZE__
 # define __compiletime_assert(condition, msg, prefix, suffix) \

From patchwork Tue Mar 25 12:16:06 2025
From: guoren@kernel.org
Subject: [RFC PATCH V3 25/43] rv64ilp32_abi: exec: Adapt 64lp64 env and argv
Date: Tue, 25 Mar 2025 08:16:06 -0400
Message-Id: <20250325121624.523258-26-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The rv64ilp32 ABI reuses the env and argv memory layout of the lp64 ABI, so leave space to fit the lp64 struct layout.
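How the doubled index plays out can be sketched in userspace C. read_arg_slot below is a hypothetical model, not the kernel's get_user_arg_ptr(): the argv area keeps lp64's 8-byte pointer slots, but a kernel whose unsigned long is 4 bytes indexes it as 4-byte entries, so slot n lives at native index 2*n (assuming a little-endian layout, as on RISC-V).

```c
#include <stdint.h>
#include <string.h>

/* Read argv slot 'nr' from an lp64-layout argv area. With 64-bit
 * longs this is a direct 8-byte load; with 32-bit longs the index is
 * doubled, exactly as the patch does, and the low word (the pointer
 * value under ilp32) is read. */
static uint64_t read_arg_slot(const unsigned char *argv_area, int nr,
                              int bits_per_long)
{
    if (bits_per_long == 64) {
        uint64_t v;
        memcpy(&v, argv_area + 8 * nr, sizeof(v));
        return v;
    }
    nr = nr * 2;                  /* same scaling as the patch */
    uint32_t v;
    memcpy(&v, argv_area + 4 * nr, sizeof(v));
    return v;
}
```

Without the scaling, the 32-bit-long kernel would read slot n/2's halves instead of slot n.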
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 fs/exec.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/fs/exec.c b/fs/exec.c
index 506cd411f4ac..548d18b7ae92 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -424,6 +424,10 @@ static const char __user *get_user_arg_ptr(struct user_arg_ptr argv, int nr)
 	}
 #endif
 
+#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+	nr = nr * 2;
+#endif
+
 	if (get_user(native, argv.ptr.native + nr))
 		return ERR_PTR(-EFAULT);

From patchwork Tue Mar 25 12:16:07 2025
From: guoren@kernel.org
Subject: [RFC PATCH V3 26/43] rv64ilp32_abi: file_ref: Use 32-bit width for refcnt
Date: Tue, 25 Mar 2025 08:16:07 -0400
Message-Id: <20250325121624.523258-27-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

sizeof(atomic_t) is 4 in the rv64ilp32 ABI kernel, which gives a higher cache density and a smaller memory footprint.
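The footprint effect can be sketched with plain structs. The *_sketch types below are stand-ins for the kernel's atomic types, and the sizes assume the usual 4-byte int / 8-byte long long ABI:

```c
#include <stddef.h>

/* Stand-ins for the kernel's atomic_t and atomic64_t. */
typedef struct { int counter; } atomic_sketch_t;
typedef struct { long long counter; } atomic64_sketch_t;

/* file_ref before and after the patch: with BITS_PER_LONG == 32 the
 * refcnt becomes a 4-byte atomic_t instead of an 8-byte atomic64_t. */
typedef struct { atomic64_sketch_t refcnt; } file_ref_64;
typedef struct { atomic_sketch_t refcnt; } file_ref_32;

static size_t file_ref_size(int bits_per_long)
{
    return bits_per_long == 64 ? sizeof(file_ref_64) : sizeof(file_ref_32);
}
```

Halving each file_ref means twice as many fit in a cache line, which is where the density claim in the commit message comes from.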
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/linux/file_ref.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/file_ref.h b/include/linux/file_ref.h
index 9b3a8d9b17ab..ce9b47359e14 100644
--- a/include/linux/file_ref.h
+++ b/include/linux/file_ref.h
@@ -27,7 +27,7 @@
  * 0xFFFFFFFFFFFFFFFFUL
  */
 
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 #define FILE_REF_ONEREF		0x0000000000000000UL
 #define FILE_REF_MAXREF		0x7FFFFFFFFFFFFFFFUL
 #define FILE_REF_SATURATED	0xA000000000000000UL
@@ -44,7 +44,7 @@
 #endif
 
 typedef struct {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	atomic64_t refcnt;
 #else
 	atomic_t refcnt;

From patchwork Tue Mar 25 12:16:08 2025
From: guoren@kernel.org
Subject: [RFC PATCH V3 27/43] rv64ilp32_abi: input: Adapt BITS_PER_LONG to dword
Date: Tue, 25 Mar 2025 08:16:08 -0400
Message-Id: <20250325121624.523258-28-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI kernel is built on CONFIG_64BIT, but BITS_PER_LONG is 32. So, derive the high dword according to BITS_PER_LONG.
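The compat formatting logic can be modeled in userspace. format_compat_dword is illustrative, not the driver function; uint64_t stands in for the kernel's unsigned long so the shift is well-defined on any host:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the patched compat path in input_bits_to_string(): with
 * 64-bit longs the high dword is bits >> 32; when BITS_PER_LONG is 32
 * the long already *is* a dword and is used as-is. */
static int format_compat_dword(char *buf, size_t buf_size,
                               uint64_t bits, int bits_per_long)
{
    uint32_t dword;

    if (bits_per_long == 64)
        dword = (uint32_t)(bits >> 32);
    else
        dword = (uint32_t)bits;
    return snprintf(buf, buf_size, "%x ", dword);
}
```

Keeping the unconditional `>> 32` on a 32-bit-long kernel would be undefined behavior, which is what the #if in the patch avoids.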
Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- drivers/input/input.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/input/input.c b/drivers/input/input.c index c9e3ac64bcd0..7af5e8c66f25 100644 --- a/drivers/input/input.c +++ b/drivers/input/input.c @@ -1006,7 +1006,11 @@ static int input_bits_to_string(char *buf, int buf_size, int len = 0; if (in_compat_syscall()) { +#if BITS_PER_LONG == 64 u32 dword = bits >> 32; +#else + u32 dword = bits; +#endif if (dword || !skip_empty) len += snprintf(buf, buf_size, "%x ", dword); From patchwork Tue Mar 25 12:16:09 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Guo Ren X-Patchwork-Id: 14028820 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B3DF825FA12; Tue, 25 Mar 2025 12:23:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742905408; cv=none; b=uQsMZo0Bm8ndLdF2ik0NN25hoIqPdVMrr1p9qlVt0BX0cEHeSlIJ4LDJj5tGDdoL4HyPY7gz/Ul87E8dZAgYgcQlSdnrLmNLp5GpUeB0jJEIrQWoctC31BERqXfqU+OMGlD63TmgdUs7zgl4Cqz+2e1KBgN73lmJXt0xxXbX2AY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1742905408; c=relaxed/simple; bh=pIv4vnuS3dsIZ+vCl3ehqADgm2CZ2tBy3QR8swaXSZQ=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=BUASAtCzVyNhBZULnImSGkXqPIxyxj1lDMAMqhhC71sKeAIxtKqgdlCAuW+J5JyhEEM1WMzQcPpFt8KZPer+UkaUp67uT+dfAhFy7fxKHS4iEDyKq4c1h7rx8qKXcmuQow4Bug8JW1HhMmy3XoepWUgIdcRoFDBsyjLpxIu3FOQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=ecDCGr6O; 
From: guoren@kernel.org
Subject: [RFC PATCH V3 28/43] rv64ilp32_abi: iov_iter: Resize kvec to match iov_iter's size
Date: Tue, 25 Mar 2025 08:16:09 -0400
Message-Id: <20250325121624.523258-29-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

drivers/vhost/vringh.c asserts BUILD_BUG_ON(sizeof(struct iovec) != sizeof(struct kvec)), so pad struct kvec to keep the two structures the same size under the RV64ILP32 ABI.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/linux/uio.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 8ada84e85447..0e1ca023374c 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -17,7 +17,13 @@ typedef unsigned int __bitwise iov_iter_extraction_t;
 
 struct kvec {
 	void *iov_base; /* and that should *never* hold a userland pointer */
+#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+	u32 __pad1;
+#endif
 	size_t iov_len;
+#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+	u32 __pad2;
+#endif
 };
 
 enum iter_type {

From patchwork Tue Mar 25 12:16:10 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 14028821
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: guoren@kernel.org
Subject: [RFC PATCH V3 29/43] rv64ilp32_abi: locking/atomic: Use BITS_PER_LONG for scripts
Date: Tue, 25 Mar 2025 08:16:10 -0400
Message-Id: <20250325121624.523258-30-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

On RV64ILP32 ABI systems, BITS_PER_LONG is 32 even though CONFIG_64BIT is set, so the atomic_long code selection must key off BITS_PER_LONG, not CONFIG_64BIT.
Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- include/linux/atomic/atomic-long.h | 174 ++++++++++++++--------------- scripts/atomic/gen-atomic-long.sh | 4 +- 2 files changed, 89 insertions(+), 89 deletions(-) diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h index f86b29d90877..e31e0bdf9e26 100644 --- a/include/linux/atomic/atomic-long.h +++ b/include/linux/atomic/atomic-long.h @@ -9,7 +9,7 @@ #include #include -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 typedef atomic64_t atomic_long_t; #define ATOMIC_LONG_INIT(i) ATOMIC64_INIT(i) #define atomic_long_cond_read_acquire atomic64_cond_read_acquire @@ -34,7 +34,7 @@ typedef atomic_t atomic_long_t; static __always_inline long raw_atomic_long_read(const atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_read(v); #else return raw_atomic_read(v); @@ -54,7 +54,7 @@ raw_atomic_long_read(const atomic_long_t *v) static __always_inline long raw_atomic_long_read_acquire(const atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_read_acquire(v); #else return raw_atomic_read_acquire(v); @@ -75,7 +75,7 @@ raw_atomic_long_read_acquire(const atomic_long_t *v) static __always_inline void raw_atomic_long_set(atomic_long_t *v, long i) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_set(v, i); #else raw_atomic_set(v, i); @@ -96,7 +96,7 @@ raw_atomic_long_set(atomic_long_t *v, long i) static __always_inline void raw_atomic_long_set_release(atomic_long_t *v, long i) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_set_release(v, i); #else raw_atomic_set_release(v, i); @@ -117,7 +117,7 @@ raw_atomic_long_set_release(atomic_long_t *v, long i) static __always_inline void raw_atomic_long_add(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_add(i, v); #else raw_atomic_add(i, v); @@ -138,7 +138,7 @@ raw_atomic_long_add(long i, atomic_long_t *v) static __always_inline long 
raw_atomic_long_add_return(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_return(i, v); #else return raw_atomic_add_return(i, v); @@ -159,7 +159,7 @@ raw_atomic_long_add_return(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_return_acquire(i, v); #else return raw_atomic_add_return_acquire(i, v); @@ -180,7 +180,7 @@ raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_add_return_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_return_release(i, v); #else return raw_atomic_add_return_release(i, v); @@ -201,7 +201,7 @@ raw_atomic_long_add_return_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_return_relaxed(i, v); #else return raw_atomic_add_return_relaxed(i, v); @@ -222,7 +222,7 @@ raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_add(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_add(i, v); #else return raw_atomic_fetch_add(i, v); @@ -243,7 +243,7 @@ raw_atomic_long_fetch_add(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_add_acquire(i, v); #else return raw_atomic_fetch_add_acquire(i, v); @@ -264,7 +264,7 @@ raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_add_release(i, v); #else return 
raw_atomic_fetch_add_release(i, v); @@ -285,7 +285,7 @@ raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_add_relaxed(i, v); #else return raw_atomic_fetch_add_relaxed(i, v); @@ -306,7 +306,7 @@ raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_sub(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_sub(i, v); #else raw_atomic_sub(i, v); @@ -327,7 +327,7 @@ raw_atomic_long_sub(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_sub_return(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_sub_return(i, v); #else return raw_atomic_sub_return(i, v); @@ -348,7 +348,7 @@ raw_atomic_long_sub_return(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_sub_return_acquire(i, v); #else return raw_atomic_sub_return_acquire(i, v); @@ -369,7 +369,7 @@ raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_sub_return_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_sub_return_release(i, v); #else return raw_atomic_sub_return_release(i, v); @@ -390,7 +390,7 @@ raw_atomic_long_sub_return_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_sub_return_relaxed(i, v); #else return raw_atomic_sub_return_relaxed(i, v); @@ -411,7 +411,7 @@ raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_sub(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if 
BITS_PER_LONG == 64 return raw_atomic64_fetch_sub(i, v); #else return raw_atomic_fetch_sub(i, v); @@ -432,7 +432,7 @@ raw_atomic_long_fetch_sub(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_sub_acquire(i, v); #else return raw_atomic_fetch_sub_acquire(i, v); @@ -453,7 +453,7 @@ raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_sub_release(i, v); #else return raw_atomic_fetch_sub_release(i, v); @@ -474,7 +474,7 @@ raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_sub_relaxed(i, v); #else return raw_atomic_fetch_sub_relaxed(i, v); @@ -494,7 +494,7 @@ raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_inc(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_inc(v); #else raw_atomic_inc(v); @@ -514,7 +514,7 @@ raw_atomic_long_inc(atomic_long_t *v) static __always_inline long raw_atomic_long_inc_return(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_return(v); #else return raw_atomic_inc_return(v); @@ -534,7 +534,7 @@ raw_atomic_long_inc_return(atomic_long_t *v) static __always_inline long raw_atomic_long_inc_return_acquire(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_return_acquire(v); #else return raw_atomic_inc_return_acquire(v); @@ -554,7 +554,7 @@ raw_atomic_long_inc_return_acquire(atomic_long_t *v) static __always_inline long raw_atomic_long_inc_return_release(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 
return raw_atomic64_inc_return_release(v); #else return raw_atomic_inc_return_release(v); @@ -574,7 +574,7 @@ raw_atomic_long_inc_return_release(atomic_long_t *v) static __always_inline long raw_atomic_long_inc_return_relaxed(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_return_relaxed(v); #else return raw_atomic_inc_return_relaxed(v); @@ -594,7 +594,7 @@ raw_atomic_long_inc_return_relaxed(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_inc(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_inc(v); #else return raw_atomic_fetch_inc(v); @@ -614,7 +614,7 @@ raw_atomic_long_fetch_inc(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_inc_acquire(v); #else return raw_atomic_fetch_inc_acquire(v); @@ -634,7 +634,7 @@ raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_inc_release(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_inc_release(v); #else return raw_atomic_fetch_inc_release(v); @@ -654,7 +654,7 @@ raw_atomic_long_fetch_inc_release(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_inc_relaxed(v); #else return raw_atomic_fetch_inc_relaxed(v); @@ -674,7 +674,7 @@ raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) static __always_inline void raw_atomic_long_dec(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_dec(v); #else raw_atomic_dec(v); @@ -694,7 +694,7 @@ raw_atomic_long_dec(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_return(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_return(v); #else return raw_atomic_dec_return(v); @@ -714,7 
+714,7 @@ raw_atomic_long_dec_return(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_return_acquire(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_return_acquire(v); #else return raw_atomic_dec_return_acquire(v); @@ -734,7 +734,7 @@ raw_atomic_long_dec_return_acquire(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_return_release(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_return_release(v); #else return raw_atomic_dec_return_release(v); @@ -754,7 +754,7 @@ raw_atomic_long_dec_return_release(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_return_relaxed(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_return_relaxed(v); #else return raw_atomic_dec_return_relaxed(v); @@ -774,7 +774,7 @@ raw_atomic_long_dec_return_relaxed(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_dec(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_dec(v); #else return raw_atomic_fetch_dec(v); @@ -794,7 +794,7 @@ raw_atomic_long_fetch_dec(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_dec_acquire(v); #else return raw_atomic_fetch_dec_acquire(v); @@ -814,7 +814,7 @@ raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_dec_release(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_dec_release(v); #else return raw_atomic_fetch_dec_release(v); @@ -834,7 +834,7 @@ raw_atomic_long_fetch_dec_release(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_dec_relaxed(v); #else return raw_atomic_fetch_dec_relaxed(v); @@ -855,7 +855,7 @@ 
raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) static __always_inline void raw_atomic_long_and(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_and(i, v); #else raw_atomic_and(i, v); @@ -876,7 +876,7 @@ raw_atomic_long_and(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_and(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_and(i, v); #else return raw_atomic_fetch_and(i, v); @@ -897,7 +897,7 @@ raw_atomic_long_fetch_and(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_and_acquire(i, v); #else return raw_atomic_fetch_and_acquire(i, v); @@ -918,7 +918,7 @@ raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_and_release(i, v); #else return raw_atomic_fetch_and_release(i, v); @@ -939,7 +939,7 @@ raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_and_relaxed(i, v); #else return raw_atomic_fetch_and_relaxed(i, v); @@ -960,7 +960,7 @@ raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_andnot(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_andnot(i, v); #else raw_atomic_andnot(i, v); @@ -981,7 +981,7 @@ raw_atomic_long_andnot(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_andnot(i, v); #else return raw_atomic_fetch_andnot(i, v); @@ -1002,7 +1002,7 @@ 
raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_andnot_acquire(i, v); #else return raw_atomic_fetch_andnot_acquire(i, v); @@ -1023,7 +1023,7 @@ raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_andnot_release(i, v); #else return raw_atomic_fetch_andnot_release(i, v); @@ -1044,7 +1044,7 @@ raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_andnot_relaxed(i, v); #else return raw_atomic_fetch_andnot_relaxed(i, v); @@ -1065,7 +1065,7 @@ raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_or(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_or(i, v); #else raw_atomic_or(i, v); @@ -1086,7 +1086,7 @@ raw_atomic_long_or(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_or(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_or(i, v); #else return raw_atomic_fetch_or(i, v); @@ -1107,7 +1107,7 @@ raw_atomic_long_fetch_or(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_or_acquire(i, v); #else return raw_atomic_fetch_or_acquire(i, v); @@ -1128,7 +1128,7 @@ raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return 
raw_atomic64_fetch_or_release(i, v); #else return raw_atomic_fetch_or_release(i, v); @@ -1149,7 +1149,7 @@ raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_or_relaxed(i, v); #else return raw_atomic_fetch_or_relaxed(i, v); @@ -1170,7 +1170,7 @@ raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_xor(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_xor(i, v); #else raw_atomic_xor(i, v); @@ -1191,7 +1191,7 @@ raw_atomic_long_xor(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_xor(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_xor(i, v); #else return raw_atomic_fetch_xor(i, v); @@ -1212,7 +1212,7 @@ raw_atomic_long_fetch_xor(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_xor_acquire(i, v); #else return raw_atomic_fetch_xor_acquire(i, v); @@ -1233,7 +1233,7 @@ raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_xor_release(i, v); #else return raw_atomic_fetch_xor_release(i, v); @@ -1254,7 +1254,7 @@ raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_xor_relaxed(i, v); #else return raw_atomic_fetch_xor_relaxed(i, v); @@ -1275,7 +1275,7 @@ raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_xchg(atomic_long_t *v, 
long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_xchg(v, new); #else return raw_atomic_xchg(v, new); @@ -1296,7 +1296,7 @@ raw_atomic_long_xchg(atomic_long_t *v, long new) static __always_inline long raw_atomic_long_xchg_acquire(atomic_long_t *v, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_xchg_acquire(v, new); #else return raw_atomic_xchg_acquire(v, new); @@ -1317,7 +1317,7 @@ raw_atomic_long_xchg_acquire(atomic_long_t *v, long new) static __always_inline long raw_atomic_long_xchg_release(atomic_long_t *v, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_xchg_release(v, new); #else return raw_atomic_xchg_release(v, new); @@ -1338,7 +1338,7 @@ raw_atomic_long_xchg_release(atomic_long_t *v, long new) static __always_inline long raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_xchg_relaxed(v, new); #else return raw_atomic_xchg_relaxed(v, new); @@ -1361,7 +1361,7 @@ raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new) static __always_inline long raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_cmpxchg(v, old, new); #else return raw_atomic_cmpxchg(v, old, new); @@ -1384,7 +1384,7 @@ raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) static __always_inline long raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_cmpxchg_acquire(v, old, new); #else return raw_atomic_cmpxchg_acquire(v, old, new); @@ -1407,7 +1407,7 @@ raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) static __always_inline long raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_cmpxchg_release(v, old, new); #else return raw_atomic_cmpxchg_release(v, old, 
new); @@ -1430,7 +1430,7 @@ raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) static __always_inline long raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_cmpxchg_relaxed(v, old, new); #else return raw_atomic_cmpxchg_relaxed(v, old, new); @@ -1454,7 +1454,7 @@ raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) static __always_inline bool raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_try_cmpxchg(v, (s64 *)old, new); #else return raw_atomic_try_cmpxchg(v, (int *)old, new); @@ -1478,7 +1478,7 @@ raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) static __always_inline bool raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new); #else return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new); @@ -1502,7 +1502,7 @@ raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) static __always_inline bool raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new); #else return raw_atomic_try_cmpxchg_release(v, (int *)old, new); @@ -1526,7 +1526,7 @@ raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) static __always_inline bool raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new); #else return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new); @@ -1547,7 +1547,7 @@ raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) static __always_inline bool raw_atomic_long_sub_and_test(long i, atomic_long_t *v) { -#ifdef 
CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_sub_and_test(i, v);
 #else
 	return raw_atomic_sub_and_test(i, v);
@@ -1567,7 +1567,7 @@ raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
 static __always_inline bool
 raw_atomic_long_dec_and_test(atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_dec_and_test(v);
 #else
 	return raw_atomic_dec_and_test(v);
@@ -1587,7 +1587,7 @@ raw_atomic_long_dec_and_test(atomic_long_t *v)
 static __always_inline bool
 raw_atomic_long_inc_and_test(atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_inc_and_test(v);
 #else
 	return raw_atomic_inc_and_test(v);
@@ -1608,7 +1608,7 @@ raw_atomic_long_inc_and_test(atomic_long_t *v)
 static __always_inline bool
 raw_atomic_long_add_negative(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_add_negative(i, v);
 #else
 	return raw_atomic_add_negative(i, v);
@@ -1629,7 +1629,7 @@ raw_atomic_long_add_negative(long i, atomic_long_t *v)
 static __always_inline bool
 raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_add_negative_acquire(i, v);
 #else
 	return raw_atomic_add_negative_acquire(i, v);
@@ -1650,7 +1650,7 @@ raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
 static __always_inline bool
 raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_add_negative_release(i, v);
 #else
 	return raw_atomic_add_negative_release(i, v);
@@ -1671,7 +1671,7 @@ raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
 static __always_inline bool
 raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_add_negative_relaxed(i, v);
 #else
 	return raw_atomic_add_negative_relaxed(i, v);
@@ -1694,7 +1694,7 @@ raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_fetch_add_unless(v, a, u);
 #else
 	return raw_atomic_fetch_add_unless(v, a, u);
@@ -1717,7 +1717,7 @@ raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
 static __always_inline bool
 raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_add_unless(v, a, u);
 #else
 	return raw_atomic_add_unless(v, a, u);
@@ -1738,7 +1738,7 @@ raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
 static __always_inline bool
 raw_atomic_long_inc_not_zero(atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_inc_not_zero(v);
 #else
 	return raw_atomic_inc_not_zero(v);
@@ -1759,7 +1759,7 @@ raw_atomic_long_inc_not_zero(atomic_long_t *v)
 static __always_inline bool
 raw_atomic_long_inc_unless_negative(atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_inc_unless_negative(v);
 #else
 	return raw_atomic_inc_unless_negative(v);
@@ -1780,7 +1780,7 @@ raw_atomic_long_inc_unless_negative(atomic_long_t *v)
 static __always_inline bool
 raw_atomic_long_dec_unless_positive(atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_dec_unless_positive(v);
 #else
 	return raw_atomic_dec_unless_positive(v);
@@ -1801,7 +1801,7 @@ raw_atomic_long_dec_unless_positive(atomic_long_t *v)
 static __always_inline long
 raw_atomic_long_dec_if_positive(atomic_long_t *v)
 {
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return raw_atomic64_dec_if_positive(v);
 #else
 	return raw_atomic_dec_if_positive(v);
@@ -1809,4 +1809,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
 }

 #endif /* _LINUX_ATOMIC_LONG_H */
-// eadf183c3600b8b92b91839dd3be6bcc560c752d
+// 1b27315f1248fc8d43401372db7dd5895889c5be

diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index 9826be3ba986..7667305381fc 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -55,7 +55,7 @@ cat <
 #include
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 typedef atomic64_t atomic_long_t;
 #define ATOMIC_LONG_INIT(i)		ATOMIC64_INIT(i)
 #define atomic_long_cond_read_acquire	atomic64_cond_read_acquire

From patchwork Tue Mar 25 12:16:11 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 14028822
From: "Guo Ren (Alibaba DAMO Academy)" <guoren@kernel.org>
Subject: [RFC PATCH V3 30/43] rv64ilp32_abi: kernel/smp: Disable CSD_LOCK_WAIT_DEBUG
Date: Tue, 25 Mar 2025 08:16:11 -0400
Message-Id: <20250325121624.523258-31-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

The rv64ilp32 ABI is based on CONFIG_64BIT but uses ILP32 for a smaller
cache and memory footprint. So, disable CSD_LOCK_WAIT_DEBUG to keep the
csd structure small.
Signed-off-by: Guo Ren (Alibaba DAMO Academy) <guoren@kernel.org>
---
 include/linux/smp_types.h | 2 +-
 lib/Kconfig.debug         | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/smp_types.h b/include/linux/smp_types.h
index 2e8461af8df6..5912b694059f 100644
--- a/include/linux/smp_types.h
+++ b/include/linux/smp_types.h
@@ -61,7 +61,7 @@ struct __call_single_node {
 			unsigned int	u_flags;
 			atomic_t	a_flags;
 		};
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 		u16 src, dst;
 #endif
 	};

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 1af972a92d06..f55f0ded826c 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1613,6 +1613,7 @@ config CSD_LOCK_WAIT_DEBUG
 	depends on DEBUG_KERNEL
 	depends on SMP
 	depends on 64BIT
+	depends on !ABI_RV64ILP32
 	default n
 	help
 	  This option enables debug prints when CPUs are slow to respond

From patchwork Tue Mar 25 12:16:12 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 14028823
From: "Guo Ren (Alibaba DAMO Academy)" <guoren@kernel.org>
Subject: [RFC PATCH V3 31/43] rv64ilp32_abi: maple_tree: Use BITS_PER_LONG instead of CONFIG_64BIT
Date: Tue, 25 Mar 2025 08:16:12 -0400
Message-Id: <20250325121624.523258-32-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>

The maple tree algorithm stores an unsigned long in each element, and the
number of slots per node is a function of BITS_PER_LONG for the RV64ILP32
ABI, so test BITS_PER_LONG instead of CONFIG_64BIT.
Signed-off-by: Guo Ren (Alibaba DAMO Academy) <guoren@kernel.org>
---
 include/linux/maple_tree.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index cbbcd18d4186..ff6265b6468b 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -24,7 +24,7 @@
  *
  * Nodes in the tree point to their parent unless bit 0 is set.
  */
-#if defined(CONFIG_64BIT) || defined(BUILD_VDSO32_64)
+#if (BITS_PER_LONG == 64) || defined(BUILD_VDSO32_64)
 /* 64bit sizes */
 #define MAPLE_NODE_SLOTS	31	/* 256 bytes including ->parent */
 #define MAPLE_RANGE64_SLOTS	16	/* 256 bytes */

From patchwork Tue Mar 25 12:16:13 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 14028824
From: "Guo Ren (Alibaba DAMO Academy)" <guoren@kernel.org>
Subject: [RFC PATCH V3 32/43] rv64ilp32_abi: mm: Remove _folio_nr_pages
Date: Tue, 25 Mar 2025 08:16:13 -0400
Message-Id: <20250325121624.523258-33-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>

BITS_PER_LONG defines the layout of struct page, so under the RV64ILP32
ABI _folio_nr_pages conflicts with page->_mapcount.

Here is the problem:

BUG: Bad page state in process kworker/u4:0  pfn:61eed
page: refcount:0 mapcount:5 mapping:00000000 index:0x0 pfn:0x61eed
flags: 0x0(zone=0)
raw: 00000000 00000000 ffffffff 00000000 00000000 00000000 00000004 00000000
raw: 00000000 00000000
page dumped because: nonzero mapcount
Modules linked in:
CPU: 0 UID: 0 PID: 11 Comm: kworker/u4:0 Not tainted 6.13.0-rc4
Hardware name: riscv-virtio,qemu (DT)
Workqueue: async async_run_entry_fn
Call Trace:
[] dump_backtrace+0x1e/0x26
[] show_stack+0x2a/0x38
[] dump_stack_lvl+0x4a/0x68
[] dump_stack+0x16/0x1e
[] bad_page+0x120/0x142
[] free_unref_page+0x510/0x5f8
[] __folio_put+0x6a/0xbc
[] free_large_kmalloc+0x6a/0xb8
[] kfree+0x23c/0x300
[] unpack_to_rootfs+0x27c/0x2c0
[] do_populate_rootfs+0x24/0x12e
[] async_run_entry_fn+0x26/0xcc
[] process_one_work+0x136/0x224
[] worker_thread+0x234/0x30a
[] kthread+0xca/0xe6
[] ret_from_fork+0xe/0x18
Disabling lock debugging due to kernel taint

So, remove _folio_nr_pages, as CONFIG_32BIT already does, and use
"_flags_1 & 0xff" instead.
Signed-off-by: Guo Ren (Alibaba DAMO Academy) <guoren@kernel.org>
---
 include/linux/mm.h       | 4 ++--
 include/linux/mm_types.h | 2 +-
 mm/internal.h            | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7b1068ddcbb7..454fb8ca724c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2058,7 +2058,7 @@ static inline long folio_nr_pages(const struct folio *folio)
 {
 	if (!folio_test_large(folio))
 		return 1;
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return folio->_folio_nr_pages;
 #else
 	return 1L << (folio->_flags_1 & 0xff);
@@ -2083,7 +2083,7 @@ static inline unsigned long compound_nr(struct page *page)
 	if (!test_bit(PG_head, &folio->flags))
 		return 1;
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	return folio->_folio_nr_pages;
 #else
 	return 1L << (folio->_flags_1 & 0xff);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6b27db7f9496..da3ba1a79ad5 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -370,7 +370,7 @@ struct folio {
 			atomic_t _entire_mapcount;
 			atomic_t _nr_pages_mapped;
 			atomic_t _pincount;
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 			unsigned int _folio_nr_pages;
 #endif
 	/* private: the union with struct page is transitional */
diff --git a/mm/internal.h b/mm/internal.h
index 109ef30fee11..c9372a8552ba 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -682,7 +682,7 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
 		return;

 	folio->_flags_1 = (folio->_flags_1 & ~0xffUL) | order;
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	folio->_folio_nr_pages = 1U << order;
 #endif
 }

From patchwork Tue Mar 25 12:16:14 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 14028825
From: "Guo Ren (Alibaba DAMO Academy)" <guoren@kernel.org>
Subject: [RFC PATCH V3 33/43] rv64ilp32_abi: mm/auxvec: Adapt mm->saved_auxv[] to Elf64
Date: Tue, 25 Mar 2025 08:16:14 -0400
Message-Id: <20250325121624.523258-34-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>

Unable to handle kernel paging request at virtual address 60723de0
Oops [#1]
Modules linked in:
CPU: 0 UID: 0 PID: 1 Comm: init Not tainted 6.13.0-rc4-00031-g01dc3ca797b3-dirty #161
Hardware name: riscv-virtio,qemu (DT)
epc : percpu_counter_add_batch+0x38/0xc4
 ra : filemap_map_pages+0x3ec/0x54c
epc : ffffffffbc4ea02e ra : ffffffffbc1722e4 sp : ffffffffc1c4fc60
 gp : ffffffffbd6d3918 tp : ffffffffc1c50000 t0 : 0000000000000000
 t1 : 000000003fffefff t2 : 0000000000000000 s0 : ffffffffc1c4fca0
 s1 : 0000000000000022 a0 : ffffffffc25c8250 a1 : 0000000000000003
 a2 : 0000000000000020 a3 : 000000003fffefff a4 : 000000000b1c2000
 a5 : 0000000060723de0 a6 : ffffffffbffff000 a7 : 000000003fffffff
 s2 : ffffffffc25c8250 s3 : ffffffffc246e240 s4 : ffffffffc2138240
 s5 : ffffffffbd70c4d0 s6 : 0000000000000003 s7 : 0000000000000000
 s8 : ffffffff9a02d780 s9 : 0000000000000100 s10: ffffffffc1c4fda8
 s11: 0000000000000003 t3 : 0000000000000000 t4 : 00000000000004f7
 t5 : 0000000000000000 t6 : 0000000000000001
status: 0000000200000100 badaddr: 0000000060723de0 cause: 000000000000000d
[] percpu_counter_add_batch+0x38/0xc4
[] filemap_map_pages+0x3ec/0x54c
[] handle_mm_fault+0xb6c/0xe9c
[] handle_page_fault+0xd0/0x418
[] do_page_fault+0x20/0x3a
[] _new_vmalloc_restore_context_a0+0xb0/0xbc
Code: 8a93 4baa 511c 171b 0027 873b 00ea 4318 2481 9fb9 (aa03) 0007

Signed-off-by: Guo Ren (Alibaba DAMO Academy) <guoren@kernel.org>
---
 include/linux/mm_types.h | 4 ++++
 kernel/sys.c             | 8 ++++++++
 2 files changed, 12 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index da3ba1a79ad5..0d436b0217fd 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -962,7 +962,11 @@ struct mm_struct {
 		unsigned long start_brk, brk, start_stack;
 		unsigned long arg_start, arg_end, env_start, env_end;
+#ifdef CONFIG_64BIT
+		unsigned long long saved_auxv[AT_VECTOR_SIZE]; /* for /proc/PID/auxv */
+#else
 		unsigned long saved_auxv[AT_VECTOR_SIZE]; /* for /proc/PID/auxv */
+#endif

 		struct percpu_counter rss_stat[NR_MM_COUNTERS];
diff --git a/kernel/sys.c b/kernel/sys.c
index cb366ff8703a..81c0d94ff50d 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2008,7 +2008,11 @@ static int validate_prctl_map_addr(struct prctl_mm_map *prctl_map)
 static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data_size)
 {
 	struct prctl_mm_map prctl_map = { .exe_fd = (u32)-1, };
+#ifdef CONFIG_64BIT
+	unsigned long long user_auxv[AT_VECTOR_SIZE];
+#else
 	unsigned long user_auxv[AT_VECTOR_SIZE];
+#endif
 	struct mm_struct *mm = current->mm;
 	int error;
@@ -2122,7 +2126,11 @@ static int prctl_set_auxv(struct mm_struct *mm, unsigned long addr,
 	 * up to the caller to provide sane values here, otherwise userspace
 	 * tools which use this vector might be unhappy.
 	 */
+#ifdef CONFIG_64BIT
+	unsigned long long user_auxv[AT_VECTOR_SIZE] = {};
+#else
 	unsigned long user_auxv[AT_VECTOR_SIZE] = {};
+#endif

 	if (len > sizeof(user_auxv))
 		return -EINVAL;

From patchwork Tue Mar 25 12:16:15 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 14028826
From: "Guo Ren (Alibaba DAMO Academy)" <guoren@kernel.org>
Subject: [RFC PATCH V3 34/43] rv64ilp32_abi: mm: Adapt vm_flags_t struct
Date: Tue, 25 Mar 2025 08:16:15 -0400
Message-Id: <20250325121624.523258-35-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>

The RV64ILP32 ABI Linux kernel is based on CONFIG_64BIT, so use unsigned
long long as the vm_flags_t type.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 fs/proc/task_mmu.c       |  9 +++++++--
 include/linux/mm.h       | 10 +++++++---
 include/linux/mm_types.h |  4 ++++
 mm/debug.c               |  4 ++++
 mm/memory.c              |  4 ++++
 5 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index f02cd362309a..6c4eaba794da 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -905,6 +905,11 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	return 0;
 }
 
+#ifdef CONFIG_64BIT
+#define MNEMONICS_SZ 64
+#else
+#define MNEMONICS_SZ 32
+#endif
 static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
 {
 	/*
@@ -917,11 +922,11 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
 	 * -Werror=unterminated-string-initialization warning
 	 *  with GCC 15
 	 */
-	static const char mnemonics[BITS_PER_LONG][3] = {
+	static const char mnemonics[MNEMONICS_SZ][3] = {
 		/*
 		 * In case if we meet a flag we don't know about.
 		 */
-		[0 ... (BITS_PER_LONG-1)] = "??",
+		[0 ... (MNEMONICS_SZ-1)] = "??",
 
 		[ilog2(VM_READ)]	= "rd",
 		[ilog2(VM_WRITE)]	= "wr",
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 454fb8ca724c..d9735cd7efe9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -412,7 +412,11 @@ extern unsigned int kobjsize(const void *objp);
 #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
 # define VM_UFFD_MINOR_BIT	38
+#ifdef CONFIG_64BIT
+# define VM_UFFD_MINOR		BIT_ULL(VM_UFFD_MINOR_BIT)	/* UFFD minor faults */
+#else
 # define VM_UFFD_MINOR		BIT(VM_UFFD_MINOR_BIT)	/* UFFD minor faults */
+#endif
 #else /* !CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
 # define VM_UFFD_MINOR		VM_NONE
 #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
@@ -426,14 +430,14 @@ extern unsigned int kobjsize(const void *objp);
  */
 #ifdef CONFIG_64BIT
 #define VM_ALLOW_ANY_UNCACHED_BIT	39
-#define VM_ALLOW_ANY_UNCACHED		BIT(VM_ALLOW_ANY_UNCACHED_BIT)
+#define VM_ALLOW_ANY_UNCACHED		BIT_ULL(VM_ALLOW_ANY_UNCACHED_BIT)
 #else
 #define VM_ALLOW_ANY_UNCACHED		VM_NONE
 #endif
 
 #ifdef CONFIG_64BIT
 #define VM_DROPPABLE_BIT	40
-#define VM_DROPPABLE		BIT(VM_DROPPABLE_BIT)
+#define VM_DROPPABLE		BIT_ULL(VM_DROPPABLE_BIT)
 #elif defined(CONFIG_PPC32)
 #define VM_DROPPABLE		VM_ARCH_1
 #else
@@ -442,7 +446,7 @@ extern unsigned int kobjsize(const void *objp);
 #ifdef CONFIG_64BIT
 /* VM is sealed, in vm_flags */
-#define VM_SEALED	_BITUL(63)
+#define VM_SEALED	_BITULL(63)
 #endif
 
 /* Bits set in the VMA until the stack is in its final location */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0d436b0217fd..900665c5eca8 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -571,7 +571,11 @@ static inline void *folio_get_private(struct folio *folio)
 	return folio->private;
 }
 
+#ifdef CONFIG_64BIT
+typedef unsigned long long vm_flags_t;
+#else
 typedef unsigned long vm_flags_t;
+#endif
 
 /*
  * A region containing a mapping of a non-memory backed file under NOMMU
diff --git a/mm/debug.c b/mm/debug.c
index 8d2acf432385..0fcb85e6efea 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -181,7 +181,11 @@ void dump_vma(const struct vm_area_struct *vma)
 	pr_emerg("vma %px start %px end %px mm %px\n"
 		"prot %lx anon_vma %px vm_ops %px\n"
 		"pgoff %lx file %px private_data %px\n"
+#ifdef CONFIG_64BIT
+		"flags: %#llx(%pGv)\n",
+#else
 		"flags: %#lx(%pGv)\n",
+#endif
 		vma, (void *)vma->vm_start, (void *)vma->vm_end, vma->vm_mm,
 		(unsigned long)pgprot_val(vma->vm_page_prot),
 		vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
diff --git a/mm/memory.c b/mm/memory.c
index 539c0f7c6d54..3c4a9663c094 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -533,7 +533,11 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
 		 (long long)pte_val(pte), (long long)pmd_val(*pmd));
 	if (page)
 		dump_page(page, "bad pte");
+#ifdef CONFIG_64BIT
+	pr_alert("addr:%px vm_flags:%08llx anon_vma:%px mapping:%px index:%lx\n",
+#else
 	pr_alert("addr:%px vm_flags:%08lx anon_vma:%px mapping:%px index:%lx\n",
+#endif
 		 (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
 	pr_alert("file:%pD fault:%ps mmap:%ps read_folio:%ps\n",
 		 vma->vm_file,

From patchwork Tue Mar 25 12:16:16 2025
Subject: [RFC PATCH V3 35/43] rv64ilp32_abi: net: Use BITS_PER_LONG in struct dst_entry
Date: Tue, 25 Mar 2025 08:16:16 -0400
Message-Id: <20250325121624.523258-36-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The rv64ilp32 ABI depends on CONFIG_64BIT but uses the smaller ILP32 data types. To lay out struct dst_entry by word size rather than by kernel bitness, change the CONFIG_64BIT tests to BITS_PER_LONG.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/net/dst.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/net/dst.h b/include/net/dst.h
index 78c78cdce0e9..af1c74c4836e 100644
--- a/include/net/dst.h
+++ b/include/net/dst.h
@@ -65,7 +65,7 @@ struct dst_entry {
 	 * __rcuref wants to be on a different cache line from
 	 * input/output/ops or performance tanks badly
 	 */
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	rcuref_t		__rcuref;	/* 64-bit offset 64 */
 #endif
 	int			__use;
@@ -74,7 +74,7 @@ struct dst_entry {
 	short			error;
 	short			__pad;
 	__u32			tclassid;
-#ifndef CONFIG_64BIT
+#if BITS_PER_LONG == 32
 	struct lwtunnel_state	*lwtstate;
 	rcuref_t		__rcuref;	/* 32-bit offset 64 */
 #endif
@@ -89,7 +89,7 @@ struct dst_entry {
 	 */
 	struct list_head	rt_uncached;
 	struct uncached_list	*rt_uncached_list;
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	struct lwtunnel_state	*lwtstate;
 #endif
 };

From patchwork Tue Mar 25 12:16:17 2025
Subject: [RFC PATCH V3 36/43] rv64ilp32_abi: printf: Use BITS_PER_LONG instead of CONFIG_64BIT
Date: Tue, 25 Mar 2025 08:16:17 -0400
Message-Id: <20250325121624.523258-37-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

RV64ILP32 ABI systems have BITS_PER_LONG set to 32. Use BITS_PER_LONG instead of CONFIG_64BIT.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 lib/vsprintf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 56fe96319292..2d719be86945 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -771,7 +771,7 @@ static inline int __ptr_to_hashval(const void *ptr, unsigned long *hashval_out)
 	/* Pairs with smp_wmb() after writing ptr_key. */
 	smp_rmb();
 
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);
 	/*
 	 * Mask off the first 32 bits, this makes explicit that we have

From patchwork Tue Mar 25 12:16:18 2025
Subject: [RFC PATCH V3 37/43] rv64ilp32_abi: random: Adapt fast_pool struct
Date: Tue, 25 Mar 2025 08:16:18 -0400
Message-Id: <20250325121624.523258-38-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

RV64ILP32 ABI systems have BITS_PER_LONG set to 32, matching sizeof(compat_ulong_t). Keep the entropy pool and fast_mix() on 64-bit words by using u64 explicitly under CONFIG_64BIT.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 drivers/char/random.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2581186fa61b..0bfbe02ee255 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1015,7 +1015,11 @@ EXPORT_SYMBOL_GPL(unregister_random_vmfork_notifier);
 #endif
 
 struct fast_pool {
+#ifdef CONFIG_64BIT
+	u64 pool[4];
+#else
 	unsigned long pool[4];
+#endif
 	unsigned long last;
 	unsigned int count;
 	struct timer_list mix;
@@ -1040,7 +1044,11 @@ static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
 * and therefore this has no security on its own. s represents the
 * four-word SipHash state, while v represents a two-word input.
 */
+#ifdef CONFIG_64BIT
+static void fast_mix(u64 s[4], u64 v1, u64 v2)
+#else
 static void fast_mix(unsigned long s[4], unsigned long v1, unsigned long v2)
+#endif
 {
 	s[3] ^= v1;
 	FASTMIX_PERM(s[0], s[1], s[2], s[3]);

From patchwork Tue Mar 25 12:16:19 2025
Subject: [RFC PATCH V3 38/43] rv64ilp32_abi: syscall: Use CONFIG_64BIT instead of BITS_PER_LONG
Date: Tue, 25 Mar 2025 08:16:19 -0400
Message-Id: <20250325121624.523258-39-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI adopts the 64-bit syscall conventions, so select them with CONFIG_64BIT directly instead of the BITS_PER_LONG and __LP64__ tests.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/syscall_table.h | 2 +-
 arch/riscv/include/asm/unistd.h        | 4 ++--
 scripts/checksyscalls.sh               | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/include/asm/syscall_table.h b/arch/riscv/include/asm/syscall_table.h
index 0c2d61782813..aab2bc0ddf4e 100644
--- a/arch/riscv/include/asm/syscall_table.h
+++ b/arch/riscv/include/asm/syscall_table.h
@@ -1,6 +1,6 @@
 #include
 
-#if __BITS_PER_LONG == 64
+#ifdef CONFIG_64BIT
 #include
 #else
 #include
diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
index e6d904fa67c5..86b9c1712f24 100644
--- a/arch/riscv/include/asm/unistd.h
+++ b/arch/riscv/include/asm/unistd.h
@@ -16,10 +16,10 @@
 #define __ARCH_WANT_COMPAT_FADVISE64_64
 #endif
 
-#if defined(__LP64__) && !defined(__SYSCALL_COMPAT)
+#if defined(CONFIG_64BIT) && !defined(__SYSCALL_COMPAT)
 #define __ARCH_WANT_NEW_STAT
 #define __ARCH_WANT_SET_GET_RLIMIT
-#endif /* __LP64__ */
+#endif /* CONFIG_64BIT */
 
 #define __ARCH_WANT_MEMFD_SECRET
 
diff --git
 a/scripts/checksyscalls.sh b/scripts/checksyscalls.sh
index 1e5d2eeb726d..9cc4f9086dfe 100755
--- a/scripts/checksyscalls.sh
+++ b/scripts/checksyscalls.sh
@@ -76,7 +76,7 @@ cat << EOF
 #endif
 
 /* System calls for 32-bit kernels only */
-#if BITS_PER_LONG == 64
+#ifdef CONFIG_64BIT
 #define __IGNORE_sendfile64
 #define __IGNORE_ftruncate64
 #define __IGNORE_truncate64

From patchwork Tue Mar 25 12:16:20 2025
Subject: [RFC PATCH V3 39/43] rv64ilp32_abi: sysinfo: Adapt sysinfo structure to lp64 uapi
Date: Tue, 25 Mar 2025 08:16:20 -0400
Message-Id: <20250325121624.523258-40-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>
From: "Guo Ren (Alibaba DAMO Academy)"

The RISC-V 64ilp32 ABI reuses the LP64 uapi and runs LP64 userspace directly, so the sysinfo struct's unsigned long members and arrays must become u64.
Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- fs/proc/loadavg.c | 10 +++++++--- include/linux/sched/loadavg.h | 4 ++++ include/uapi/linux/sysinfo.h | 20 ++++++++++++++++++++ kernel/sched/loadavg.c | 4 ++++ 4 files changed, 35 insertions(+), 3 deletions(-) diff --git a/fs/proc/loadavg.c b/fs/proc/loadavg.c index 817981e57223..643e06de3446 100644 --- a/fs/proc/loadavg.c +++ b/fs/proc/loadavg.c @@ -13,14 +13,18 @@ static int loadavg_proc_show(struct seq_file *m, void *v) { +#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32) + unsigned long long avnrun[3]; +#else unsigned long avnrun[3]; +#endif get_avenrun(avnrun, FIXED_1/200, 0); seq_printf(m, "%lu.%02lu %lu.%02lu %lu.%02lu %u/%d %d\n", - LOAD_INT(avnrun[0]), LOAD_FRAC(avnrun[0]), - LOAD_INT(avnrun[1]), LOAD_FRAC(avnrun[1]), - LOAD_INT(avnrun[2]), LOAD_FRAC(avnrun[2]), + LOAD_INT((ulong)avnrun[0]), LOAD_FRAC((ulong)avnrun[0]), + LOAD_INT((ulong)avnrun[1]), LOAD_FRAC((ulong)avnrun[1]), + LOAD_INT((ulong)avnrun[2]), LOAD_FRAC((ulong)avnrun[2]), nr_running(), nr_threads, idr_get_cursor(&task_active_pid_ns(current)->idr) - 1); return 0; diff --git a/include/linux/sched/loadavg.h b/include/linux/sched/loadavg.h index 83ec54b65e79..8f2d6a827ee9 100644 --- a/include/linux/sched/loadavg.h +++ b/include/linux/sched/loadavg.h @@ -13,7 +13,11 @@ * 11 bit fractions. 
*/ extern unsigned long avenrun[]; /* Load averages */ +#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32) +extern void get_avenrun(unsigned long long *loads, unsigned long offset, int shift); +#else extern void get_avenrun(unsigned long *loads, unsigned long offset, int shift); +#endif #define FSHIFT 11 /* nr of bits of precision */ #define FIXED_1 (1<<FSHIFT) diff --git a/include/uapi/linux/sysinfo.h b/include/uapi/linux/sysinfo.h --- a/include/uapi/linux/sysinfo.h +++ b/include/uapi/linux/sysinfo.h #define SI_LOAD_SHIFT 16 + +#if (__riscv_xlen == 64) && (__BITS_PER_LONG == 32) +struct sysinfo { + __s64 uptime; /* Seconds since boot */ + __u64 loads[3]; /* 1, 5, and 15 minute load averages */ + __u64 totalram; /* Total usable main memory size */ + __u64 freeram; /* Available memory size */ + __u64 sharedram; /* Amount of shared memory */ + __u64 bufferram; /* Memory used by buffers */ + __u64 totalswap; /* Total swap space size */ + __u64 freeswap; /* swap space still available */ + __u16 procs; /* Number of current processes */ + __u16 pad; /* Explicit padding for m68k */ + __u64 totalhigh; /* Total high memory size */ + __u64 freehigh; /* Available high memory size */ + __u32 mem_unit; /* Memory unit size in bytes */ + char _f[20-2*sizeof(__u64)-sizeof(__u32)]; /* Padding: libc5 uses this.. */ +}; +#else struct sysinfo { __kernel_long_t uptime; /* Seconds since boot */ __kernel_ulong_t loads[3]; /* 1, 5, and 15 minute load averages */ @@ -21,5 +40,6 @@ struct sysinfo { __u32 mem_unit; /* Memory unit size in bytes */ char _f[20-2*sizeof(__kernel_ulong_t)-sizeof(__u32)]; /* Padding: libc5 uses this.. */ }; +#endif #endif /* _LINUX_SYSINFO_H */ diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c index c48900b856a2..f1f5abc64dea 100644 --- a/kernel/sched/loadavg.c +++ b/kernel/sched/loadavg.c @@ -68,7 +68,11 @@ EXPORT_SYMBOL(avenrun); /* should be removed */ * * These values are estimates at best, so no need for locking.
*/ +#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32) +void get_avenrun(unsigned long long *loads, unsigned long offset, int shift) +#else void get_avenrun(unsigned long *loads, unsigned long offset, int shift) +#endif { loads[0] = (avenrun[0] + offset) << shift; loads[1] = (avenrun[1] + offset) << shift; loads[2] = (avenrun[2] + offset) << shift; } From patchwork Tue Mar 25 12:16:21 2025 X-Patchwork-Submitter: Guo Ren X-Patchwork-Id: 14028832 X-Patchwork-Delegate: herbert@gondor.apana.org.au Subject: [RFC PATCH V3 40/43] rv64ilp32_abi: tracepoint-defs: Use u64 for trace_print_flags.mask Date: Tue, 25 Mar 2025 08:16:21 -0400 Message-Id: <20250325121624.523258-41-guoren@kernel.org> In-Reply-To: <20250325121624.523258-1-guoren@kernel.org> References: <20250325121624.523258-1-guoren@kernel.org> From: "Guo Ren (Alibaba DAMO Academy)" The rv64ilp32 ABI relies on CONFIG_64BIT, and mmflags.h defines the VMA flags with BIT_ULL. Consequently, use "unsigned long long" for trace_print_flags.mask to match the size of the VMA flags type.
Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- include/linux/tracepoint-defs.h | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/include/linux/tracepoint-defs.h b/include/linux/tracepoint-defs.h index aebf0571c736..3b51ede18e32 100644 --- a/include/linux/tracepoint-defs.h +++ b/include/linux/tracepoint-defs.h @@ -14,7 +14,11 @@ struct static_call_key; struct trace_print_flags { +#ifdef CONFIG_64BIT + unsigned long long mask; +#else unsigned long mask; +#endif const char *name; }; From patchwork Tue Mar 25 12:16:22 2025 X-Patchwork-Submitter: Guo Ren X-Patchwork-Id: 14028833 X-Patchwork-Delegate: herbert@gondor.apana.org.au Subject: [RFC PATCH V3 41/43] rv64ilp32_abi: tty: Adapt ptr_to_compat Date: Tue, 25 Mar 2025 08:16:22 -0400 Message-Id: <20250325121624.523258-42-guoren@kernel.org> In-Reply-To: <20250325121624.523258-1-guoren@kernel.org> References: <20250325121624.523258-1-guoren@kernel.org> From: "Guo Ren (Alibaba DAMO Academy)" The RV64ILP32 ABI is based on a 64-bit ISA, but BITS_PER_LONG is 32, so unsigned long is the same size as compat_ulong_t and the "(unsigned long)v.iomem_base >> 32 ? 0xfffffff : ..." overflow check is unnecessary. Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- drivers/tty/tty_io.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c index 449dbd216460..75e256e879d0 100644 --- a/drivers/tty/tty_io.c +++ b/drivers/tty/tty_io.c @@ -2873,8 +2873,12 @@ static int compat_tty_tiocgserial(struct tty_struct *tty, err = tty->ops->get_serial(tty, &v); if (!err) { memcpy(&v32, &v, offsetof(struct serial_struct32, iomem_base)); +#if BITS_PER_LONG == 64 v32.iomem_base = (unsigned long)v.iomem_base >> 32 ?
0xfffffff : ptr_to_compat(v.iomem_base); +#else + v32.iomem_base = ptr_to_compat(v.iomem_base); +#endif v32.iomem_reg_shift = v.iomem_reg_shift; v32.port_high = v.port_high; if (copy_to_user(ss, &v32, sizeof(v32))) From patchwork Tue Mar 25 12:16:23 2025 X-Patchwork-Submitter: Guo Ren X-Patchwork-Id: 14028834 X-Patchwork-Delegate: herbert@gondor.apana.org.au Subject: [RFC PATCH V3 42/43] rv64ilp32_abi: memfd: Use vm_flags_t Date: Tue, 25 Mar 2025 08:16:23 -0400 Message-Id: <20250325121624.523258-43-guoren@kernel.org> In-Reply-To: <20250325121624.523258-1-guoren@kernel.org> References: <20250325121624.523258-1-guoren@kernel.org> From: "Guo Ren (Alibaba DAMO Academy)" The RV64ILP32 ABI kernel is built with CONFIG_64BIT and uses unsigned long long as vm_flags_t; using unsigned long would break the rv64ilp32 ABI. Since vm_flags_t is already defined, prefer it even where it is not strictly required. Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- include/linux/memfd.h | 4 ++-- mm/memfd.c | 8 ++++---- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/include/linux/memfd.h b/include/linux/memfd.h index 246daadbfde8..6f606d9573c3 100644 --- a/include/linux/memfd.h +++ b/include/linux/memfd.h @@ -14,7 +14,7 @@ struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx); * We also update VMA flags if appropriate by manipulating the VMA flags pointed * to by vm_flags_ptr.
*/ -int memfd_check_seals_mmap(struct file *file, unsigned long *vm_flags_ptr); +int memfd_check_seals_mmap(struct file *file, vm_flags_t *vm_flags_ptr); #else static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned int a) { @@ -25,7 +25,7 @@ static inline struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx) return ERR_PTR(-EINVAL); } static inline int memfd_check_seals_mmap(struct file *file, - unsigned long *vm_flags_ptr) + vm_flags_t *vm_flags_ptr) { return 0; } diff --git a/mm/memfd.c b/mm/memfd.c index 37f7be57c2f5..50dad90ffedc 100644 --- a/mm/memfd.c +++ b/mm/memfd.c @@ -332,10 +332,10 @@ static inline bool is_write_sealed(unsigned int seals) return seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE); } -static int check_write_seal(unsigned long *vm_flags_ptr) +static int check_write_seal(vm_flags_t *vm_flags_ptr) { - unsigned long vm_flags = *vm_flags_ptr; - unsigned long mask = vm_flags & (VM_SHARED | VM_WRITE); + vm_flags_t vm_flags = *vm_flags_ptr; + vm_flags_t mask = vm_flags & (VM_SHARED | VM_WRITE); /* If a private mapping then writability is irrelevant.
*/ if (!(mask & VM_SHARED)) @@ -357,7 +357,7 @@ static int check_write_seal(unsigned long *vm_flags_ptr) return 0; } -int memfd_check_seals_mmap(struct file *file, unsigned long *vm_flags_ptr) +int memfd_check_seals_mmap(struct file *file, vm_flags_t *vm_flags_ptr) { int err = 0; unsigned int *seals_ptr = memfd_file_seals_ptr(file); From patchwork Tue Mar 25 12:16:24 2025 X-Patchwork-Submitter: Guo Ren X-Patchwork-Id: 14028835 X-Patchwork-Delegate: herbert@gondor.apana.org.au Subject: [RFC PATCH V3 43/43] riscv: Fixup address space overflow of print_mlk Date: Tue, 25 Mar 2025 08:16:24 -0400 Message-Id: <20250325121624.523258-44-guoren@kernel.org> In-Reply-To: <20250325121624.523258-1-guoren@kernel.org> References: <20250325121624.523258-1-guoren@kernel.org> From: "Guo Ren (Alibaba DAMO Academy)" If physical memory is 1GiB on an ilp32 kernel, print_mlm would show: lowmem : 0xc0000000 - 0x00000000 ( 1024 MB) After the fixup: lowmem : 0xc0000000 - 0xffffffff ( 1024 MB) Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- arch/riscv/mm/init.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index 3cdbb033860e..e09286d4916a 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -105,26 +105,26 @@ static void __init zone_sizes_init(void) static inline void print_mlk(char *name, unsigned long b, unsigned long t) { - pr_notice("%12s : 0x%08lx - 0x%08lx (%4ld kB)\n", name, b, t, + pr_notice("%12s : 0x%08lx - 0x%08lx (%4ld kB)\n", name, b, t - 1, (((t) - (b)) >> LOG2_SZ_1K)); } static inline void print_mlm(char *name, unsigned long b, unsigned long t) { - pr_notice("%12s : 0x%08lx - 0x%08lx (%4ld MB)\n", name, b, t, + pr_notice("%12s : 0x%08lx - 0x%08lx (%4ld MB)\n", name, b, t - 1, (((t) - (b)) >> LOG2_SZ_1M)); } static inline void print_mlg(char *name, unsigned long b, unsigned long t) { - pr_notice("%12s : 0x%08lx
- 0x%08lx (%4ld GB)\n", name, b, t, + pr_notice("%12s : 0x%08lx - 0x%08lx (%4ld GB)\n", name, b, t - 1, (((t) - (b)) >> LOG2_SZ_1G)); } #if BITS_PER_LONG == 64 static inline void print_mlt(char *name, unsigned long b, unsigned long t) { - pr_notice("%12s : 0x%08lx - 0x%08lx (%4ld TB)\n", name, b, t, + pr_notice("%12s : 0x%08lx - 0x%08lx (%4ld TB)\n", name, b, t - 1, (((t) - (b)) >> LOG2_SZ_1T)); } #else