From patchwork Fri Sep 13 21:00:29 2024
X-Patchwork-Submitter: Alice Ryhl
X-Patchwork-Id: 13804070
Date: Fri, 13 Sep 2024 21:00:29 +0000
Message-ID: <20240913210031.20802-1-aliceryhl@google.com>
Subject: [PATCH 1/2] rust: harden index manipulation using ownership
From: Alice Ryhl
To: Miguel Ojeda, Greg Kroah-Hartman, Kees Cook
Cc: Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
    Trevor Gross, Carlos Llamas, rust-for-linux@vger.kernel.org,
    linux-hardening@vger.kernel.org, linux-kernel@vger.kernel.org, Alice Ryhl

Rust is very effective at preventing memory safety bugs such as
use-after-free bugs. Can we also make it effective at preventing bugs that
arise from using indices incorrectly?

As it stands today, accessing an array out of bounds is caught and results
in a kernel panic, but Rust will not stop you from doing it. Accessing an
array at the wrong index is not caught at all. This patch explores ways to
catch these kinds of indexing bugs.

The Android Binder driver has a component where it "translates" the
contents of transactions sent using the driver, which involves complex and
error-prone index manipulation. The C driver has had several bugs in this
area. A few examples:

  4df153652cc4 ("binder: fix UAF caused by offsets overwrite")
  bdc1c5fac982 ("binder: fix UAF caused by faulty buffer cleanup")
  16981742717b ("binder: fix incorrect calculation for num_valid")
  a56587065094 ("binder: Set end of SG buffer area properly.")

Rust is not immune to these issues either. In fact, I've already
encountered and fixed two bugs like this in Rust Binder. In both cases, the
bugs could result in `Allocation::cleanup_object` calls on invalid data.
Rust's safety guarantees prevented these bugs from affecting the integrity
of the kernel itself, but an attacker could use them to
manipulate handles that are present in other processes, for example to
obtain an IApplicationThread handle belonging to another app from
system_server, which in turn allows loading code into that app.

Ultimately, the root cause of all of these bugs is that some index in the
destination buffer gets written to twice.

The primary idea of this new Range API is to utilize Rust's ownership
semantics to prevent reuse of indices. The idea is that you create one big
Range for the entire range of indices, and then there are various methods
to split it into smaller ranges. An example from Rust Binder, where several
kinds of data are stored next to each other:

    // Split the buffer size into sub-ranges.
    let full_range = Range::new_area(len);
    let (data_range, after_data) = full_range.split_within(aligned_data_size)?;
    let (offsets_range, after_offsets) = after_data.split_within(aligned_offsets_size)?;
    let (buffers_range, after_buffers) = after_offsets.split_within(aligned_buffers_size)?;
    let (secctx_range, after_secctx) = after_buffers.split_within(aligned_secctx_size)?;
    after_secctx.assert_empty()?;

The Range type is set up so that it cannot be copied or cloned, which means
that any attempt to use a Range more than once will fail to compile. For
example, if the line creating `buffers_range` accidentally tried to split
`after_data` instead of `after_offsets`, that would lead to this
compilation error:

    error[E0382]: use of moved value: `after_data`
      --> /usr/local/google/home/aliceryhl/rust-android-mainline/common/drivers/android/binder/thread.rs:1101:50
       |
    1099 |     let (data_range, after_data) = full_range.split_within(aligned_data_size)?;
       |                      ---------- move occurs because `after_data` has type `kernel::range::Range`, which does not implement the `Copy` trait
    1100 |     let (offsets_range, after_offsets) = after_data.split_within(aligned_offsets_size)?;
       |                                          ---------------------------------- `after_data` moved due to this method call
    1101 |     let (buffers_range, after_buffers) = after_data.split_within(aligned_buffers_size)?;
       |                                          ^^^^^^^^^^ value used here after move
       |
    note: `kernel::range::Range::split_within` takes ownership of the receiver `self`, which moves `after_data`
      --> /usr/local/google/home/aliceryhl/rust-android-mainline/common/rust/kernel/range.rs:108:29
       |
    108 |     pub fn split_within(mut self, length: usize) -> Result<(Range, Range), RangeError> {
       |                             ^^^^

Basically, the error says that you tried to use the `after_data` range
twice, which is not allowed because `split_within` destroys the object you
call it on.

In Rust Binder, I am using the ranges to prevent reuse of indices in the
*destination* buffer. To enforce that, all methods for writing to the
destination buffer are modified to take a `Range` as an argument, consuming
the range being written to. As ranges can only be split into smaller
pieces, this enforces that you will never write to the same index twice.

Of course, the API is not completely bullet-proof. If you call
`Range::new_area` twice, you'll get two overlapping ranges. But it does not
need to be completely impossible to misuse; it just needs to be really
difficult to do so accidentally.

Binder has one case where it intentionally writes to the same location
twice: when sending file descriptors over Binder, the driver does not know
what value the fd will have when transferring the data, so it first writes
u32::MAX and only overwrites it with the real fd much later. There is a
`duplicate_range_careful` method for this case.
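To illustrate the intended usage pattern in isolation, here is a minimal
sketch. Only the `kernel::range` items are from this patch; the `Buffer`
type and the `write_u32`/`fill` helpers are hypothetical stand-ins for a
real destination buffer. Because `write_u32` takes the `Range` by value,
writing the same sub-range twice is rejected by the compiler instead of
being caught (or missed) at runtime:

    use core::mem::size_of;
    use kernel::range::{Range, RangeError, UsedRange};

    /// Hypothetical destination buffer, used only for this example.
    struct Buffer<'a> {
        data: &'a mut [u8],
    }

    impl Buffer<'_> {
        /// Writing consumes the `Range`, so each index can be written at most once.
        fn write_u32(&mut self, range: Range, value: u32) -> Result<(), RangeError> {
            range.assert_length_eq(size_of::<u32>())?;
            let used: UsedRange = range.use_range();
            self.data[used.offset..used.offset + used.length]
                .copy_from_slice(&value.to_ne_bytes());
            Ok(())
        }
    }

    fn fill(buf: &mut Buffer<'_>) -> Result<(), RangeError> {
        let full = Range::new_area(buf.data.len());
        let (first, rest) = full.split_within(size_of::<u32>())?;
        let (second, rest) = rest.split_within(size_of::<u32>())?;
        buf.write_u32(first, 1)?;
        buf.write_u32(second, 2)?;
        // buf.write_u32(second, 3)?; // error[E0382]: use of moved value: `second`
        let _ = rest; // any remaining space is still held by an unconsumed Range
        Ok(())
    }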
It is interesting to compare with the uaccess patchset [1]. The uaccess API
takes a userspace pointer and length representing a range of bytes in
userspace, and lets you read the range incrementally. The API design
prevents reading the same address twice. This helps prevent
time-of-check-to-time-of-use bugs where userspace modifies the data in
between two reads, which can cause bugs if the driver assumes that the
memory is unchanged.

In fact, both Rust Binder bugs mentioned earlier are caught by this part of
the uaccess API, as both bugs eventually attempt to read past the end of
the userspace region, triggering an error. Unfortunately, this happens too
late: by the time the error is triggered, the previously translated data
has already been overwritten.

This patchset is also similar to Benno's untrusted data patchset [2], which
explores a different way to write misuse-resistant APIs.

Link: https://lore.kernel.org/r/20240528-alice-mm-v7-0-78222c31b8f4@google.com [1]
Link: https://lore.kernel.org/r/20240913112643.542914-1-benno.lossin@proton.me [2]
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/lib.rs   |   1 +
 rust/kernel/range.rs | 236 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 237 insertions(+)
 create mode 100644 rust/kernel/range.rs

diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index a7fefc4ae725..72a998cd02e0 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -48,6 +48,7 @@
 pub mod page;
 pub mod prelude;
 pub mod print;
+pub mod range;
 pub mod rbtree;
 pub mod security;
 pub mod seq_file;
diff --git a/rust/kernel/range.rs b/rust/kernel/range.rs
new file mode 100644
index 000000000000..8fb9d96ac724
--- /dev/null
+++ b/rust/kernel/range.rs
@@ -0,0 +1,236 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Copyright (C) 2024 Google LLC.
+
+//! Utilities for working with ranges of indices.
+
+/// A range of indices.
+///
+/// This utility is useful for ensuring that no index in the range is used more than once.
+#[derive(Debug)]
+pub struct Range {
+    offset: usize,
+    length: usize,
+}
+
+impl Range {
+    /// Creates a new `Range` for an area of the given size.
+    pub fn new_area(size: usize) -> Self {
+        Self {
+            offset: 0,
+            length: size,
+        }
+    }
+
+    /// Use this range of indices.
+    ///
+    /// This destroys the `Range` object, so these indices cannot be used again after this call.
+    pub fn use_range(self) -> UsedRange {
+        UsedRange {
+            offset: self.offset,
+            length: self.length,
+        }
+    }
+
+    /// Duplicate this range.
+    ///
+    /// Be careful when using this method, as it allows you to use a range twice.
+    pub fn duplicate_range_careful(&self) -> Range {
+        Range {
+            offset: self.offset,
+            length: self.length,
+        }
+    }
+
+    /// Peek at the offset without using the range.
+    ///
+    /// This doesn't destroy the `Range` object, so be careful that the range is not used twice.
+    pub fn peek_offset(&self) -> usize {
+        self.offset
+    }
+
+    /// Peek at the length without using the range.
+    ///
+    /// This doesn't destroy the `Range` object, so be careful that the range is not used twice.
+    pub fn peek_length(&self) -> usize {
+        self.length
+    }
+
+    /// Peek at the end without using the range.
+    ///
+    /// This doesn't destroy the `Range` object, so be careful that the range is not used twice.
+    pub fn peek_end(&self) -> Result<usize, RangeError> {
+        self.offset.checked_add(self.length).ok_or(RangeError)
+    }
+
+    /// Truncates this range to the given length.
+    pub fn truncate(&mut self, length: usize) -> Result<(), RangeError> {
+        if length > self.length {
+            return Err(RangeError);
+        }
+        self.length = length;
+        Ok(())
+    }
+
+    /// Assert that this range is aligned properly.
+    pub fn assert_aligned(&self, alignment: usize) -> Result<(), RangeError> {
+        if self.offset % alignment == 0 {
+            Ok(())
+        } else {
+            Err(RangeError)
+        }
+    }
+
+    /// Assert that this range has the expected length.
+    pub fn assert_length_eq(&self, length: usize) -> Result<(), RangeError> {
+        if self.length == length {
+            Ok(())
+        } else {
+            Err(RangeError)
+        }
+    }
+
+    /// Assert that this range is empty.
+    pub fn assert_empty(self) -> Result<(), RangeError> {
+        self.assert_length_eq(0)
+    }
+
+    /// Splits the range into two sub-ranges.
+    ///
+    /// Fails if the `length` is greater than the range being split.
+    pub fn split_within(mut self, length: usize) -> Result<(Range, Range), RangeError> {
+        let left = self.take_from_start(length)?;
+        Ok((left, self))
+    }
+
+    /// Splits the range into two sub-ranges.
+    ///
+    /// Fails if the `position` is not within the current range.
+    pub fn split_at(mut self, position: usize) -> Result<(Range, Range), RangeError> {
+        let left = self.take_until(position)?;
+        Ok((left, self))
+    }
+
+    /// Modify this range by taking the prefix that ends at `position`.
+    pub fn take_until(&mut self, position: usize) -> Result<Range, RangeError> {
+        let from_start = Range {
+            offset: self.offset,
+            length: position.checked_sub(self.offset).ok_or(RangeError)?,
+        };
+
+        let new_self = Range {
+            offset: position,
+            length: self
+                .length
+                .checked_sub(from_start.length)
+                .ok_or(RangeError)?,
+        };
+
+        *self = new_self;
+
+        Ok(from_start)
+    }
+
+    /// Modify this range by taking the first `length` bytes.
+    pub fn take_from_start(&mut self, length: usize) -> Result<Range, RangeError> {
+        let from_start = Range {
+            offset: self.offset,
+            length: length,
+        };
+
+        let new_self = Range {
+            offset: self.offset.checked_add(length).ok_or(RangeError)?,
+            length: self.length.checked_sub(length).ok_or(RangeError)?,
+        };
+
+        *self = new_self;
+
+        Ok(from_start)
+    }
+
+    /// Split this range into sub-ranges of the given size.
+    pub fn iter_chunks(self, chunk_size: usize) -> Result<ChunkIter, RangeError> {
+        if self.length % chunk_size != 0 {
+            return Err(RangeError);
+        }
+
+        Ok(ChunkIter {
+            pos: self.offset,
+            end: self.offset.checked_add(self.length).ok_or(RangeError)?,
+            chunk_size,
+        })
+    }
+}
+
+/// An iterator over ranges of the same size.
+pub struct ChunkIter {
+    pos: usize,
+    end: usize,
+    chunk_size: usize,
+}
+
+impl Iterator for ChunkIter {
+    type Item = Range;
+    fn next(&mut self) -> Option<Range> {
+        if self.pos >= self.end {
+            return None;
+        }
+
+        let range = Range {
+            offset: self.pos,
+            length: self.chunk_size,
+        };
+        self.pos = self.pos + self.chunk_size;
+
+        Some(range)
+    }
+}
+
+/// A version of [`Range`] where the length is a compile-time constant.
+///
+/// This can be used to store a `Range` without using up space to store the length.
+pub struct RangeFixedSize<const LENGTH: usize> {
+    offset: usize,
+}
+
+impl<const LENGTH: usize> RangeFixedSize<LENGTH> {
+    /// Create a `RangeFixedSize` from a `Range`.
+    pub fn from_range(range: Range) -> Result<Self, RangeError> {
+        if range.length == LENGTH {
+            Ok(Self {
+                offset: range.offset,
+            })
+        } else {
+            Err(RangeError)
+        }
+    }
+
+    /// Convert this back into a `Range`.
+    pub fn into_range(self) -> Range {
+        Range {
+            offset: self.offset,
+            length: LENGTH,
+        }
+    }
+}
+
+/// The return value of [`Range::use_range`].
+///
+/// The only way to access the indices in a range is to mark it "used", which converts it into this
+/// type, destroying the original [`Range`] object.
+#[derive(Copy, Clone)]
+pub struct UsedRange {
+    /// The offset of this range.
+    pub offset: usize,
+    /// The length of this range.
+    pub length: usize,
+}
+
+/// The error type returned when ranges are used incorrectly.
+pub struct RangeError;
+
+impl From<RangeError> for crate::error::Error {
+    fn from(_range: RangeError) -> crate::error::Error {
+        crate::error::code::EINVAL
+    }
+}

From patchwork Fri Sep 13 21:00:30 2024
X-Patchwork-Submitter: Alice Ryhl
X-Patchwork-Id: 13804071
Date: Fri, 13 Sep 2024 21:00:30 +0000
In-Reply-To: <20240913210031.20802-1-aliceryhl@google.com>
References: <20240913210031.20802-1-aliceryhl@google.com>
Message-ID: <20240913210031.20802-2-aliceryhl@google.com>
Subject: [RFC PATCH 2/2] rust_binder: use `Range` API when translating transactions
From: Alice Ryhl
To: Miguel Ojeda, Greg Kroah-Hartman, Kees Cook
Cc: Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
    Trevor Gross, Carlos Llamas, rust-for-linux@vger.kernel.org,
    linux-hardening@vger.kernel.org, linux-kernel@vger.kernel.org, Alice Ryhl

This RFC illustrates how my `Range` API can be used in Rust Binder. Please
see the other patch in this series for more information.
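For a flavor of how the translation code below consumes these ranges, here
is a simplified, hypothetical sketch (not the driver code itself) of
walking an offsets area one `usize`-sized entry at a time; each chunk is
consumed exactly once:

    use core::mem::size_of;
    use kernel::range::{Range, RangeError};

    fn walk_offsets(offsets_range: Range) -> Result<(), RangeError> {
        // `iter_chunks` fails unless the area is a whole number of usize-sized entries.
        for index_range in offsets_range.iter_chunks(size_of::<usize>())? {
            // Marking the chunk as used yields its raw offset/length and destroys the
            // `Range`, so the same entry cannot be handed out again.
            let entry = index_range.use_range();
            let _ = (entry.offset, entry.length);
        }
        Ok(())
    }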
View this change on the web: https://r.android.com/3267276 Change-Id: I77b7ee3e734dc54975331620c49afe7713efc8a1 Signed-off-by: Alice Ryhl --- drivers/android/binder/allocation.rs | 73 +++++---- drivers/android/binder/error.rs | 8 + drivers/android/binder/thread.rs | 237 ++++++++++++++------------- 3 files changed, 178 insertions(+), 140 deletions(-) diff --git a/drivers/android/binder/allocation.rs b/drivers/android/binder/allocation.rs index e552f3350f18..b8d6b27169e0 100644 --- a/drivers/android/binder/allocation.rs +++ b/drivers/android/binder/allocation.rs @@ -3,12 +3,12 @@ // Copyright (C) 2024 Google LLC. use core::mem::{size_of, size_of_val, MaybeUninit}; -use core::ops::Range; use kernel::{ bindings, fs::file::{File, FileDescriptorReservation}, prelude::*, + range::{Range, RangeFixedSize, UsedRange}, sync::Arc, types::{ARef, AsBytes, FromBytes}, uaccess::UserSliceReader, @@ -25,7 +25,7 @@ #[derive(Default)] pub(crate) struct AllocationInfo { /// Range within the allocation where we can find the offsets to the object descriptors. - pub(crate) offsets: Option>, + pub(crate) offsets: Option>, /// The target node of the transaction this allocation is associated to. /// Not set for replies. pub(crate) target_node: Option, @@ -81,7 +81,17 @@ pub(crate) fn new( } } - fn size_check(&self, offset: usize, size: usize) -> Result { + fn size_check(&self, range: Range) -> Result { + let range = range.use_range(); + let overflow_fail = range.offset.checked_add(range.length).is_none(); + let cmp_size_fail = range.offset.wrapping_add(range.length) > self.size; + if overflow_fail || cmp_size_fail { + return Err(EFAULT); + } + Ok(range) + } + + fn size_check2(&self, offset: usize, size: usize) -> Result<()> { let overflow_fail = offset.checked_add(size).is_none(); let cmp_size_fail = offset.wrapping_add(size) > self.size; if overflow_fail || cmp_size_fail { @@ -90,37 +100,35 @@ fn size_check(&self, offset: usize, size: usize) -> Result { Ok(()) } - pub(crate) fn copy_into( - &self, - reader: &mut UserSliceReader, - offset: usize, - size: usize, - ) -> Result { - self.size_check(offset, size)?; + pub(crate) fn copy_into(&self, reader: &mut UserSliceReader, range: Range) -> Result { + let range = self.size_check(range)?; // SAFETY: While this object exists, the range allocator will keep the range allocated, and // in turn, the pages will be marked as in use. unsafe { - self.process - .pages - .copy_from_user_slice(reader, self.offset + offset, size) + self.process.pages.copy_from_user_slice( + reader, + self.offset + range.offset, + range.length, + ) } } pub(crate) fn read(&self, offset: usize) -> Result { - self.size_check(offset, size_of::())?; + self.size_check2(offset, size_of::())?; // SAFETY: While this object exists, the range allocator will keep the range allocated, and // in turn, the pages will be marked as in use. unsafe { self.process.pages.read(self.offset + offset) } } - pub(crate) fn write(&self, offset: usize, obj: &T) -> Result { - self.size_check(offset, size_of_val::(obj))?; + pub(crate) fn write(&self, range: Range, obj: &T) -> Result { + range.assert_length_eq(size_of_val::(obj))?; + let range = self.size_check(range)?; // SAFETY: While this object exists, the range allocator will keep the range allocated, and // in turn, the pages will be marked as in use. 
- unsafe { self.process.pages.write(self.offset + offset, obj) } + unsafe { self.process.pages.write(self.offset + range.offset, obj) } } pub(crate) fn fill_zero(&self) -> Result { @@ -143,7 +151,7 @@ pub(crate) fn get_or_init_info(&mut self) -> &mut AllocationInfo { self.allocation_info.get_or_insert_with(Default::default) } - pub(crate) fn set_info_offsets(&mut self, offsets: Range) { + pub(crate) fn set_info_offsets(&mut self, offsets: core::ops::Range) { self.get_or_init_info().offsets = Some(offsets); } @@ -172,13 +180,13 @@ pub(crate) fn info_add_fd_reserve(&mut self, num_fds: usize) -> Result { pub(crate) fn info_add_fd( &mut self, file: ARef, - buffer_offset: usize, + range: Range, close_on_free: bool, ) -> Result { self.get_or_init_info().file_list.files_to_translate.push( FileEntry { file, - buffer_offset, + range: RangeFixedSize::from_range(range)?, close_on_free, }, GFP_KERNEL, @@ -206,8 +214,9 @@ pub(crate) fn translate_fds(&mut self) -> Result { for file_info in files { let res = FileDescriptorReservation::get_unused_fd_flags(bindings::O_CLOEXEC)?; let fd = res.reserved_fd(); - self.write::(file_info.buffer_offset, &fd)?; - crate::trace::trace_transaction_fd_recv(self.debug_id, fd, file_info.buffer_offset); + let range = file_info.range.into_range(); + crate::trace::trace_transaction_fd_recv(self.debug_id, fd, range.peek_offset()); + self.write::(range, &fd)?; reservations.push( Reservation { @@ -343,20 +352,26 @@ pub(crate) fn read(&self, offset: usize) -> Result { self.alloc.read(offset) } - pub(crate) fn write(&self, offset: usize, obj: &T) -> Result { - if offset.checked_add(size_of::()).ok_or(EINVAL)? > self.limit { + pub(crate) fn write(&self, range: Range, obj: &T) -> Result { + range.assert_length_eq(size_of::())?; + let end = range + .peek_offset() + .checked_add(size_of::()) + .ok_or(EINVAL)?; + if end > self.limit { return Err(EINVAL); } - self.alloc.write(offset, obj) + self.alloc.write(range, obj) } pub(crate) fn transfer_binder_object( &self, - offset: usize, + range: Range, obj: &bindings::flat_binder_object, strong: bool, node_ref: NodeRef, ) -> Result { + range.assert_length_eq(size_of::())?; let mut newobj = FlatBinderObject::default(); let node = node_ref.node.clone(); if Arc::ptr_eq(&node_ref.node.owner, &self.alloc.process) { @@ -371,7 +386,7 @@ pub(crate) fn transfer_binder_object( newobj.flags = obj.flags; newobj.__bindgen_anon_1.binder = ptr as _; newobj.cookie = cookie as _; - self.write(offset, &newobj)?; + self.write(range, &newobj)?; // Increment the user ref count on the node. It will be decremented as part of the // destruction of the buffer, when we see a binder or weak-binder object. node.update_refcount(true, 1, strong); @@ -390,7 +405,7 @@ pub(crate) fn transfer_binder_object( }; newobj.flags = obj.flags; newobj.__bindgen_anon_1.handle = handle; - if self.write(offset, &newobj).is_err() { + if self.write(range, &newobj).is_err() { // Decrement ref count on the handle we just created. let _ = self .alloc @@ -561,7 +576,7 @@ struct FileEntry { /// The file for which a descriptor will be created in the recipient process. file: ARef, /// The offset in the buffer where the file descriptor is stored. - buffer_offset: usize, + range: RangeFixedSize<4>, /// Whether this fd should be closed when the allocation is freed. 
close_on_free: bool, } diff --git a/drivers/android/binder/error.rs b/drivers/android/binder/error.rs index dfce0c6270ca..8267134ff74b 100644 --- a/drivers/android/binder/error.rs +++ b/drivers/android/binder/error.rs @@ -67,6 +67,14 @@ fn from(source: kernel::fs::file::BadFdError) -> Self { } } +impl From for BinderError { + #[track_caller] + fn from(source: kernel::range::RangeError) -> Self { + unsafe { kernel::bindings::dump_stack() }; + BinderError::from(Error::from(source)) + } +} + impl From for BinderError { fn from(_: kernel::alloc::AllocError) -> Self { Self { diff --git a/drivers/android/binder/thread.rs b/drivers/android/binder/thread.rs index ca1c85815bf8..ae796700cd6c 100644 --- a/drivers/android/binder/thread.rs +++ b/drivers/android/binder/thread.rs @@ -12,6 +12,7 @@ fs::{File, LocalFile}, list::{AtomicTracker, List, ListArc, ListLinks, TryNewListArc}, prelude::*, + range::Range, security, seq_file::SeqFile, seq_print, @@ -56,12 +57,10 @@ struct ScatterGatherState { struct ScatterGatherEntry { /// The index in the offset array of the BINDER_TYPE_PTR that this entry originates from. obj_index: usize, - /// Offset in target buffer. - offset: usize, + /// Range in the target buffer. + range: Range, /// User address in source buffer. sender_uaddr: usize, - /// Number of bytes to copy. - length: usize, /// The minimum offset of the next fixup in this buffer. fixup_min_offset: usize, /// The offsets within this buffer that contain pointers which should be translated. @@ -170,15 +169,19 @@ fn validate_parent_fixup( return Err(EINVAL); } let new_min_offset = parent_offset.checked_add(length).ok_or(EINVAL)?; - if new_min_offset > sg_entry.length { + if new_min_offset > sg_entry.range.peek_length() { pr_warn!( "validate_parent_fixup: new_min_offset={}, sg_entry.length={}", new_min_offset, - sg_entry.length + sg_entry.range.peek_length() ); return Err(EINVAL); } - let target_offset = sg_entry.offset.checked_add(parent_offset).ok_or(EINVAL)?; + let target_offset = sg_entry + .range + .peek_offset() + .checked_add(parent_offset) + .ok_or(EINVAL)?; // The `ancestors_i + 1` operation can't overflow since the output of the addition is at // most `self.ancestors.len()`, which also fits in a usize. Ok(ParentFixupInfo { @@ -194,26 +197,20 @@ fn validate_parent_fixup( /// requested by the user using the `buffers_size` field of `binder_transaction_data_sg`. Each time /// we translate an object of type `BINDER_TYPE_PTR`, some of the unused buffer space is consumed. struct UnusedBufferSpace { - /// The start of the remaining space. - offset: usize, - /// The end of the remaining space. - limit: usize, + range: Range, } impl UnusedBufferSpace { /// Claim the next `size` bytes from the unused buffer space. The offset for the claimed chunk /// into the buffer is returned. - fn claim_next(&mut self, size: usize) -> Result { + fn claim_next(&mut self, size: usize) -> Result { // We require every chunk to be aligned. - let size = ptr_align(size); - let new_offset = self.offset.checked_add(size).ok_or(EINVAL)?; + let mut range = self.range.take_from_start(ptr_align(size))?; - if new_offset <= self.limit { - let offset = self.offset; - self.offset = new_offset; - Ok(offset) - } else { - Err(EINVAL) - } + // Truncate any extra bytes added for alignment. 
+ range.truncate(size)?; + + range.assert_aligned(size_of::())?; + Ok(range) } } @@ -759,7 +756,7 @@ pub(crate) fn restore_priority(&self, desired: &BinderPriority) { fn translate_object( &self, obj_index: usize, - offset: usize, + obj_range: Range, object: BinderObjectRef<'_>, view: &mut AllocationView<'_>, allow_fds: bool, @@ -778,7 +775,7 @@ fn translate_object( .as_arc_borrow() .get_node(ptr, cookie, flags, strong, self)?; security::binder_transfer_binder(&self.process.cred, &view.alloc.process.cred)?; - view.transfer_binder_object(offset, obj, strong, node)?; + view.transfer_binder_object(obj_range, obj, strong, node)?; } BinderObjectRef::Handle(obj) => { let strong = obj.hdr.type_ == BINDER_TYPE_HANDLE; @@ -786,7 +783,7 @@ fn translate_object( let handle = unsafe { obj.__bindgen_anon_1.handle } as _; let node = self.process.get_node_from_handle(handle, strong)?; security::binder_transfer_binder(&self.process.cred, &view.alloc.process.cred)?; - view.transfer_binder_object(offset, obj, strong, node)?; + view.transfer_binder_object(obj_range, obj, strong, node)?; } BinderObjectRef::Fd(obj) => { if !allow_fds { @@ -805,52 +802,61 @@ fn translate_object( &file, )?; + const FD_FIELD_OFFSET: usize = + ::core::mem::offset_of!(bindings::binder_fd_object, __bindgen_anon_1.fd) + as usize; + + let fd_field_range = { + // We're going to write to the fd field twice. Once now with u32::MAX, and + // again later once we know what the actual fd is going to be. + let mut range = obj_range.duplicate_range_careful(); + range.take_from_start(FD_FIELD_OFFSET)?; + range.truncate(size_of::())?; + range + }; + let mut obj_write = BinderFdObject::default(); obj_write.hdr.type_ = BINDER_TYPE_FD; // This will be overwritten with the actual fd when the transaction is received. obj_write.__bindgen_anon_1.fd = u32::MAX; obj_write.cookie = obj.cookie; - view.write::(offset, &obj_write)?; - - const FD_FIELD_OFFSET: usize = - ::core::mem::offset_of!(bindings::binder_fd_object, __bindgen_anon_1.fd) - as usize; + view.write::(obj_range, &obj_write)?; - let field_offset = offset + FD_FIELD_OFFSET; - crate::trace::trace_transaction_fd_send(view.alloc.debug_id, fd, field_offset); + crate::trace::trace_transaction_fd_send( + view.alloc.debug_id, + fd, + fd_field_range.peek_offset(), + ); - view.alloc.info_add_fd(file, field_offset, false)?; + view.alloc.info_add_fd(file, fd_field_range, false)?; } BinderObjectRef::Ptr(obj) => { let obj_length = obj.length.try_into().map_err(|_| EINVAL)?; - let alloc_offset = match sg_state.unused_buffer_space.claim_next(obj_length) { + let ptr_range = match sg_state.unused_buffer_space.claim_next(obj_length) { Ok(alloc_offset) => alloc_offset, Err(err) => { pr_warn!( - "Failed to claim space for a BINDER_TYPE_PTR. (offset: {}, limit: {}, size: {})", - sg_state.unused_buffer_space.offset, - sg_state.unused_buffer_space.limit, + "Failed to claim space for a BINDER_TYPE_PTR. 
(size: {})", obj_length, ); return Err(err.into()); } }; + let buffer_ptr_in_user_space = (view.alloc.ptr + ptr_range.peek_offset()) as u64; + let sg_state_idx = sg_state.sg_entries.len(); sg_state.sg_entries.push( ScatterGatherEntry { obj_index, - offset: alloc_offset, + range: ptr_range, sender_uaddr: obj.buffer as _, - length: obj_length, pointer_fixups: Vec::new(), fixup_min_offset: 0, }, GFP_KERNEL, )?; - let buffer_ptr_in_user_space = (view.alloc.ptr + alloc_offset) as u64; - if obj.flags & bindings::BINDER_BUFFER_FLAG_HAS_PARENT == 0 { sg_state.ancestors.clear(); sg_state.ancestors.push(sg_state_idx, GFP_KERNEL)?; @@ -898,7 +904,7 @@ fn translate_object( obj_write.length = obj.length; obj_write.parent = obj.parent; obj_write.parent_offset = obj.parent_offset; - view.write::(offset, &obj_write)?; + view.write::(obj_range, &obj_write)?; } BinderObjectRef::Fda(obj) => { if !allow_fds { @@ -949,10 +955,26 @@ fn translate_object( return Err(EINVAL.into()); } - for i in (0..fds_len).step_by(size_of::()) { + // We're saving a fixup to skip this range in this buffer, so we won't actually use + // it twice. + // + // TODO: Move this logic to the code that performs fixups. That way, we can avoid + // duplicating this range. + let (_, mut fds_range) = parent_entry + .range + .duplicate_range_careful() + .split_at(info.target_offset)?; + fds_range.truncate(fds_len)?; + + for fd_range in fds_range.iter_chunks(size_of::())? { let fd = { + // We're not actually using the range to copy into the allocation here, so + // this won't lead to double use of any indices. + let start = fd_range.peek_offset() - info.target_offset; + let end = fd_range.peek_end()? - info.target_offset; + let mut fd_bytes = [0u8; size_of::()]; - fd_bytes.copy_from_slice(&fda_bytes[i..i + size_of::()]); + fd_bytes.copy_from_slice(&fda_bytes[start..end]); u32::from_ne_bytes(fd_bytes) }; @@ -968,7 +990,7 @@ fn translate_object( // The `validate_parent_fixup` call ensuers that this addition will not // overflow. 
- view.alloc.info_add_fd(file, info.target_offset + i, true)?; + view.alloc.info_add_fd(file, fd_range, true)?; } drop(fda_bytes); @@ -977,45 +999,34 @@ fn translate_object( obj_write.num_fds = obj.num_fds; obj_write.parent = obj.parent; obj_write.parent_offset = obj.parent_offset; - view.write::(offset, &obj_write)?; + view.write::(obj_range, &obj_write)?; } } Ok(()) } - fn apply_sg(&self, alloc: &mut Allocation, sg_state: &mut ScatterGatherState) -> BinderResult { - for sg_entry in &mut sg_state.sg_entries { - let mut end_of_previous_fixup = sg_entry.offset; - let offset_end = sg_entry.offset.checked_add(sg_entry.length).ok_or(EINVAL)?; + fn apply_sg(&self, alloc: &mut Allocation, sg_state: ScatterGatherState) -> BinderResult { + for sg_entry in sg_state.sg_entries { + let mut buffer_range = sg_entry.range; - let mut reader = UserSlice::new(sg_entry.sender_uaddr as _, sg_entry.length).reader(); - for fixup in &mut sg_entry.pointer_fixups { + let mut reader = + UserSlice::new(sg_entry.sender_uaddr as _, buffer_range.peek_length()).reader(); + for fixup in sg_entry.pointer_fixups { let fixup_len = if fixup.skip == 0 { size_of::() } else { fixup.skip }; - let target_offset_end = fixup.target_offset.checked_add(fixup_len).ok_or(EINVAL)?; - if fixup.target_offset < end_of_previous_fixup || offset_end < target_offset_end { - pr_warn!( - "Fixups oob {} {} {} {}", - fixup.target_offset, - end_of_previous_fixup, - offset_end, - target_offset_end - ); - return Err(EINVAL.into()); - } + let between_fixup_range = buffer_range.take_until(fixup.target_offset)?; + let fixup_range = buffer_range.take_from_start(fixup_len)?; - let copy_off = end_of_previous_fixup; - let copy_len = fixup.target_offset - end_of_previous_fixup; - if let Err(err) = alloc.copy_into(&mut reader, copy_off, copy_len) { + if let Err(err) = alloc.copy_into(&mut reader, between_fixup_range) { pr_warn!("Failed copying into alloc: {:?}", err); return Err(err.into()); } if fixup.skip == 0 { - let res = alloc.write::(fixup.target_offset, &fixup.pointer_value); + let res = alloc.write::(fixup_range, &fixup.pointer_value); if let Err(err) = res { pr_warn!("Failed copying ptr into alloc: {:?}", err); return Err(err.into()); @@ -1025,11 +1036,8 @@ fn apply_sg(&self, alloc: &mut Allocation, sg_state: &mut ScatterGatherState) -> pr_warn!("Failed skipping {} from reader: {:?}", fixup_len, err); return Err(err.into()); } - end_of_previous_fixup = target_offset_end; } - let copy_off = end_of_previous_fixup; - let copy_len = offset_end - end_of_previous_fixup; - if let Err(err) = alloc.copy_into(&mut reader, copy_off, copy_len) { + if let Err(err) = alloc.copy_into(&mut reader, buffer_range) { pr_warn!("Failed copying remainder into alloc: {:?}", err); return Err(err.into()); } @@ -1066,16 +1074,15 @@ pub(crate) fn copy_transaction_data( None }; + let secctx_size = secctx.as_ref().map(|(_, ctx)| ctx.len()).unwrap_or(0); + let data_size = trd.data_size.try_into().map_err(|_| EINVAL)?; let aligned_data_size = ptr_align(data_size); let offsets_size = trd.offsets_size.try_into().map_err(|_| EINVAL)?; let aligned_offsets_size = ptr_align(offsets_size); let buffers_size = tr.buffers_size.try_into().map_err(|_| EINVAL)?; let aligned_buffers_size = ptr_align(buffers_size); - let aligned_secctx_size = secctx - .as_ref() - .map(|(_, ctx)| ptr_align(ctx.len())) - .unwrap_or(0); + let aligned_secctx_size = ptr_align(secctx_size); // This guarantees that at least `sizeof(usize)` bytes will be allocated. 
let len = usize::max( @@ -1086,7 +1093,27 @@ pub(crate) fn copy_transaction_data( .ok_or(ENOMEM)?, size_of::(), ); - let secctx_off = aligned_data_size + aligned_offsets_size + aligned_buffers_size; + + // Split the buffer size into sub-ranges. + let full_range = Range::new_area(len); + let (mut data_range, after_data) = full_range.split_within(aligned_data_size)?; + let (mut offsets_range, after_offsets) = after_data.split_within(aligned_offsets_size)?; + let (mut buffers_range, after_buffers) = + after_offsets.split_within(aligned_buffers_size)?; + let (mut secctx_range, _after_secctx) = after_buffers.split_within(aligned_secctx_size)?; + + // Assert alignment. + data_range.assert_aligned(size_of::())?; + offsets_range.assert_aligned(size_of::())?; + buffers_range.assert_aligned(size_of::())?; + secctx_range.assert_aligned(size_of::())?; + + // Truncate extra space added for the sake of alignment. + data_range.truncate(data_size)?; + offsets_range.truncate(offsets_size)?; + buffers_range.truncate(buffers_size)?; + secctx_range.truncate(secctx_size)?; + let mut alloc = match to_process.buffer_alloc(debug_id, len, is_oneway, self.process.task.pid()) { Ok(alloc) => alloc, @@ -1106,62 +1133,55 @@ pub(crate) fn copy_transaction_data( // all bit-patterns. let trd_data_ptr = unsafe { &trd.data.ptr }; let mut buffer_reader = UserSlice::new(trd_data_ptr.buffer as _, data_size).reader(); - let mut end_of_previous_object = 0; let mut sg_state = None; // Copy offsets if there are any. if offsets_size > 0 { { + // We will first copy the offsets from userspace into the allocation, and then read + // the offsets again later down. Therefore, we need to duplicate the offsets range + // to use it twice. + let copy_range = offsets_range.duplicate_range_careful(); let mut reader = UserSlice::new(trd_data_ptr.offsets as _, offsets_size).reader(); - alloc.copy_into(&mut reader, aligned_data_size, offsets_size)?; + alloc.copy_into(&mut reader, copy_range)?; } - let offsets_start = aligned_data_size; - let offsets_end = aligned_data_size + aligned_offsets_size; - // This state is used for BINDER_TYPE_PTR objects. let sg_state = sg_state.insert(ScatterGatherState { unused_buffer_space: UnusedBufferSpace { - offset: offsets_end, - limit: len, + range: buffers_range, }, sg_entries: Vec::new(), ancestors: Vec::new(), }); + let mut translated_offsets = offsets_range.peek_offset()..offsets_range.peek_offset(); + // Traverse the objects specified. let mut view = AllocationView::new(&mut alloc, data_size); - for (index, index_offset) in (offsets_start..offsets_end) - .step_by(size_of::()) - .enumerate() + for (obj_index, index_range) in + offsets_range.iter_chunks(size_of::())?.enumerate() { - let offset = view.alloc.read(index_offset)?; + translated_offsets.end = index_range.peek_end()?; + let start_of_next_object = view.alloc.read(index_range.use_range().offset)?; - if offset < end_of_previous_object { - pr_warn!("Got transaction with invalid offset."); - return Err(EINVAL.into()); - } + let (between_objs, data_rest) = data_range.split_at(start_of_next_object)?; // Copy data between two objects. 
- if end_of_previous_object < offset { - view.alloc.copy_into( - &mut buffer_reader, - end_of_previous_object, - offset - end_of_previous_object, - )?; - } + view.alloc.copy_into(&mut buffer_reader, between_objs)?; let mut object = BinderObject::read_from(&mut buffer_reader)?; + let (obj_range, data_after_obj) = data_rest.split_within(object.size())?; match self.translate_object( - index, - offset, + obj_index, + obj_range, object.as_ref(), &mut view, allow_fds, sg_state, ) { - Ok(()) => end_of_previous_object = offset + object.size(), + Ok(()) => {} Err(err) => { pr_warn!("Error while translating object."); return Err(err); @@ -1169,20 +1189,15 @@ pub(crate) fn copy_transaction_data( } // Update the indexes containing objects to clean up. - let offset_after_object = index_offset + size_of::(); - view.alloc - .set_info_offsets(offsets_start..offset_after_object); + view.alloc.set_info_offsets(translated_offsets.clone()); + data_range = data_after_obj; } } // Copy remaining raw data. - alloc.copy_into( - &mut buffer_reader, - end_of_previous_object, - data_size - end_of_previous_object, - )?; + alloc.copy_into(&mut buffer_reader, data_range)?; - if let Some(sg_state) = sg_state.as_mut() { + if let Some(sg_state) = sg_state { if let Err(err) = self.apply_sg(&mut alloc, sg_state) { pr_warn!("Failure in apply_sg: {:?}", err); return Err(err); @@ -1190,11 +1205,11 @@ pub(crate) fn copy_transaction_data( } if let Some((off_out, secctx)) = secctx.as_mut() { - if let Err(err) = alloc.write(secctx_off, secctx.as_bytes()) { + **off_out = secctx_range.peek_offset(); + if let Err(err) = alloc.write(secctx_range, secctx.as_bytes()) { pr_warn!("Failed to write security context: {:?}", err); return Err(err.into()); } - **off_out = secctx_off; } Ok(alloc) }