Rollup of 14 pull requests by Zalathar · Pull Request #147019 · rust-lang/rust · GitHub
Merged
Changes from 1 commit
Commits
32 commits
96f385b
RawVecInner: add missing `unsafe` to unsafe fns
btj Aug 7, 2025
bc7986e
Add attributes for #[global_allocator] functions
nikic Sep 18, 2025
42cf78f
llvm: update remarks support on LLVM 22
durin42 Sep 22, 2025
f5c6c95
Add an attribute to check the number of lanes in a SIMD vector after …
calebzulawski Sep 16, 2025
60548ff
Including spans in layout errors for all ADTs
calebzulawski Sep 24, 2025
f509dff
f16_f128: enable some more tests in Miri
RalfJung Sep 18, 2025
b2634e3
std: add support for armv7a-vex-v5 target
tropicaaal Sep 24, 2025
a86f140
do not materialise X in [X; 0] when X is unsizing a const
dingxiangfei2009 Aug 11, 2025
120162e
add test fixture for newly allowed const expr
dingxiangfei2009 Aug 12, 2025
b77de83
mark THIR use as candidate for constness check
dingxiangfei2009 Sep 24, 2025
a00f241
unstably constify float mul_add methods
Qelxiros Sep 18, 2025
aa75d34
Small string formatting cleanup
GuillaumeGomez Sep 24, 2025
852da23
Explicitly note `&[SocketAddr]` impl of `ToSocketAddrs`.
LawnGnome Sep 24, 2025
fe440ec
llvm: add a destructor to call releaseSerializer
cuviper Sep 24, 2025
bc37dd4
Remove an erroneous normalization step in `tests/run-make/linker-warn…
fmease Sep 24, 2025
932f3a8
rustdoc: Fix documentation for `--doctest-build-arg`
fmease Sep 25, 2025
85018f0
Use `LLVMDisposeTargetMachine`
Zalathar Sep 25, 2025
747019c
bootstrap.py: Respect build.jobs while building bootstrap tool
neuschaefer Sep 24, 2025
2acd80c
Rollup merge of #145067 - btj:patch-3, r=tgross35
Zalathar Sep 25, 2025
21b0e12
Rollup merge of #145277 - dingxiangfei2009:fold-coercion-into-const, …
Zalathar Sep 25, 2025
0a34928
Rollup merge of #145973 - vexide:vex-std, r=tgross35
Zalathar Sep 25, 2025
fab0646
Rollup merge of #146667 - calebzulawski:simd-mono-lane-limit, r=lcnr,…
Zalathar Sep 25, 2025
8e62f95
Rollup merge of #146735 - Qelxiros:const_mul_add, r=tgross35,RalfJung
Zalathar Sep 25, 2025
cec668f
Rollup merge of #146737 - RalfJung:f16-f128-miri, r=tgross35
Zalathar Sep 25, 2025
46e25aa
Rollup merge of #146766 - nikic:global-alloc-attr, r=nnethercote
Zalathar Sep 25, 2025
2565b27
Rollup merge of #146905 - durin42:llvm-22-bitstream-remarks, r=nikic
Zalathar Sep 25, 2025
231002f
Rollup merge of #146982 - fmease:fix-rmake-linker-warning, r=bjorn3
Zalathar Sep 25, 2025
62aa0ae
Rollup merge of #147005 - GuillaumeGomez:string-formatting-cleanup, r…
Zalathar Sep 25, 2025
3053a18
Rollup merge of #147007 - LawnGnome:tosocketaddrs-doc, r=tgross35
Zalathar Sep 25, 2025
fe4cceb
Rollup merge of #147008 - neuschaefer:bootstrap-jobs, r=Kobzol
Zalathar Sep 25, 2025
a632541
Rollup merge of #147013 - fmease:fix-docs-doctest-build-arg, r=Guilla…
Zalathar Sep 25, 2025
59866ef
Rollup merge of #147015 - Zalathar:dispose-tm, r=lqd
Zalathar Sep 25, 2025
RawVecInner: add missing unsafe to unsafe fns
- RawVecInner::grow_exact causes UB if called with len and additional
  arguments such that len + additional is less than the current
  capacity.  Indeed, in that case it calls Allocator::grow with a
  new_layout that is smaller than old_layout, which violates a safety
  precondition.

- All RawVecInner methods for resizing the buffer cause UB
  if called with an elem_layout different from the one used to initially
  allocate the buffer, because in that case Allocator::grow/shrink is called with
  an old_layout that does not fit the allocated block, which violates a
  safety precondition.

- RawVecInner::current_memory might cause UB if called with an elem_layout
  different from the one used to initially allocate the buffer, because
  the unchecked_mul might overflow.

- Furthermore, these methods cause UB if called with an elem_layout
  whose size is not a multiple of its alignment. Layout::repeat is used
  (in layout_array) to compute the allocation's layout when allocating,
  and it pads each element to keep array elements aligned; but simple
  multiplication is used (in current_memory) to compute the old
  allocation's layout when resizing or deallocating. The layout used to
  resize or deallocate therefore does not fit the allocated block, which
  violates a safety precondition (see the sketch after this commit message).
btj committed Sep 5, 2025
commit 96f385b20aa252629570cadeca16a9e159e6810c
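
The following sketch is an editor-added illustration of the last bullet, not part of the PR. Using only stable Layout APIs (pad_to_align in place of the unstable Layout::repeat that layout_array uses), it shows how a hypothetical element layout whose size is not a multiple of its alignment makes the padded allocation-time size disagree with a plain size * capacity reconstruction; no real Rust type has such a layout, which is exactly what the new const assertion in RawVec::new_in relies on.

use std::alloc::Layout;

fn main() {
    // Hypothetical element layout: size 6, align 4 (size is not a multiple
    // of the alignment). Constructible via Layout::from_size_align, but it
    // can never be T::LAYOUT for a real type T.
    let elem = Layout::from_size_align(6, 4).unwrap();
    let cap = 4;

    // Allocation-time size, mirroring layout_array/Layout::repeat: each
    // element is padded to its alignment, so the stride is 8, not 6.
    let padded = elem.pad_to_align();
    let allocated = Layout::from_size_align(padded.size() * cap, elem.align()).unwrap();
    assert_eq!(allocated.size(), 32);

    // Size that a plain `size * capacity` multiplication (as in
    // current_memory) would reconstruct when resizing or deallocating.
    let reconstructed = Layout::from_size_align(elem.size() * cap, elem.align()).unwrap();
    assert_eq!(reconstructed.size(), 24);

    // The reconstructed layout is smaller than the block that was actually
    // allocated, so passing it as old_layout to Allocator::grow, shrink, or
    // deallocate would violate their "layout must fit the block" precondition.
    assert_ne!(allocated.size(), reconstructed.size());
}
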
149 changes: 118 additions & 31 deletions library/alloc/src/raw_vec/mod.rs
@@ -177,6 +177,8 @@ impl<T, A: Allocator> RawVec<T, A> {
/// the returned `RawVec`.
#[inline]
pub(crate) const fn new_in(alloc: A) -> Self {
// Check assumption made in `current_memory`
const { assert!(T::LAYOUT.size() % T::LAYOUT.align() == 0) };
Self { inner: RawVecInner::new_in(alloc, Alignment::of::<T>()), _marker: PhantomData }
}

@@ -328,7 +330,8 @@ impl<T, A: Allocator> RawVec<T, A> {
#[inline]
#[track_caller]
pub(crate) fn reserve(&mut self, len: usize, additional: usize) {
self.inner.reserve(len, additional, T::LAYOUT)
// SAFETY: All calls on self.inner pass T::LAYOUT as the elem_layout
unsafe { self.inner.reserve(len, additional, T::LAYOUT) }
}

/// A specialized version of `self.reserve(len, 1)` which requires the
@@ -337,7 +340,8 @@ impl<T, A: Allocator> RawVec<T, A> {
#[inline(never)]
#[track_caller]
pub(crate) fn grow_one(&mut self) {
self.inner.grow_one(T::LAYOUT)
// SAFETY: All calls on self.inner pass T::LAYOUT as the elem_layout
unsafe { self.inner.grow_one(T::LAYOUT) }
}

/// The same as `reserve`, but returns on errors instead of panicking or aborting.
@@ -346,7 +350,8 @@ impl<T, A: Allocator> RawVec<T, A> {
len: usize,
additional: usize,
) -> Result<(), TryReserveError> {
self.inner.try_reserve(len, additional, T::LAYOUT)
// SAFETY: All calls on self.inner pass T::LAYOUT as the elem_layout
unsafe { self.inner.try_reserve(len, additional, T::LAYOUT) }
}

/// Ensures that the buffer contains at least enough space to hold `len +
@@ -369,7 +374,8 @@ impl<T, A: Allocator> RawVec<T, A> {
#[cfg(not(no_global_oom_handling))]
#[track_caller]
pub(crate) fn reserve_exact(&mut self, len: usize, additional: usize) {
self.inner.reserve_exact(len, additional, T::LAYOUT)
// SAFETY: All calls on self.inner pass T::LAYOUT as the elem_layout
unsafe { self.inner.reserve_exact(len, additional, T::LAYOUT) }
}

/// The same as `reserve_exact`, but returns on errors instead of panicking or aborting.
@@ -378,7 +384,8 @@ impl<T, A: Allocator> RawVec<T, A> {
len: usize,
additional: usize,
) -> Result<(), TryReserveError> {
self.inner.try_reserve_exact(len, additional, T::LAYOUT)
// SAFETY: All calls on self.inner pass T::LAYOUT as the elem_layout
unsafe { self.inner.try_reserve_exact(len, additional, T::LAYOUT) }
}

/// Shrinks the buffer down to the specified capacity. If the given amount
@@ -395,7 +402,8 @@ impl<T, A: Allocator> RawVec<T, A> {
#[track_caller]
#[inline]
pub(crate) fn shrink_to_fit(&mut self, cap: usize) {
self.inner.shrink_to_fit(cap, T::LAYOUT)
// SAFETY: All calls on self.inner pass T::LAYOUT as the elem_layout
unsafe { self.inner.shrink_to_fit(cap, T::LAYOUT) }
}
}

@@ -518,8 +526,12 @@ impl<A: Allocator> RawVecInner<A> {
&self.alloc
}

/// # Safety
/// - `elem_layout` must be valid for `self`, i.e. it must be the same `elem_layout` used to
/// initially construct `self`
/// - `elem_layout`'s size must be a multiple of its alignment
#[inline]
fn current_memory(&self, elem_layout: Layout) -> Option<(NonNull<u8>, Layout)> {
unsafe fn current_memory(&self, elem_layout: Layout) -> Option<(NonNull<u8>, Layout)> {
if elem_layout.size() == 0 || self.cap.as_inner() == 0 {
None
} else {
@@ -535,48 +547,67 @@ impl<A: Allocator> RawVecInner<A> {
}
}

/// # Safety
/// - `elem_layout` must be valid for `self`, i.e. it must be the same `elem_layout` used to
/// initially construct `self`
/// - `elem_layout`'s size must be a multiple of its alignment
#[cfg(not(no_global_oom_handling))]
#[inline]
#[track_caller]
fn reserve(&mut self, len: usize, additional: usize, elem_layout: Layout) {
unsafe fn reserve(&mut self, len: usize, additional: usize, elem_layout: Layout) {
// Callers expect this function to be very cheap when there is already sufficient capacity.
// Therefore, we move all the resizing and error-handling logic from grow_amortized and
// handle_reserve behind a call, while making sure that this function is likely to be
// inlined as just a comparison and a call if the comparison fails.
#[cold]
fn do_reserve_and_handle<A: Allocator>(
unsafe fn do_reserve_and_handle<A: Allocator>(
slf: &mut RawVecInner<A>,
len: usize,
additional: usize,
elem_layout: Layout,
) {
if let Err(err) = slf.grow_amortized(len, additional, elem_layout) {
// SAFETY: Precondition passed to caller
if let Err(err) = unsafe { slf.grow_amortized(len, additional, elem_layout) } {
handle_error(err);
}
}

if self.needs_to_grow(len, additional, elem_layout) {
do_reserve_and_handle(self, len, additional, elem_layout);
unsafe {
do_reserve_and_handle(self, len, additional, elem_layout);
}
}
}

/// # Safety
/// - `elem_layout` must be valid for `self`, i.e. it must be the same `elem_layout` used to
/// initially construct `self`
/// - `elem_layout`'s size must be a multiple of its alignment
#[cfg(not(no_global_oom_handling))]
#[inline]
#[track_caller]
fn grow_one(&mut self, elem_layout: Layout) {
if let Err(err) = self.grow_amortized(self.cap.as_inner(), 1, elem_layout) {
unsafe fn grow_one(&mut self, elem_layout: Layout) {
// SAFETY: Precondition passed to caller
if let Err(err) = unsafe { self.grow_amortized(self.cap.as_inner(), 1, elem_layout) } {
handle_error(err);
}
}

fn try_reserve(
/// # Safety
/// - `elem_layout` must be valid for `self`, i.e. it must be the same `elem_layout` used to
/// initially construct `self`
/// - `elem_layout`'s size must be a multiple of its alignment
unsafe fn try_reserve(
&mut self,
len: usize,
additional: usize,
elem_layout: Layout,
) -> Result<(), TryReserveError> {
if self.needs_to_grow(len, additional, elem_layout) {
self.grow_amortized(len, additional, elem_layout)?;
// SAFETY: Precondition passed to caller
unsafe {
self.grow_amortized(len, additional, elem_layout)?;
}
}
unsafe {
// Inform the optimizer that the reservation has succeeded or wasn't needed
Expand All @@ -585,22 +616,34 @@ impl<A: Allocator> RawVecInner<A> {
Ok(())
}

/// # Safety
/// - `elem_layout` must be valid for `self`, i.e. it must be the same `elem_layout` used to
/// initially construct `self`
/// - `elem_layout`'s size must be a multiple of its alignment
#[cfg(not(no_global_oom_handling))]
#[track_caller]
fn reserve_exact(&mut self, len: usize, additional: usize, elem_layout: Layout) {
if let Err(err) = self.try_reserve_exact(len, additional, elem_layout) {
unsafe fn reserve_exact(&mut self, len: usize, additional: usize, elem_layout: Layout) {
// SAFETY: Precondition passed to caller
if let Err(err) = unsafe { self.try_reserve_exact(len, additional, elem_layout) } {
handle_error(err);
}
}

fn try_reserve_exact(
/// # Safety
/// - `elem_layout` must be valid for `self`, i.e. it must be the same `elem_layout` used to
/// initially construct `self`
/// - `elem_layout`'s size must be a multiple of its alignment
unsafe fn try_reserve_exact(
&mut self,
len: usize,
additional: usize,
elem_layout: Layout,
) -> Result<(), TryReserveError> {
if self.needs_to_grow(len, additional, elem_layout) {
self.grow_exact(len, additional, elem_layout)?;
// SAFETY: Precondition passed to caller
unsafe {
self.grow_exact(len, additional, elem_layout)?;
}
}
unsafe {
// Inform the optimizer that the reservation has succeeded or wasn't needed
Expand All @@ -609,11 +652,16 @@ impl<A: Allocator> RawVecInner<A> {
Ok(())
}

/// # Safety
/// - `elem_layout` must be valid for `self`, i.e. it must be the same `elem_layout` used to
/// initially construct `self`
/// - `elem_layout`'s size must be a multiple of its alignment
/// - `cap` must be less than or equal to `self.capacity(elem_layout.size())`
#[cfg(not(no_global_oom_handling))]
#[inline]
#[track_caller]
fn shrink_to_fit(&mut self, cap: usize, elem_layout: Layout) {
if let Err(err) = self.shrink(cap, elem_layout) {
unsafe fn shrink_to_fit(&mut self, cap: usize, elem_layout: Layout) {
if let Err(err) = unsafe { self.shrink(cap, elem_layout) } {
handle_error(err);
}
}
@@ -632,7 +680,13 @@ impl<A: Allocator> RawVecInner<A> {
self.cap = unsafe { Cap::new_unchecked(cap) };
}

fn grow_amortized(
/// # Safety
/// - `elem_layout` must be valid for `self`, i.e. it must be the same `elem_layout` used to
/// initially construct `self`
/// - `elem_layout`'s size must be a multiple of its alignment
/// - The sum of `len` and `additional` must be greater than or equal to
/// `self.capacity(elem_layout.size())`
unsafe fn grow_amortized(
&mut self,
len: usize,
additional: usize,
@@ -657,14 +711,25 @@ impl<A: Allocator> RawVecInner<A> {

let new_layout = layout_array(cap, elem_layout)?;

let ptr = finish_grow(new_layout, self.current_memory(elem_layout), &mut self.alloc)?;
// SAFETY: layout_array would have resulted in a capacity overflow if we tried to allocate more than `isize::MAX` items
// SAFETY:
// - For the `current_memory` call: Precondition passed to caller
// - For the `finish_grow` call: Precondition passed to caller
// + `current_memory` does the right thing
let ptr =
unsafe { finish_grow(new_layout, self.current_memory(elem_layout), &mut self.alloc)? };

// SAFETY: layout_array would have resulted in a capacity overflow if we tried to allocate more than `isize::MAX` items
unsafe { self.set_ptr_and_cap(ptr, cap) };
Ok(())
}

fn grow_exact(
/// # Safety
/// - `elem_layout` must be valid for `self`, i.e. it must be the same `elem_layout` used to
/// initially construct `self`
/// - `elem_layout`'s size must be a multiple of its alignment
/// - The sum of `len` and `additional` must be greater than or equal to
/// `self.capacity(elem_layout.size())`
unsafe fn grow_exact(
&mut self,
len: usize,
additional: usize,
@@ -679,17 +744,27 @@ impl<A: Allocator> RawVecInner<A> {
let cap = len.checked_add(additional).ok_or(CapacityOverflow)?;
let new_layout = layout_array(cap, elem_layout)?;

let ptr = finish_grow(new_layout, self.current_memory(elem_layout), &mut self.alloc)?;
// SAFETY:
// - For the `current_memory` call: Precondition passed to caller
// - For the `finish_grow` call: Precondition passed to caller
// + `current_memory` does the right thing
let ptr =
unsafe { finish_grow(new_layout, self.current_memory(elem_layout), &mut self.alloc)? };
// SAFETY: layout_array would have resulted in a capacity overflow if we tried to allocate more than `isize::MAX` items
unsafe {
self.set_ptr_and_cap(ptr, cap);
}
Ok(())
}

/// # Safety
/// - `elem_layout` must be valid for `self`, i.e. it must be the same `elem_layout` used to
/// initially construct `self`
/// - `elem_layout`'s size must be a multiple of its alignment
/// - `cap` must be less than or equal to `self.capacity(elem_layout.size())`
#[cfg(not(no_global_oom_handling))]
#[inline]
fn shrink(&mut self, cap: usize, elem_layout: Layout) -> Result<(), TryReserveError> {
unsafe fn shrink(&mut self, cap: usize, elem_layout: Layout) -> Result<(), TryReserveError> {
assert!(cap <= self.capacity(elem_layout.size()), "Tried to shrink to a larger capacity");
// SAFETY: Just checked this isn't trying to grow
unsafe { self.shrink_unchecked(cap, elem_layout) }
@@ -711,8 +786,12 @@ impl<A: Allocator> RawVecInner<A> {
cap: usize,
elem_layout: Layout,
) -> Result<(), TryReserveError> {
let (ptr, layout) =
if let Some(mem) = self.current_memory(elem_layout) { mem } else { return Ok(()) };
// SAFETY: Precondition passed to caller
let (ptr, layout) = if let Some(mem) = unsafe { self.current_memory(elem_layout) } {
mem
} else {
return Ok(());
};

// If shrinking to 0, deallocate the buffer. We don't reach this point
// for the T::IS_ZST case since current_memory() will have returned
@@ -748,18 +827,26 @@ impl<A: Allocator> RawVecInner<A> {
/// Ideally this function would take `self` by move, but it cannot because it exists to be
/// called from a `Drop` impl.
unsafe fn deallocate(&mut self, elem_layout: Layout) {
if let Some((ptr, layout)) = self.current_memory(elem_layout) {
// SAFETY: Precondition passed to caller
if let Some((ptr, layout)) = unsafe { self.current_memory(elem_layout) } {
unsafe {
self.alloc.deallocate(ptr, layout);
}
}
}
}

/// # Safety
/// If `current_memory` matches `Some((ptr, old_layout))`:
/// - `ptr` must denote a block of memory *currently allocated* via `alloc`
/// - `old_layout` must *fit* that block of memory
/// - `new_layout` must have the same alignment as `old_layout`
/// - `new_layout.size()` must be greater than or equal to `old_layout.size()`
/// If `current_memory` is `None`, this function is safe.
// not marked inline(never) since we want optimizers to be able to observe the specifics of this
// function, see tests/codegen-llvm/vec-reserve-extend.rs.
#[cold]
fn finish_grow<A>(
unsafe fn finish_grow<A>(
new_layout: Layout,
current_memory: Option<(NonNull<u8>, Layout)>,
alloc: &mut A,
10 changes: 5 additions & 5 deletions library/alloc/src/raw_vec/tests.rs
@@ -85,7 +85,7 @@ struct ZST;
fn zst_sanity<T>(v: &RawVec<T>) {
assert_eq!(v.capacity(), usize::MAX);
assert_eq!(v.ptr(), core::ptr::Unique::<T>::dangling().as_ptr());
assert_eq!(v.inner.current_memory(T::LAYOUT), None);
assert_eq!(unsafe { v.inner.current_memory(T::LAYOUT) }, None);
}

#[test]
@@ -126,12 +126,12 @@ fn zst() {
assert_eq!(v.try_reserve_exact(101, usize::MAX - 100), cap_err);
zst_sanity(&v);

assert_eq!(v.inner.grow_amortized(100, usize::MAX - 100, ZST::LAYOUT), cap_err);
assert_eq!(v.inner.grow_amortized(101, usize::MAX - 100, ZST::LAYOUT), cap_err);
assert_eq!(unsafe { v.inner.grow_amortized(100, usize::MAX - 100, ZST::LAYOUT) }, cap_err);
assert_eq!(unsafe { v.inner.grow_amortized(101, usize::MAX - 100, ZST::LAYOUT) }, cap_err);
zst_sanity(&v);

assert_eq!(v.inner.grow_exact(100, usize::MAX - 100, ZST::LAYOUT), cap_err);
assert_eq!(v.inner.grow_exact(101, usize::MAX - 100, ZST::LAYOUT), cap_err);
assert_eq!(unsafe { v.inner.grow_exact(100, usize::MAX - 100, ZST::LAYOUT) }, cap_err);
assert_eq!(unsafe { v.inner.grow_exact(101, usize::MAX - 100, ZST::LAYOUT) }, cap_err);
zst_sanity(&v);
}
