22 Commits

Author SHA1 Message Date
dependabot[bot]
476e249be3 Bump actions/checkout from 5 to 6
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-27 09:32:39 +01:00
a05739810d Enforce Item granularity 2025-11-27 09:30:03 +01:00
ed6676d345 Avoid broken docs in dependency 2025-11-27 09:30:03 +01:00
003f6a02d7 Prepare for release v0.3.2
This is a bugfix release and contains no new functionality.
2025-09-04 18:49:51 +02:00
d72827a74d Add missing fmt impls 2025-09-03 09:02:33 +02:00
df0f28b386 Relax sized bounds for std::sync wrappers 2025-09-03 09:02:33 +02:00
dependabot[bot]
8dec9db799 Bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-18 12:58:16 +02:00
28d67e871c Make elided lifetimes explicit 2025-08-18 12:51:08 +02:00
90843416e3 Missing section header 2025-05-13 16:09:39 +02:00
522ee3ffe5 Prepare release v0.3.1 2025-05-13 16:04:47 +02:00
1791ef5243 Update changelog 2025-05-13 15:57:30 +02:00
4b872c1262 Remove deprecated Cargo.toml options 2025-05-13 15:57:30 +02:00
99bca9852c Opt in to edition 2024 formatting 2025-05-13 15:57:30 +02:00
b396016224 Remove bors config 2025-05-13 15:57:30 +02:00
c29ccc4f4d Avoid updating deps on older versions 2025-04-10 20:58:45 +02:00
8ec32bdf16 Reorganise features
Now the features do not directly enable each other, which should silence
extraneous deprecation warnings while running tests. This can be cleaned
up in the next BC break when the old features are removed.
2025-04-10 20:58:45 +02:00
ccc4c98791 Rename dependencies to their crate names
Keep the old names but deprecate them so we can remove them in 0.4.
2025-04-10 20:58:45 +02:00
9a79b53147 Merge pull request #44 from bertptrs/reset-feature
Allow resetting dependencies of specific mutexes
2025-03-10 20:16:59 +01:00
af366ca3af Document new functionality 2025-03-10 19:32:24 +01:00
1ad0b2756e Rework CI for experimental features 2025-03-10 18:31:47 +01:00
d1a6b93ea8 Add new reset_dependencies API 2025-03-01 13:25:20 +01:00
ebdb6a18fe Restructure stdsync module 2025-03-01 12:39:55 +01:00
15 changed files with 946 additions and 738 deletions

View File

@@ -15,50 +15,64 @@ jobs:
strategy:
matrix:
rust:
- "1.74" # Current minimum for experimental features
- stable
- beta
- nightly
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@v1
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ matrix.rust }}
components: rustfmt, clippy
components: clippy
# Make sure we test with recent deps
- run: cargo update
# Note: some crates broke BC with 1.74 so we use the locked deps
if: "${{ matrix.rust != '1.74' }}"
- run: cargo build --all-features --all-targets
- run: cargo test --all-features
- run: cargo fmt --all -- --check
# Note: Rust 1.74 doesn't understand edition 2024 formatting so no point
if: "${{ matrix.rust != '1.74' }}"
- run: cargo clippy --all-features --all-targets -- -D warnings
msrv:
name: MSRV
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@v1
- uses: dtolnay/rust-toolchain@1.74
with:
toolchain: "1.70"
- run: cargo test --all-features
# Test everything except experimental features.
- run: cargo test --features backtraces,lock_api,parking_lot
formatting:
name: Formatting
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
# Use nightly formatting options
- uses: dtolnay/rust-toolchain@nightly
with:
components: rustfmt
- run: cargo fmt --check
docs:
name: Documentation build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@v1
with:
toolchain: nightly
- uses: dtolnay/rust-toolchain@nightly
- name: Build documentation
env:
# Build the docs like docs.rs builds it
RUSTDOCFLAGS: --cfg docsrs
run: cargo doc --all-features
run: cargo doc --all-features --no-deps

View File

@@ -6,6 +6,16 @@ adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.3.2]
### Fixed
- `std::sync` wrappers now no longer incorrectly require `T: Sized`
- Added missing `Display` and `Debug` implementations to `RwLock(Read|Write)Guard`.
## [0.3.1]
### Added
- On Rust 1.80 or newer, a wrapper for `std::sync::LazyLock` is now available. The MSRV has not been
@@ -15,10 +25,28 @@ adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
`tracing_mutex::parkinglot` is now identical to `parking_lot`, and an example showing how to use
it as a drop-in replacement was added.
- Introduced `experimental` feature, which will be used going forward to help evolve the API outside
the normal stability guarantees. APIs under this feature are not subject to semver or MSRV
guarantees.
- Added experimental API `tracing_mutex::util::reset_dependencies`, which can be used to reset the
  ordering information of a specific mutex when you need to reorder them. This API is unsafe, as its
  use invalidates the deadlock prevention guarantees this library makes.
### Changed
- Reworked CI to better test continued support for the minimum supported Rust version
### Fixed
- Support for `parking_lot` and `lock_api` can now be enabled properly with the matching feature
names.
### Deprecated
- The `parkinglot` and `lockapi` features have been deprecated. They will be removed in version 0.4.
  Switch to the `parking_lot` and `lock_api` features, respectively.
## [0.3.0] - 2023-09-09
### Added
@@ -106,7 +134,9 @@ adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
Initial release.
[Unreleased]: https://github.com/bertptrs/tracing-mutex/compare/v0.3.0...HEAD
[Unreleased]: https://github.com/bertptrs/tracing-mutex/compare/v0.3.2...HEAD
[0.3.2]: https://github.com/bertptrs/tracing-mutex/compare/v0.3.1...v0.3.2
[0.3.1]: https://github.com/bertptrs/tracing-mutex/compare/v0.3.0...v0.3.1
[0.3.0]: https://github.com/bertptrs/tracing-mutex/compare/v0.2.1...v0.3.0
[0.2.1]: https://github.com/bertptrs/tracing-mutex/compare/v0.2.0...v0.2.1
[0.2.0]: https://github.com/bertptrs/tracing-mutex/compare/v0.1.2...v0.2.0
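
To make the 0.3.2 "Fixed" entries above concrete, here is a minimal sketch (not taken from the repository) that exercises the guard `fmt` forwarding added in this release; it assumes a crate depending on `tracing-mutex` 0.3.2:

```rust
use tracing_mutex::stdsync::tracing::RwLock;

fn main() {
    let lock = RwLock::new(String::from("hello"));

    // As of 0.3.2 the read/write guards forward Display and Debug to the
    // wrapped std guard, so they can be printed directly.
    let guard = lock.read().unwrap();
    println!("{guard}");   // hello
    println!("{guard:?}"); // "hello"
}
```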

Cargo.lock (generated, 2 lines changed)
View File

@@ -597,7 +597,7 @@ dependencies = [
[[package]]
name = "tracing-mutex"
version = "0.3.0"
version = "0.3.2"
dependencies = [
"autocfg",
"criterion",

View File

@@ -1,10 +1,8 @@
[package]
name = "tracing-mutex"
version = "0.3.0"
authors = ["Bert Peters <bert@bertptrs.nl>"]
version = "0.3.2"
edition = "2021"
license = "MIT OR Apache-2.0"
documentation = "https://docs.rs/tracing-mutex"
categories = ["concurrency", "development-tools::debugging"]
keywords = ["mutex", "rwlock", "once", "thread"]
description = "Ensure deadlock-free mutexes by allocating in order, or else."
@@ -37,9 +35,13 @@ required-features = ["parkinglot"]
[features]
default = ["backtraces"]
backtraces = []
# Feature names do not match crate names pending namespaced features.
lockapi = ["lock_api"]
parkinglot = ["parking_lot", "lockapi"]
experimental = []
lock_api = ["dep:lock_api"]
parking_lot = ["dep:parking_lot", "lock_api"]
# Deprecated feature names from when cargo couldn't distinguish between dep and feature
lockapi = ["dep:lock_api"]
parkinglot = ["dep:parking_lot", "lock_api"]
[build-dependencies]
autocfg = "1.4.0"

View File

@@ -1,11 +1,11 @@
use std::sync::Arc;
use std::sync::Mutex;
use criterion::criterion_group;
use criterion::criterion_main;
use criterion::BenchmarkId;
use criterion::Criterion;
use criterion::Throughput;
use criterion::criterion_group;
use criterion::criterion_main;
use rand::prelude::*;
use tracing_mutex::stdsync::tracing::Mutex as TracingMutex;

View File

@@ -1,6 +0,0 @@
status = [
'Rust project (1.70)',
'Rust project (stable)',
'Rust project (beta)',
'Documentation build',
]

rustfmt.toml (new file, 2 lines added)
View File

@@ -0,0 +1,2 @@
style_edition="2024"
imports_granularity="Item"

View File

@@ -1,7 +1,7 @@
use std::cell::Cell;
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::collections::HashSet;
use std::collections::hash_map::Entry;
use std::hash::Hash;
type Order = usize;

View File

@@ -30,12 +30,17 @@
//!
//! - `backtraces`: Enables capturing backtraces of mutex dependencies, to make it easier to
//! determine what sequence of events would trigger a deadlock. This is enabled by default, but if
//! the performance overhead is unaccceptable, it can be disabled by disabling default features.
//! the performance overhead is unacceptable, it can be disabled by disabling default features.
//!
//! - `lockapi`: Enables the wrapper lock for [`lock_api`][lock_api] locks
//!
//! - `parkinglot`: Enables wrapper types for [`parking_lot`][parking_lot] mutexes
//!
//! - `experimental`: Enables experimental features. Experimental features exist to try out new APIs
//! before committing to them. As such, breaking changes may be introduced to them between otherwise
//! semver-compatible versions, and the MSRV does not apply to experimental features.
//!
//! # Performance considerations
//!
//! Tracing a mutex adds overhead to certain mutex operations in order to do the required
@@ -72,33 +77,44 @@ use std::fmt;
use std::marker::PhantomData;
use std::ops::Deref;
use std::ops::DerefMut;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
use std::sync::Mutex;
use std::sync::MutexGuard;
use std::sync::OnceLock;
use std::sync::PoisonError;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
#[cfg(feature = "lockapi")]
#[cfg(feature = "lock_api")]
#[cfg_attr(docsrs, doc(cfg(feature = "lockapi")))]
#[deprecated = "The top-level re-export `lock_api` is deprecated. Use `tracing_mutex::lockapi::raw` instead"]
pub use lock_api;
#[cfg(feature = "parkinglot")]
#[cfg(feature = "parking_lot")]
#[cfg_attr(docsrs, doc(cfg(feature = "parkinglot")))]
#[deprecated = "The top-level re-export `parking_lot` is deprecated. Use `tracing_mutex::parkinglot::raw` instead"]
pub use parking_lot;
use graph::DiGraph;
use reporting::Dep;
use reporting::Reportable;
use crate::graph::DiGraph;
mod graph;
#[cfg(feature = "lockapi")]
#[cfg_attr(docsrs, doc(cfg(feature = "lockapi")))]
#[cfg(any(feature = "lock_api", feature = "lockapi"))]
#[cfg_attr(docsrs, doc(cfg(feature = "lock_api")))]
#[cfg_attr(
all(not(docsrs), feature = "lockapi", not(feature = "lock_api")),
deprecated = "The `lockapi` feature has been renamed `lock_api`"
)]
pub mod lockapi;
#[cfg(feature = "parkinglot")]
#[cfg_attr(docsrs, doc(cfg(feature = "parkinglot")))]
#[cfg(any(feature = "parking_lot", feature = "parkinglot"))]
#[cfg_attr(docsrs, doc(cfg(feature = "parking_lot")))]
#[cfg_attr(
all(not(docsrs), feature = "parkinglot", not(feature = "parking_lot")),
deprecated = "The `parkinglot` feature has been renamed `parking_lot`"
)]
pub mod parkinglot;
mod reporting;
pub mod stdsync;
pub mod util;
thread_local! {
/// Stack to track which locks are held
@@ -147,7 +163,7 @@ impl MutexId {
/// # Panics
///
/// This method panics if the new dependency would introduce a cycle.
pub fn get_borrowed(&self) -> BorrowedMutex {
pub fn get_borrowed(&self) -> BorrowedMutex<'_> {
self.mark_held();
BorrowedMutex {
id: self,
@@ -335,9 +351,11 @@ mod tests {
drop(b);
// If b's destructor ran correctly we can now add an edge from c to a.
assert!(get_dependency_graph()
.add_edge(c.value(), a.value(), Dep::capture)
.is_ok());
assert!(
get_dependency_graph()
.add_edge(c.value(), a.value(), Dep::capture)
.is_ok()
);
}
/// Test creating a cycle, then panicking.
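
The feature documentation and tests in the diff above revolve around cycle detection in the lock dependency graph. As a rough, hedged illustration of how that detection surfaces through the public wrappers (this program is not part of the repository), locking two mutexes in conflicting orders panics instead of deadlocking:

```rust
use std::panic::AssertUnwindSafe;
use std::panic::catch_unwind;

use tracing_mutex::stdsync::tracing::Mutex;

fn main() {
    let a = Mutex::new(());
    let b = Mutex::new(());

    // Establish the lock order a -> b.
    {
        let _a = a.lock().unwrap();
        let _b = b.lock().unwrap();
    }

    // Locking in the opposite order would close a cycle in the dependency
    // graph, so the tracing wrapper panics instead of risking a deadlock.
    let _b = b.lock().unwrap();
    let result = catch_unwind(AssertUnwindSafe(|| {
        let _a = a.lock().unwrap();
    }));
    assert!(result.is_err());
}
```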

View File

@@ -8,6 +8,7 @@
//! Wrapped mutexes are at least one `usize` larger than the types they wrapped, and must be aligned
//! to `usize` boundaries. As such, libraries with many mutexes may want to consider the additional
//! required memory.
pub use lock_api as raw;
use lock_api::GuardNoSend;
use lock_api::RawMutex;
use lock_api::RawMutexFair;
@@ -24,6 +25,8 @@ use lock_api::RawRwLockUpgradeFair;
use lock_api::RawRwLockUpgradeTimed;
use crate::LazyMutexId;
use crate::MutexId;
use crate::util::PrivateTraced;
/// Tracing wrapper for all [`lock_api`] traits.
///
@@ -86,6 +89,12 @@ impl<T> TracingWrapper<T> {
}
}
impl<T> PrivateTraced for TracingWrapper<T> {
fn get_id(&self) -> &MutexId {
&self.id
}
}
unsafe impl<T> RawMutex for TracingWrapper<T>
where
T: RawMutex,

View File

@@ -47,20 +47,23 @@ pub use parking_lot::WaitTimeoutResult;
pub mod tracing;
// Skip reformatting the combined imports as it duplicates the guards
#[rustfmt::skip]
#[cfg(debug_assertions)]
pub use tracing::{
const_fair_mutex, const_mutex, const_reentrant_mutex, const_rwlock, FairMutex, FairMutexGuard,
MappedFairMutexGuard, MappedMutexGuard, MappedReentrantMutexGuard, MappedRwLockReadGuard,
MappedRwLockWriteGuard, Mutex, MutexGuard, Once, RawFairMutex, RawMutex, RawRwLock,
ReentrantMutex, ReentrantMutexGuard, RwLock, RwLockReadGuard, RwLockUpgradableReadGuard,
RwLockWriteGuard,
FairMutex, FairMutexGuard, MappedFairMutexGuard, MappedMutexGuard, MappedReentrantMutexGuard,
MappedRwLockReadGuard, MappedRwLockWriteGuard, Mutex, MutexGuard, Once, RawFairMutex, RawMutex,
RawRwLock, ReentrantMutex, ReentrantMutexGuard, RwLock, RwLockReadGuard,
RwLockUpgradableReadGuard, RwLockWriteGuard, const_fair_mutex, const_mutex,
const_reentrant_mutex, const_rwlock,
};
#[rustfmt::skip]
#[cfg(not(debug_assertions))]
pub use parking_lot::{
const_fair_mutex, const_mutex, const_reentrant_mutex, const_rwlock, FairMutex, FairMutexGuard,
MappedFairMutexGuard, MappedMutexGuard, MappedReentrantMutexGuard, MappedRwLockReadGuard,
MappedRwLockWriteGuard, Mutex, MutexGuard, Once, RawFairMutex, RawMutex, RawRwLock,
ReentrantMutex, ReentrantMutexGuard, RwLock, RwLockReadGuard, RwLockUpgradableReadGuard,
RwLockWriteGuard,
FairMutex, FairMutexGuard, MappedFairMutexGuard, MappedMutexGuard, MappedReentrantMutexGuard,
MappedRwLockReadGuard, MappedRwLockWriteGuard, Mutex, MutexGuard, Once, RawFairMutex, RawMutex,
RawRwLock, ReentrantMutex, ReentrantMutexGuard, RwLock, RwLockReadGuard,
RwLockUpgradableReadGuard, RwLockWriteGuard, const_fair_mutex, const_mutex,
const_reentrant_mutex, const_rwlock,
};
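
Mirroring the changelog note about using `tracing_mutex::parkinglot` as a drop-in replacement, a minimal hedged sketch along those lines (not copied from the repository, and assuming the `parking_lot` feature is enabled):

```rust
use tracing_mutex::parkinglot::Mutex;

fn main() {
    // Same API as parking_lot::Mutex: the tracing wrapper in debug builds,
    // the plain parking_lot re-export otherwise. No LockResult to unwrap.
    let value = Mutex::new(0u32);
    *value.lock() += 1;
    assert_eq!(*value.lock(), 1);
}
```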

View File

@@ -2,8 +2,8 @@
pub use parking_lot::OnceState;
pub use parking_lot::RawThreadId;
use crate::lockapi::TracingWrapper;
use crate::LazyMutexId;
use crate::lockapi::TracingWrapper;
pub type RawFairMutex = TracingWrapper<::parking_lot::RawFairMutex>;
pub type RawMutex = TracingWrapper<::parking_lot::RawMutex>;

View File

@@ -20,11 +20,14 @@
//! ```
pub use std::sync as raw;
// Skip reformatting the combined imports as it duplicates the guards
#[rustfmt::skip]
#[cfg(not(debug_assertions))]
pub use std::sync::{
Condvar, Mutex, MutexGuard, Once, OnceLock, RwLock, RwLockReadGuard, RwLockWriteGuard,
};
#[rustfmt::skip]
#[cfg(debug_assertions)]
pub use tracing::{
Condvar, Mutex, MutexGuard, Once, OnceLock, RwLock, RwLockReadGuard, RwLockWriteGuard,
@@ -37,686 +40,4 @@ pub use tracing::LazyLock;
pub use std::sync::LazyLock;
/// Dependency tracing versions of [`std::sync`].
pub mod tracing {
use std::fmt;
use std::ops::Deref;
use std::ops::DerefMut;
use std::sync;
use std::sync::LockResult;
use std::sync::OnceState;
use std::sync::PoisonError;
use std::sync::TryLockError;
use std::sync::TryLockResult;
use std::sync::WaitTimeoutResult;
use std::time::Duration;
use crate::BorrowedMutex;
use crate::LazyMutexId;
#[cfg(has_std__sync__LazyLock)]
pub use lazy_lock::LazyLock;
#[cfg(has_std__sync__LazyLock)]
mod lazy_lock;
/// Wrapper for [`std::sync::Mutex`].
///
/// Refer to the [crate-level][`crate`] documentation for the differences between this struct and
/// the one it wraps.
#[derive(Debug, Default)]
pub struct Mutex<T> {
inner: sync::Mutex<T>,
id: LazyMutexId,
}
/// Wrapper for [`std::sync::MutexGuard`].
///
/// Refer to the [crate-level][`crate`] documentation for the differences between this struct and
/// the one it wraps.
#[derive(Debug)]
pub struct MutexGuard<'a, T> {
inner: sync::MutexGuard<'a, T>,
_mutex: BorrowedMutex<'a>,
}
fn map_lockresult<T, I, F>(result: LockResult<I>, mapper: F) -> LockResult<T>
where
F: FnOnce(I) -> T,
{
match result {
Ok(inner) => Ok(mapper(inner)),
Err(poisoned) => Err(PoisonError::new(mapper(poisoned.into_inner()))),
}
}
fn map_trylockresult<T, I, F>(result: TryLockResult<I>, mapper: F) -> TryLockResult<T>
where
F: FnOnce(I) -> T,
{
match result {
Ok(inner) => Ok(mapper(inner)),
Err(TryLockError::WouldBlock) => Err(TryLockError::WouldBlock),
Err(TryLockError::Poisoned(poisoned)) => {
Err(PoisonError::new(mapper(poisoned.into_inner())).into())
}
}
}
impl<T> Mutex<T> {
/// Create a new tracing mutex with the provided value.
pub const fn new(t: T) -> Self {
Self {
inner: sync::Mutex::new(t),
id: LazyMutexId::new(),
}
}
/// Wrapper for [`std::sync::Mutex::lock`].
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
#[track_caller]
pub fn lock(&self) -> LockResult<MutexGuard<T>> {
let mutex = self.id.get_borrowed();
let result = self.inner.lock();
let mapper = |guard| MutexGuard {
_mutex: mutex,
inner: guard,
};
map_lockresult(result, mapper)
}
/// Wrapper for [`std::sync::Mutex::try_lock`].
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
#[track_caller]
pub fn try_lock(&self) -> TryLockResult<MutexGuard<T>> {
let mutex = self.id.get_borrowed();
let result = self.inner.try_lock();
let mapper = |guard| MutexGuard {
_mutex: mutex,
inner: guard,
};
map_trylockresult(result, mapper)
}
/// Wrapper for [`std::sync::Mutex::is_poisoned`].
pub fn is_poisoned(&self) -> bool {
self.inner.is_poisoned()
}
/// Return a mutable reference to the underlying data.
///
/// This method does not block as the locking is handled compile-time by the type system.
pub fn get_mut(&mut self) -> LockResult<&mut T> {
self.inner.get_mut()
}
/// Unwrap the mutex and return its inner value.
pub fn into_inner(self) -> LockResult<T> {
self.inner.into_inner()
}
}
impl<T> From<T> for Mutex<T> {
fn from(t: T) -> Self {
Self::new(t)
}
}
impl<T> Deref for MutexGuard<'_, T> {
type Target = T;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
impl<T> DerefMut for MutexGuard<'_, T> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.inner
}
}
impl<T: fmt::Display> fmt::Display for MutexGuard<'_, T> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.inner.fmt(f)
}
}
/// Wrapper around [`std::sync::Condvar`].
///
/// Allows `TracingMutexGuard` to be used with a `Condvar`. Unlike other structs in this module,
/// this wrapper does not add any additional dependency tracking or other overhead on top of the
/// primitive it wraps. All dependency tracking happens through the mutexes themselves.
///
/// # Panics
///
/// This struct does not add any panics over the base implementation of `Condvar`, but panics due to
/// dependency tracking may poison associated mutexes.
///
/// # Examples
///
/// ```
/// use std::sync::Arc;
/// use std::thread;
///
/// use tracing_mutex::stdsync::tracing::{Condvar, Mutex};
///
/// let pair = Arc::new((Mutex::new(false), Condvar::new()));
/// let pair2 = Arc::clone(&pair);
///
/// // Spawn a thread that will unlock the condvar
/// thread::spawn(move || {
/// let (lock, condvar) = &*pair2;
/// *lock.lock().unwrap() = true;
/// condvar.notify_one();
/// });
///
/// // Wait until the thread unlocks the condvar
/// let (lock, condvar) = &*pair;
/// let guard = lock.lock().unwrap();
/// let guard = condvar.wait_while(guard, |started| !*started).unwrap();
///
/// // Guard should read true now
/// assert!(*guard);
/// ```
#[derive(Debug, Default)]
pub struct Condvar(sync::Condvar);
impl Condvar {
/// Creates a new condition variable which is ready to be waited on and notified.
pub const fn new() -> Self {
Self(sync::Condvar::new())
}
/// Wrapper for [`std::sync::Condvar::wait`].
pub fn wait<'a, T>(&self, guard: MutexGuard<'a, T>) -> LockResult<MutexGuard<'a, T>> {
let MutexGuard { _mutex, inner } = guard;
map_lockresult(self.0.wait(inner), |inner| MutexGuard { _mutex, inner })
}
/// Wrapper for [`std::sync::Condvar::wait_while`].
pub fn wait_while<'a, T, F>(
&self,
guard: MutexGuard<'a, T>,
condition: F,
) -> LockResult<MutexGuard<'a, T>>
where
F: FnMut(&mut T) -> bool,
{
let MutexGuard { _mutex, inner } = guard;
map_lockresult(self.0.wait_while(inner, condition), |inner| MutexGuard {
_mutex,
inner,
})
}
/// Wrapper for [`std::sync::Condvar::wait_timeout`].
pub fn wait_timeout<'a, T>(
&self,
guard: MutexGuard<'a, T>,
dur: Duration,
) -> LockResult<(MutexGuard<'a, T>, WaitTimeoutResult)> {
let MutexGuard { _mutex, inner } = guard;
map_lockresult(self.0.wait_timeout(inner, dur), |(inner, result)| {
(MutexGuard { _mutex, inner }, result)
})
}
/// Wrapper for [`std::sync::Condvar::wait_timeout_while`].
pub fn wait_timeout_while<'a, T, F>(
&self,
guard: MutexGuard<'a, T>,
dur: Duration,
condition: F,
) -> LockResult<(MutexGuard<'a, T>, WaitTimeoutResult)>
where
F: FnMut(&mut T) -> bool,
{
let MutexGuard { _mutex, inner } = guard;
map_lockresult(
self.0.wait_timeout_while(inner, dur, condition),
|(inner, result)| (MutexGuard { _mutex, inner }, result),
)
}
/// Wrapper for [`std::sync::Condvar::notify_one`].
pub fn notify_one(&self) {
self.0.notify_one();
}
/// Wrapper for [`std::sync::Condvar::notify_all`].
pub fn notify_all(&self) {
self.0.notify_all();
}
}
/// Wrapper for [`std::sync::RwLock`].
#[derive(Debug, Default)]
pub struct RwLock<T> {
inner: sync::RwLock<T>,
id: LazyMutexId,
}
/// Hybrid wrapper for both [`std::sync::RwLockReadGuard`] and [`std::sync::RwLockWriteGuard`].
///
/// Please refer to [`RwLockReadGuard`] and [`RwLockWriteGuard`] for usable types.
#[derive(Debug)]
pub struct TracingRwLockGuard<'a, L> {
inner: L,
_mutex: BorrowedMutex<'a>,
}
/// Wrapper around [`std::sync::RwLockReadGuard`].
pub type RwLockReadGuard<'a, T> = TracingRwLockGuard<'a, sync::RwLockReadGuard<'a, T>>;
/// Wrapper around [`std::sync::RwLockWriteGuard`].
pub type RwLockWriteGuard<'a, T> = TracingRwLockGuard<'a, sync::RwLockWriteGuard<'a, T>>;
impl<T> RwLock<T> {
pub const fn new(t: T) -> Self {
Self {
inner: sync::RwLock::new(t),
id: LazyMutexId::new(),
}
}
/// Wrapper for [`std::sync::RwLock::read`].
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
#[track_caller]
pub fn read(&self) -> LockResult<RwLockReadGuard<T>> {
let mutex = self.id.get_borrowed();
let result = self.inner.read();
map_lockresult(result, |inner| TracingRwLockGuard {
inner,
_mutex: mutex,
})
}
/// Wrapper for [`std::sync::RwLock::write`].
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
#[track_caller]
pub fn write(&self) -> LockResult<RwLockWriteGuard<T>> {
let mutex = self.id.get_borrowed();
let result = self.inner.write();
map_lockresult(result, |inner| TracingRwLockGuard {
inner,
_mutex: mutex,
})
}
/// Wrapper for [`std::sync::RwLock::try_read`].
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
#[track_caller]
pub fn try_read(&self) -> TryLockResult<RwLockReadGuard<T>> {
let mutex = self.id.get_borrowed();
let result = self.inner.try_read();
map_trylockresult(result, |inner| TracingRwLockGuard {
inner,
_mutex: mutex,
})
}
/// Wrapper for [`std::sync::RwLock::try_write`].
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
#[track_caller]
pub fn try_write(&self) -> TryLockResult<RwLockWriteGuard<T>> {
let mutex = self.id.get_borrowed();
let result = self.inner.try_write();
map_trylockresult(result, |inner| TracingRwLockGuard {
inner,
_mutex: mutex,
})
}
/// Return a mutable reference to the underlying data.
///
/// This method does not block as the locking is handled compile-time by the type system.
pub fn get_mut(&mut self) -> LockResult<&mut T> {
self.inner.get_mut()
}
/// Unwrap the mutex and return its inner value.
pub fn into_inner(self) -> LockResult<T> {
self.inner.into_inner()
}
}
impl<T> From<T> for RwLock<T> {
fn from(t: T) -> Self {
Self::new(t)
}
}
impl<L, T> Deref for TracingRwLockGuard<'_, L>
where
L: Deref<Target = T>,
{
type Target = T;
fn deref(&self) -> &Self::Target {
self.inner.deref()
}
}
impl<T, L> DerefMut for TracingRwLockGuard<'_, L>
where
L: Deref<Target = T> + DerefMut,
{
fn deref_mut(&mut self) -> &mut Self::Target {
self.inner.deref_mut()
}
}
/// Wrapper around [`std::sync::Once`].
///
/// Refer to the [crate-level][`crate`] documentation for the differences between this struct
/// and the one it wraps.
#[derive(Debug)]
pub struct Once {
inner: sync::Once,
mutex_id: LazyMutexId,
}
// New without default is intentional, `std::sync::Once` doesn't implement it either
#[allow(clippy::new_without_default)]
impl Once {
/// Create a new `Once` value.
pub const fn new() -> Self {
Self {
inner: sync::Once::new(),
mutex_id: LazyMutexId::new(),
}
}
/// Wrapper for [`std::sync::Once::call_once`].
///
/// # Panics
///
/// In addition to the panics that `Once` can cause, this method will panic if calling it
/// introduces a cycle in the lock dependency graph.
pub fn call_once<F>(&self, f: F)
where
F: FnOnce(),
{
self.mutex_id.with_held(|| self.inner.call_once(f))
}
/// Performs the same operation as [`call_once`][Once::call_once] except it ignores
/// poisoning.
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
pub fn call_once_force<F>(&self, f: F)
where
F: FnOnce(&OnceState),
{
self.mutex_id.with_held(|| self.inner.call_once_force(f))
}
/// Returns true if some `call_once` has completed successfully.
pub fn is_completed(&self) -> bool {
self.inner.is_completed()
}
}
/// Wrapper for [`std::sync::OnceLock`]
///
/// The exact locking behaviour of [`std::sync::OnceLock`] is currently undefined, but may
/// deadlock in the event of reentrant initialization attempts. This wrapper participates in
/// cycle detection as normal and will therefore panic in the event of reentrancy.
///
/// Most of this primitive's methods do not involve locking and as such are simply passed
/// through to the inner implementation.
///
/// # Examples
///
/// ```
/// use tracing_mutex::stdsync::tracing::OnceLock;
///
/// static LOCK: OnceLock<i32> = OnceLock::new();
/// assert!(LOCK.get().is_none());
///
/// std::thread::spawn(|| {
/// let value: &i32 = LOCK.get_or_init(|| 42);
/// assert_eq!(value, &42);
/// }).join().unwrap();
///
/// let value: Option<&i32> = LOCK.get();
/// assert_eq!(value, Some(&42));
/// ```
#[derive(Debug)]
pub struct OnceLock<T> {
id: LazyMutexId,
inner: sync::OnceLock<T>,
}
// N.B. this impl inlines everything that directly calls the inner implementation as there
// should be 0 overhead to doing so.
impl<T> OnceLock<T> {
/// Creates a new empty cell
pub const fn new() -> Self {
Self {
id: LazyMutexId::new(),
inner: sync::OnceLock::new(),
}
}
/// Gets a reference to the underlying value.
///
/// This method does not attempt to lock and therefore does not participate in cycle
/// detection.
#[inline]
pub fn get(&self) -> Option<&T> {
self.inner.get()
}
/// Gets a mutable reference to the underlying value.
///
/// This method does not attempt to lock and therefore does not participate in cycle
/// detection.
#[inline]
pub fn get_mut(&mut self) -> Option<&mut T> {
self.inner.get_mut()
}
/// Sets the contents of this cell to the underlying value
///
/// As this method may block until initialization is complete, it participates in cycle
/// detection.
pub fn set(&self, value: T) -> Result<(), T> {
self.id.with_held(|| self.inner.set(value))
}
/// Gets the contents of the cell, initializing it with `f` if the cell was empty.
///
/// This method participates in cycle detection. Reentrancy is considered a cycle.
pub fn get_or_init<F>(&self, f: F) -> &T
where
F: FnOnce() -> T,
{
self.id.with_held(|| self.inner.get_or_init(f))
}
/// Takes the value out of this `OnceLock`, moving it back to an uninitialized state.
///
/// This method does not attempt to lock and therefore does not participate in cycle
/// detection.
#[inline]
pub fn take(&mut self) -> Option<T> {
self.inner.take()
}
/// Consumes the `OnceLock`, returning the wrapped value. Returns None if the cell was
/// empty.
///
/// This method does not attempt to lock and therefore does not participate in cycle
/// detection.
#[inline]
pub fn into_inner(mut self) -> Option<T> {
self.take()
}
}
impl<T> Default for OnceLock<T> {
#[inline]
fn default() -> Self {
Self::new()
}
}
impl<T: PartialEq> PartialEq for OnceLock<T> {
#[inline]
fn eq(&self, other: &Self) -> bool {
self.inner == other.inner
}
}
impl<T: Eq> Eq for OnceLock<T> {}
impl<T: Clone> Clone for OnceLock<T> {
fn clone(&self) -> Self {
Self {
id: LazyMutexId::new(),
inner: self.inner.clone(),
}
}
}
impl<T> From<T> for OnceLock<T> {
#[inline]
fn from(value: T) -> Self {
Self {
id: LazyMutexId::new(),
inner: sync::OnceLock::from(value),
}
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use std::thread;
use super::*;
#[test]
fn test_mutex_usage() {
let mutex = Arc::new(Mutex::new(0));
assert_eq!(*mutex.lock().unwrap(), 0);
*mutex.lock().unwrap() = 1;
assert_eq!(*mutex.lock().unwrap(), 1);
let mutex_clone = mutex.clone();
let _guard = mutex.lock().unwrap();
// Now try to cause a blocking exception in another thread
let handle = thread::spawn(move || {
let result = mutex_clone.try_lock().unwrap_err();
assert!(matches!(result, TryLockError::WouldBlock));
});
handle.join().unwrap();
}
#[test]
fn test_rwlock_usage() {
let rwlock = Arc::new(RwLock::new(0));
assert_eq!(*rwlock.read().unwrap(), 0);
assert_eq!(*rwlock.write().unwrap(), 0);
*rwlock.write().unwrap() = 1;
assert_eq!(*rwlock.read().unwrap(), 1);
assert_eq!(*rwlock.write().unwrap(), 1);
let rwlock_clone = rwlock.clone();
let _read_lock = rwlock.read().unwrap();
// Now try to cause a blocking exception in another thread
let handle = thread::spawn(move || {
let write_result = rwlock_clone.try_write().unwrap_err();
assert!(matches!(write_result, TryLockError::WouldBlock));
// Should be able to get a read lock just fine.
let _read_lock = rwlock_clone.read().unwrap();
});
handle.join().unwrap();
}
#[test]
fn test_once_usage() {
let once = Arc::new(Once::new());
let once_clone = once.clone();
assert!(!once.is_completed());
let handle = thread::spawn(move || {
assert!(!once_clone.is_completed());
once_clone.call_once(|| {});
assert!(once_clone.is_completed());
});
handle.join().unwrap();
assert!(once.is_completed());
}
#[test]
#[should_panic(expected = "Found cycle in mutex dependency graph")]
fn test_detect_cycle() {
let a = Mutex::new(());
let b = Mutex::new(());
let hold_a = a.lock().unwrap();
let _ = b.lock();
drop(hold_a);
let _hold_b = b.lock().unwrap();
let _ = a.lock();
}
}
}
pub mod tracing;
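
The `debug_assertions`-gated re-exports above are what make `stdsync` usable as a drop-in for `std::sync`; a minimal hedged sketch (not part of the repository):

```rust
use tracing_mutex::stdsync::Mutex;

fn main() {
    // With debug assertions enabled this is the tracing wrapper with
    // dependency tracking; in release builds it is std::sync::Mutex.
    let counter = Mutex::new(0u32);
    *counter.lock().unwrap() += 1;
    assert_eq!(*counter.lock().unwrap(), 1);
}
```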

src/stdsync/tracing.rs (new file, 748 lines added)
View File

@@ -0,0 +1,748 @@
use std::fmt;
use std::ops::Deref;
use std::ops::DerefMut;
use std::sync;
use std::sync::LockResult;
use std::sync::OnceState;
use std::sync::PoisonError;
use std::sync::TryLockError;
use std::sync::TryLockResult;
use std::sync::WaitTimeoutResult;
use std::time::Duration;
use crate::BorrowedMutex;
use crate::LazyMutexId;
use crate::util::PrivateTraced;
#[cfg(has_std__sync__LazyLock)]
pub use lazy_lock::LazyLock;
#[cfg(has_std__sync__LazyLock)]
mod lazy_lock;
/// Wrapper for [`std::sync::Mutex`].
///
/// Refer to the [crate-level][`crate`] documentation for the differences between this struct and
/// the one it wraps.
#[derive(Debug, Default)]
pub struct Mutex<T: ?Sized> {
id: LazyMutexId,
inner: sync::Mutex<T>,
}
/// Wrapper for [`std::sync::MutexGuard`].
///
/// Refer to the [crate-level][`crate`] documentation for the differences between this struct and
/// the one it wraps.
pub struct MutexGuard<'a, T: ?Sized> {
inner: sync::MutexGuard<'a, T>,
_mutex: BorrowedMutex<'a>,
}
fn map_lockresult<T, I, F>(result: LockResult<I>, mapper: F) -> LockResult<T>
where
F: FnOnce(I) -> T,
{
match result {
Ok(inner) => Ok(mapper(inner)),
Err(poisoned) => Err(PoisonError::new(mapper(poisoned.into_inner()))),
}
}
fn map_trylockresult<T, I, F>(result: TryLockResult<I>, mapper: F) -> TryLockResult<T>
where
F: FnOnce(I) -> T,
{
match result {
Ok(inner) => Ok(mapper(inner)),
Err(TryLockError::WouldBlock) => Err(TryLockError::WouldBlock),
Err(TryLockError::Poisoned(poisoned)) => {
Err(PoisonError::new(mapper(poisoned.into_inner())).into())
}
}
}
impl<T> Mutex<T> {
/// Create a new tracing mutex with the provided value.
pub const fn new(t: T) -> Self {
Self {
inner: sync::Mutex::new(t),
id: LazyMutexId::new(),
}
}
}
impl<T: ?Sized> Mutex<T> {
/// Wrapper for [`std::sync::Mutex::lock`].
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
#[track_caller]
pub fn lock(&self) -> LockResult<MutexGuard<'_, T>> {
let mutex = self.id.get_borrowed();
let result = self.inner.lock();
let mapper = |guard| MutexGuard {
_mutex: mutex,
inner: guard,
};
map_lockresult(result, mapper)
}
/// Wrapper for [`std::sync::Mutex::try_lock`].
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
#[track_caller]
pub fn try_lock(&self) -> TryLockResult<MutexGuard<'_, T>> {
let mutex = self.id.get_borrowed();
let result = self.inner.try_lock();
let mapper = |guard| MutexGuard {
_mutex: mutex,
inner: guard,
};
map_trylockresult(result, mapper)
}
/// Wrapper for [`std::sync::Mutex::is_poisoned`].
pub fn is_poisoned(&self) -> bool {
self.inner.is_poisoned()
}
/// Return a mutable reference to the underlying data.
///
/// This method does not block as the locking is handled compile-time by the type system.
pub fn get_mut(&mut self) -> LockResult<&mut T> {
self.inner.get_mut()
}
/// Unwrap the mutex and return its inner value.
pub fn into_inner(self) -> LockResult<T>
where
T: Sized,
{
self.inner.into_inner()
}
}
impl<T: ?Sized> PrivateTraced for Mutex<T> {
fn get_id(&self) -> &crate::MutexId {
&self.id
}
}
impl<T> From<T> for Mutex<T> {
fn from(t: T) -> Self {
Self::new(t)
}
}
impl<T: ?Sized> Deref for MutexGuard<'_, T> {
type Target = T;
#[inline]
fn deref(&self) -> &Self::Target {
&self.inner
}
}
impl<T: ?Sized> DerefMut for MutexGuard<'_, T> {
#[inline]
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.inner
}
}
impl<T: ?Sized + fmt::Debug> fmt::Debug for MutexGuard<'_, T> {
#[inline]
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.inner.fmt(f)
}
}
impl<T: fmt::Display + ?Sized> fmt::Display for MutexGuard<'_, T> {
#[inline]
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.inner.fmt(f)
}
}
/// Wrapper around [`std::sync::Condvar`].
///
/// Allows `TracingMutexGuard` to be used with a `Condvar`. Unlike other structs in this module,
/// this wrapper does not add any additional dependency tracking or other overhead on top of the
/// primitive it wraps. All dependency tracking happens through the mutexes themselves.
///
/// # Panics
///
/// This struct does not add any panics over the base implementation of `Condvar`, but panics due to
/// dependency tracking may poison associated mutexes.
///
/// # Examples
///
/// ```
/// use std::sync::Arc;
/// use std::thread;
///
/// use tracing_mutex::stdsync::tracing::{Condvar, Mutex};
///
/// let pair = Arc::new((Mutex::new(false), Condvar::new()));
/// let pair2 = Arc::clone(&pair);
///
/// // Spawn a thread that will unlock the condvar
/// thread::spawn(move || {
/// let (lock, condvar) = &*pair2;
/// *lock.lock().unwrap() = true;
/// condvar.notify_one();
/// });
///
/// // Wait until the thread unlocks the condvar
/// let (lock, condvar) = &*pair;
/// let guard = lock.lock().unwrap();
/// let guard = condvar.wait_while(guard, |started| !*started).unwrap();
///
/// // Guard should read true now
/// assert!(*guard);
/// ```
#[derive(Debug, Default)]
pub struct Condvar(sync::Condvar);
impl Condvar {
/// Creates a new condition variable which is ready to be waited on and notified.
pub const fn new() -> Self {
Self(sync::Condvar::new())
}
/// Wrapper for [`std::sync::Condvar::wait`].
pub fn wait<'a, T>(&self, guard: MutexGuard<'a, T>) -> LockResult<MutexGuard<'a, T>> {
let MutexGuard { _mutex, inner } = guard;
map_lockresult(self.0.wait(inner), |inner| MutexGuard { _mutex, inner })
}
/// Wrapper for [`std::sync::Condvar::wait_while`].
pub fn wait_while<'a, T, F>(
&self,
guard: MutexGuard<'a, T>,
condition: F,
) -> LockResult<MutexGuard<'a, T>>
where
F: FnMut(&mut T) -> bool,
{
let MutexGuard { _mutex, inner } = guard;
map_lockresult(self.0.wait_while(inner, condition), |inner| MutexGuard {
_mutex,
inner,
})
}
/// Wrapper for [`std::sync::Condvar::wait_timeout`].
pub fn wait_timeout<'a, T>(
&self,
guard: MutexGuard<'a, T>,
dur: Duration,
) -> LockResult<(MutexGuard<'a, T>, WaitTimeoutResult)> {
let MutexGuard { _mutex, inner } = guard;
map_lockresult(self.0.wait_timeout(inner, dur), |(inner, result)| {
(MutexGuard { _mutex, inner }, result)
})
}
/// Wrapper for [`std::sync::Condvar::wait_timeout_while`].
pub fn wait_timeout_while<'a, T, F>(
&self,
guard: MutexGuard<'a, T>,
dur: Duration,
condition: F,
) -> LockResult<(MutexGuard<'a, T>, WaitTimeoutResult)>
where
F: FnMut(&mut T) -> bool,
{
let MutexGuard { _mutex, inner } = guard;
map_lockresult(
self.0.wait_timeout_while(inner, dur, condition),
|(inner, result)| (MutexGuard { _mutex, inner }, result),
)
}
/// Wrapper for [`std::sync::Condvar::notify_one`].
pub fn notify_one(&self) {
self.0.notify_one();
}
/// Wrapper for [`std::sync::Condvar::notify_all`].
pub fn notify_all(&self) {
self.0.notify_all();
}
}
/// Wrapper for [`std::sync::RwLock`].
#[derive(Debug, Default)]
pub struct RwLock<T: ?Sized> {
id: LazyMutexId,
inner: sync::RwLock<T>,
}
/// Hybrid wrapper for both [`std::sync::RwLockReadGuard`] and [`std::sync::RwLockWriteGuard`].
///
/// Please refer to [`RwLockReadGuard`] and [`RwLockWriteGuard`] for usable types.
pub struct TracingRwLockGuard<'a, L> {
inner: L,
_mutex: BorrowedMutex<'a>,
}
/// Wrapper around [`std::sync::RwLockReadGuard`].
pub type RwLockReadGuard<'a, T> = TracingRwLockGuard<'a, sync::RwLockReadGuard<'a, T>>;
/// Wrapper around [`std::sync::RwLockWriteGuard`].
pub type RwLockWriteGuard<'a, T> = TracingRwLockGuard<'a, sync::RwLockWriteGuard<'a, T>>;
impl<T> RwLock<T> {
pub const fn new(t: T) -> Self {
Self {
inner: sync::RwLock::new(t),
id: LazyMutexId::new(),
}
}
}
impl<T: ?Sized> RwLock<T> {
/// Wrapper for [`std::sync::RwLock::read`].
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
#[track_caller]
pub fn read(&self) -> LockResult<RwLockReadGuard<'_, T>> {
let mutex = self.id.get_borrowed();
let result = self.inner.read();
map_lockresult(result, |inner| TracingRwLockGuard {
inner,
_mutex: mutex,
})
}
/// Wrapper for [`std::sync::RwLock::write`].
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
#[track_caller]
pub fn write(&self) -> LockResult<RwLockWriteGuard<'_, T>> {
let mutex = self.id.get_borrowed();
let result = self.inner.write();
map_lockresult(result, |inner| TracingRwLockGuard {
inner,
_mutex: mutex,
})
}
/// Wrapper for [`std::sync::RwLock::try_read`].
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
#[track_caller]
pub fn try_read(&self) -> TryLockResult<RwLockReadGuard<'_, T>> {
let mutex = self.id.get_borrowed();
let result = self.inner.try_read();
map_trylockresult(result, |inner| TracingRwLockGuard {
inner,
_mutex: mutex,
})
}
/// Wrapper for [`std::sync::RwLock::try_write`].
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
#[track_caller]
pub fn try_write(&self) -> TryLockResult<RwLockWriteGuard<'_, T>> {
let mutex = self.id.get_borrowed();
let result = self.inner.try_write();
map_trylockresult(result, |inner| TracingRwLockGuard {
inner,
_mutex: mutex,
})
}
/// Return a mutable reference to the underlying data.
///
/// This method does not block as the locking is handled compile-time by the type system.
pub fn get_mut(&mut self) -> LockResult<&mut T> {
self.inner.get_mut()
}
/// Unwrap the mutex and return its inner value.
pub fn into_inner(self) -> LockResult<T>
where
T: Sized,
{
self.inner.into_inner()
}
}
impl<T: ?Sized> PrivateTraced for RwLock<T> {
fn get_id(&self) -> &crate::MutexId {
&self.id
}
}
impl<T> From<T> for RwLock<T> {
fn from(t: T) -> Self {
Self::new(t)
}
}
impl<L, T> Deref for TracingRwLockGuard<'_, L>
where
T: ?Sized,
L: Deref<Target = T>,
{
type Target = T;
#[inline]
fn deref(&self) -> &Self::Target {
self.inner.deref()
}
}
impl<L, T> DerefMut for TracingRwLockGuard<'_, L>
where
T: ?Sized,
L: Deref<Target = T> + DerefMut,
{
#[inline]
fn deref_mut(&mut self) -> &mut Self::Target {
self.inner.deref_mut()
}
}
impl<L> fmt::Debug for TracingRwLockGuard<'_, L>
where
L: fmt::Debug,
{
#[inline]
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.inner.fmt(f)
}
}
impl<L> fmt::Display for TracingRwLockGuard<'_, L>
where
L: fmt::Display,
{
#[inline]
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.inner.fmt(f)
}
}
/// Wrapper around [`std::sync::Once`].
///
/// Refer to the [crate-level][`crate`] documentation for the differences between this struct
/// and the one it wraps.
#[derive(Debug)]
pub struct Once {
inner: sync::Once,
mutex_id: LazyMutexId,
}
// New without default is intentional, `std::sync::Once` doesn't implement it either
#[allow(clippy::new_without_default)]
impl Once {
/// Create a new `Once` value.
pub const fn new() -> Self {
Self {
inner: sync::Once::new(),
mutex_id: LazyMutexId::new(),
}
}
/// Wrapper for [`std::sync::Once::call_once`].
///
/// # Panics
///
/// In addition to the panics that `Once` can cause, this method will panic if calling it
/// introduces a cycle in the lock dependency graph.
pub fn call_once<F>(&self, f: F)
where
F: FnOnce(),
{
self.mutex_id.with_held(|| self.inner.call_once(f))
}
/// Performs the same operation as [`call_once`][Once::call_once] except it ignores
/// poisoning.
///
/// # Panics
///
/// This method participates in lock dependency tracking. If acquiring this lock introduces a
/// dependency cycle, this method will panic.
pub fn call_once_force<F>(&self, f: F)
where
F: FnOnce(&OnceState),
{
self.mutex_id.with_held(|| self.inner.call_once_force(f))
}
/// Returns true if some `call_once` has completed successfully.
pub fn is_completed(&self) -> bool {
self.inner.is_completed()
}
}
impl PrivateTraced for Once {
fn get_id(&self) -> &crate::MutexId {
&self.mutex_id
}
}
/// Wrapper for [`std::sync::OnceLock`]
///
/// The exact locking behaviour of [`std::sync::OnceLock`] is currently undefined, but may
/// deadlock in the event of reentrant initialization attempts. This wrapper participates in
/// cycle detection as normal and will therefore panic in the event of reentrancy.
///
/// Most of this primitive's methods do not involve locking and as such are simply passed
/// through to the inner implementation.
///
/// # Examples
///
/// ```
/// use tracing_mutex::stdsync::tracing::OnceLock;
///
/// static LOCK: OnceLock<i32> = OnceLock::new();
/// assert!(LOCK.get().is_none());
///
/// std::thread::spawn(|| {
/// let value: &i32 = LOCK.get_or_init(|| 42);
/// assert_eq!(value, &42);
/// }).join().unwrap();
///
/// let value: Option<&i32> = LOCK.get();
/// assert_eq!(value, Some(&42));
/// ```
#[derive(Debug)]
pub struct OnceLock<T> {
id: LazyMutexId,
inner: sync::OnceLock<T>,
}
// N.B. this impl inlines everything that directly calls the inner implementation as there
// should be 0 overhead to doing so.
impl<T> OnceLock<T> {
/// Creates a new empty cell
pub const fn new() -> Self {
Self {
id: LazyMutexId::new(),
inner: sync::OnceLock::new(),
}
}
/// Gets a reference to the underlying value.
///
/// This method does not attempt to lock and therefore does not participate in cycle
/// detection.
#[inline]
pub fn get(&self) -> Option<&T> {
self.inner.get()
}
/// Gets a mutable reference to the underlying value.
///
/// This method does not attempt to lock and therefore does not participate in cycle
/// detection.
#[inline]
pub fn get_mut(&mut self) -> Option<&mut T> {
self.inner.get_mut()
}
/// Sets the contents of this cell to the underlying value
///
/// As this method may block until initialization is complete, it participates in cycle
/// detection.
pub fn set(&self, value: T) -> Result<(), T> {
self.id.with_held(|| self.inner.set(value))
}
/// Gets the contents of the cell, initializing it with `f` if the cell was empty.
///
/// This method participates in cycle detection. Reentrancy is considered a cycle.
pub fn get_or_init<F>(&self, f: F) -> &T
where
F: FnOnce() -> T,
{
self.id.with_held(|| self.inner.get_or_init(f))
}
/// Takes the value out of this `OnceLock`, moving it back to an uninitialized state.
///
/// This method does not attempt to lock and therefore does not participate in cycle
/// detection.
#[inline]
pub fn take(&mut self) -> Option<T> {
self.inner.take()
}
/// Consumes the `OnceLock`, returning the wrapped value. Returns None if the cell was
/// empty.
///
/// This method does not attempt to lock and therefore does not participate in cycle
/// detection.
#[inline]
pub fn into_inner(mut self) -> Option<T> {
self.take()
}
}
impl<T> PrivateTraced for OnceLock<T> {
fn get_id(&self) -> &crate::MutexId {
&self.id
}
}
impl<T> Default for OnceLock<T> {
#[inline]
fn default() -> Self {
Self::new()
}
}
impl<T: PartialEq> PartialEq for OnceLock<T> {
#[inline]
fn eq(&self, other: &Self) -> bool {
self.inner == other.inner
}
}
impl<T: Eq> Eq for OnceLock<T> {}
impl<T: Clone> Clone for OnceLock<T> {
fn clone(&self) -> Self {
Self {
id: LazyMutexId::new(),
inner: self.inner.clone(),
}
}
}
impl<T> From<T> for OnceLock<T> {
#[inline]
fn from(value: T) -> Self {
Self {
id: LazyMutexId::new(),
inner: sync::OnceLock::from(value),
}
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use std::thread;
use super::*;
#[test]
fn test_mutex_usage() {
let mutex = Arc::new(Mutex::new(0));
assert_eq!(*mutex.lock().unwrap(), 0);
*mutex.lock().unwrap() = 1;
assert_eq!(*mutex.lock().unwrap(), 1);
let mutex_clone = mutex.clone();
let _guard = mutex.lock().unwrap();
// Now try to cause a blocking exception in another thread
let handle = thread::spawn(move || {
let result = mutex_clone.try_lock().unwrap_err();
assert!(matches!(result, TryLockError::WouldBlock));
});
handle.join().unwrap();
}
#[test]
fn test_rwlock_usage() {
let rwlock = Arc::new(RwLock::new(0));
assert_eq!(*rwlock.read().unwrap(), 0);
assert_eq!(*rwlock.write().unwrap(), 0);
*rwlock.write().unwrap() = 1;
assert_eq!(*rwlock.read().unwrap(), 1);
assert_eq!(*rwlock.write().unwrap(), 1);
let rwlock_clone = rwlock.clone();
let _read_lock = rwlock.read().unwrap();
// Now try to cause a blocking exception in another thread
let handle = thread::spawn(move || {
let write_result = rwlock_clone.try_write().unwrap_err();
assert!(matches!(write_result, TryLockError::WouldBlock));
// Should be able to get a read lock just fine.
let _read_lock = rwlock_clone.read().unwrap();
});
handle.join().unwrap();
}
#[test]
fn test_once_usage() {
let once = Arc::new(Once::new());
let once_clone = once.clone();
assert!(!once.is_completed());
let handle = thread::spawn(move || {
assert!(!once_clone.is_completed());
once_clone.call_once(|| {});
assert!(once_clone.is_completed());
});
handle.join().unwrap();
assert!(once.is_completed());
}
#[test]
#[should_panic(expected = "Found cycle in mutex dependency graph")]
fn test_detect_cycle() {
let a = Mutex::new(());
let b = Mutex::new(());
let hold_a = a.lock().unwrap();
let _ = b.lock();
drop(hold_a);
let _hold_b = b.lock().unwrap();
let _ = a.lock();
}
}

src/util.rs (new file, 67 lines added)
View File

@@ -0,0 +1,67 @@
//! Utilities related to the internals of dependency tracking.
use crate::MutexId;
/// Reset the dependencies for the given entity.
///
/// # Performance
///
/// This function locks the dependency graph to remove the item from it. This is an `O(E)` operation
/// with `E` being the number of dependencies directly associated with this particular instance. As
/// such, it is not advisable to call this method from a hot loop.
///
/// # Safety
///
/// Use of this method invalidates the deadlock prevention guarantees that this library makes. As
/// such, it should only be used when it is absolutely certain this will not introduce deadlocks
/// later.
///
/// Other than deadlocks, no undefined behaviour can result from the use of this function.
///
/// # Example
///
/// ```
/// use tracing_mutex::stdsync::Mutex;
/// use tracing_mutex::util;
///
/// let first = Mutex::new(());
/// let second = Mutex::new(());
///
/// {
/// let _first_lock = first.lock().unwrap();
/// second.lock().unwrap();
/// }
///
/// // Reset the dependencies for the first mutex
/// unsafe { util::reset_dependencies(&first) };
///
/// // Now we can lock the mutexes in the opposite order without a panic.
/// let _second_lock = second.lock().unwrap();
/// first.lock().unwrap();
/// ```
#[cfg(feature = "experimental")]
#[cfg_attr(docsrs, doc(cfg(feature = "experimental")))]
pub unsafe fn reset_dependencies<T: Traced>(traced: &T) {
crate::get_dependency_graph().remove_node(traced.get_id().value());
}
/// Types that participate in dependency tracking
///
/// This trait is a public marker trait and is automatically implemented for all types that
/// implement the internal dependency tracking features.
#[cfg(feature = "experimental")]
#[cfg_attr(docsrs, doc(cfg(feature = "experimental")))]
#[allow(private_bounds)]
pub trait Traced: PrivateTraced {}
#[cfg(feature = "experimental")]
#[cfg_attr(docsrs, doc(cfg(feature = "experimental")))]
impl<T: PrivateTraced> Traced for T {}
/// Private implementation of the traced marker.
///
/// This trait is private (and seals the outer trait) to avoid exposing the MutexId type.
#[cfg_attr(not(feature = "experimental"), allow(unused))]
pub(crate) trait PrivateTraced {
/// Get the mutex id associated with this traced item.
fn get_id(&self) -> &MutexId;
}