4 Commits

9ca5af2c82  Merge pull request #36 from bertptrs/example/show-potential  (2023-11-13 08:37:36 +01:00)

74b4fe0bb1  Rewrite example to show potential deadlock  (2023-11-12 18:36:53 +01:00)

    The example originally showed a certain deadlock, which was not as clear
    as it could be. The new version shows intentionally racy code that may
    result in a successful execution but may also deadlock.

6199598944  Merge #34  (bors[bot], 2023-10-06 07:01:48 +00:00)

    34: Fix remaining references to TracingMutex r=bertptrs a=bertptrs

    Thanks to `@ReinierMaas` for noticing.

    Co-authored-by: Bert Peters <bert@bertptrs.nl>

fd75fc453b  Fix remaining references to TracingMutex  (2023-10-06 08:59:21 +02:00)
2 changed files with 58 additions and 22 deletions


@@ -56,10 +56,10 @@ introduce a cyclic dependency between your locks, the operation panics instead.
 immediately notice the cyclic dependency rather than be eventually surprised by it in production.
 
 Mutex tracing is efficient, but it is not completely overhead-free. If you cannot spare the
-performance penalty in your production environment, this library also offers debug-only tracing.
-`DebugMutex`, also found in the `stdsync` module, is a type alias that evaluates to `TracingMutex`
-when debug assertions are enabled, and to `Mutex` when they are not. Similar helper types are
-available for other synchronization primitives.
+performance penalty in your production environment, this library also offers debug-only tracing. The
+type aliases in `tracing_mutex::stdsync` correspond to tracing primitives from
+`tracing_mutex::stdsync::tracing` when debug assertions are enabled, and to primitives from
+`std::sync::Mutex` when they are not. A similar structure exists for other
 
 The minimum supported Rust version is 1.70. Increasing this is not considered a breaking change, but
 will be avoided within semver-compatible releases if possible.
@@ -68,6 +68,7 @@ will be avoided within semver-compatible releases if possible.
 - Dependency-tracking wrappers for all locking primitives
 - Optional opt-out for release mode code
+- Optional backtrace capture to aid with reproducing cyclic mutex chains
 - Support for primitives from:
   - `std::sync`
   - `parking_lot`
@@ -76,7 +77,6 @@ will be avoided within semver-compatible releases if possible.
 ## Future improvements
 
 - Improve performance in lock tracing
-- Optional logging to make debugging easier
 - Better and configurable error handling when detecting cyclic dependencies
 - Support for other locking libraries
 - Support for async locking libraries
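As an aside on the paragraph changed above: the type aliases in `tracing_mutex::stdsync` are meant as drop-in replacements for their `std::sync` counterparts. The standalone sketch below is not taken from the repository; it only uses the calls that also appear in the example diff further down (`Mutex::new`, `lock().unwrap()`), and it mirrors the pre-rewrite example reduced to two locks.

// Sketch: two locks acquired in opposite orders from a single thread.
// With debug assertions enabled, `tracing_mutex::stdsync::Mutex` records the
// acquisition order a -> b and then panics when the reverse order b -> a
// would close a cycle. In a release build the alias resolves to the plain
// std primitive, so the same code runs to completion and the latent ordering
// bug goes unnoticed.
use tracing_mutex::stdsync::Mutex;

fn main() {
    let a = Mutex::new(0);
    let b = Mutex::new(0);

    // Establish the lock order a -> b, then release both locks.
    {
        let _a = a.lock().unwrap();
        let _b = b.lock().unwrap();
    }

    // Acquire the locks in the reverse order. Nothing deadlocks here because
    // both locks are free, but the order b -> a contradicts the order recorded
    // above, so the tracing build panics on the last line.
    let _b = b.lock().unwrap();
    let _a = a.lock().unwrap();
}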


@@ -1,26 +1,62 @@
 //! Show what a crash looks like
 //!
-//! This shows what a traceback of a cycle detection looks like. It is expected to crash.
+//! This shows what a traceback of a cycle detection looks like. It is expected to crash when run in
+//! debug mode, because it might deadlock. In release mode, no tracing is used and the program may
+//! do any of the following:
+//!
+//! - Return a random valuation of `a`, `b`, and `c`. The implementation has a race-condition by
+//!   design. I have observed (4, 3, 6), but also (6, 3, 5).
+//! - Deadlock forever.
+//!
+//! One can increase the SLEEP_TIME constant to increase the likelihood of a deadlock to occur. On
+//! my machine, 1ns of sleep time gives about a 50/50 chance of the program deadlocking.
+use std::thread;
+use std::time::Duration;
+
 use tracing_mutex::stdsync::Mutex;
 
 fn main() {
-    let a = Mutex::new(());
-    let b = Mutex::new(());
-    let c = Mutex::new(());
+    let a = Mutex::new(1);
+    let b = Mutex::new(2);
+    let c = Mutex::new(3);
 
-    // Create an edge from a to b
-    {
-        let _a = a.lock();
-        let _b = b.lock();
-    }
+    // Increase this time to increase the likelihood of a deadlock.
+    const SLEEP_TIME: Duration = Duration::from_nanos(1);
 
-    // Create an edge from b to c
-    {
-        let _b = b.lock();
-        let _c = c.lock();
-    }
+    // Depending on random CPU performance, this section may deadlock, or may return a result. With
+    // tracing enabled, the potential deadlock is always detected and a backtrace should be
+    // produced.
+    thread::scope(|s| {
+        // Create an edge from a to b
+        s.spawn(|| {
+            let a = a.lock().unwrap();
+            thread::sleep(SLEEP_TIME);
+            *b.lock().unwrap() += *a;
+        });
 
-    // Now crash by trying to add an edge from c to a
-    let _c = c.lock();
-    let _a = a.lock(); // This line will crash
+        // Create an edge from b to c
+        s.spawn(|| {
+            let b = b.lock().unwrap();
+            thread::sleep(SLEEP_TIME);
+            *c.lock().unwrap() += *b;
+        });
+
+        // Create an edge from c to a
+        //
+        // N.B. the program can crash on any of the three edges, as there is no guarantee which
+        // thread will execute first. Nevertheless, any one of them is guaranteed to panic with
+        // tracing enabled.
+        s.spawn(|| {
+            let c = c.lock().unwrap();
+            thread::sleep(SLEEP_TIME);
+            *a.lock().unwrap() += *c;
+        });
+    });
+
+    println!(
+        "{}, {}, {}",
+        a.into_inner().unwrap(),
+        b.into_inner().unwrap(),
+        c.into_inner().unwrap()
+    );
 }
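The rewritten example deliberately closes the cycle a -> b -> c -> a across three threads, which is exactly what the tracing build reports. For contrast, here is a sketch (not part of the commit; it uses only APIs that already appear in the diff above) of the same three counters updated under one consistent global lock order, which is the usual way to resolve such a cycle.

use std::thread;
use tracing_mutex::stdsync::Mutex;

fn main() {
    let a = Mutex::new(1);
    let b = Mutex::new(2);
    let c = Mutex::new(3);

    // Every thread that needs two locks takes them in the fixed order
    // a, b, c, so the lock dependency graph stays acyclic: the tracing build
    // has nothing to panic about and the release build cannot deadlock. The
    // final values still depend on scheduling, but the program always ends.
    thread::scope(|s| {
        // Edge a -> b, consistent with the global order.
        s.spawn(|| {
            let a = a.lock().unwrap();
            *b.lock().unwrap() += *a;
        });

        // Edge b -> c, also consistent.
        s.spawn(|| {
            let b = b.lock().unwrap();
            *c.lock().unwrap() += *b;
        });

        // The original example closes the cycle with c -> a here. Taking a
        // before c keeps the global order instead, at the cost of holding a
        // slightly longer.
        s.spawn(|| {
            let mut a = a.lock().unwrap();
            *a += *c.lock().unwrap();
        });
    });

    println!(
        "{}, {}, {}",
        a.into_inner().unwrap(),
        b.into_inner().unwrap(),
        c.into_inner().unwrap()
    );
}

Whether this particular ordering is acceptable depends on the program; the point of the tracing wrappers is that the debug build flags the offending acquisition, so you can decide which edge of the cycle to break.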