Diffstat (limited to 'src/lib.rs')
-rw-r--r--  src/lib.rs | 96
1 file changed, 77 insertions(+), 19 deletions(-)
diff --git a/src/lib.rs b/src/lib.rs
index 673d279..9c39c6d 100644
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -14,7 +14,7 @@
//! # Theory
//!
//! There are four conditions necessary for a deadlock to occur. In order to
-//! prevent deadlocks, we need to prevent one of the following:
+//! prevent deadlocks, we just need to prevent one of the following:
//!
//! 1. mutual exclusion
//! 2. non-preemptive allocation
@@ -27,24 +27,10 @@
//! first. Requesting multiple resources at once is atomic. You either get all
//! the requested resources or none at all.
//!
-//! # Performance
-//!
-//! **Avoid [`LockCollection::try_new`].** This constructor will check to make
-//! sure that the collection contains no duplicate locks. This is an O(n^2)
-//! operation, where n is the number of locks in the collection.
-//! [`LockCollection::new`] and [`LockCollection::new_ref`] don't need these
-//! checks because they use [`OwnedLockable`], which is guaranteed to be unique
-//! as long as it is accessible. As a last resort,
-//! [`LockCollection::new_unchecked`] doesn't do this check, but is unsafe to
-//! call.
-//!
-//! **Avoid using distinct lock orders for [`LockCollection`].** The problem is
-//! that this library must iterate through the list of locks, and not complete
-//! until every single one of them is unlocked. This also means that attempting
-//! to lock multiple mutexes gives you a lower chance of ever running. Only one
-//! needs to be locked for the operation to need a reset. This problem can be
-//! prevented by not doing that in your code. Resources should be obtained in
-//! the same order on every thread.
+//! As an optimization, this library also often prevents **circular wait**.
+//! Many collections sort the locks in order of their memory address. As long
+//! as the locks are always acquired in that order, no time needs to be wasted
+//! on releasing locks after a failure and re-acquiring them later.
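The address-ordering idea can be sketched outside the crate with plain `std::sync` mutexes. This is an illustrative sketch, not happylock's actual implementation; the function name is invented here:

```rust
use std::sync::{Mutex, MutexGuard};

// Acquire two mutexes in order of their memory address. Because every
// thread uses the same global order, circular wait cannot occur.
fn lock_in_address_order<'a, T>(
    a: &'a Mutex<T>,
    b: &'a Mutex<T>,
) -> (MutexGuard<'a, T>, MutexGuard<'a, T>) {
    let (first, second) = if (a as *const Mutex<T>) < (b as *const Mutex<T>) {
        (a, b)
    } else {
        (b, a)
    };
    (first.lock().unwrap(), second.lock().unwrap())
}
```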
//!
//! # Examples
//!
@@ -105,6 +91,78 @@
//! println!("{}", *data.0);
//! println!("{}", *data.1);
//! ```
+//!
+//! In many cases, the [`LockCollection::new`] or [`LockCollection::new_ref`]
+//! method can be used, improving performance.
+//!
+//! ```rust
+//! use std::thread;
+//! use happylock::{LockCollection, Mutex, ThreadKey};
+//!
+//! const N: usize = 100;
+//!
+//! static DATA: [Mutex<i32>; 2] = [Mutex::new(0), Mutex::new(1)];
+//!
+//! let mut handles = Vec::new();
+//! for _ in 0..N {
+//!     handles.push(thread::spawn(move || {
+//!         let key = ThreadKey::get().unwrap();
+//!
+//!         // a reference to a type that implements `OwnedLockable` will never
+//!         // contain duplicates, so no duplicate checking is needed.
+//!         let collection = LockCollection::new_ref(&DATA);
+//!         let mut guard = collection.lock(key);
+//!
+//!         let x = *guard[1];
+//!         *guard[1] += *guard[0];
+//!         *guard[0] = x;
+//!     }));
+//! }
+//!
+//! // wait for the spawned threads so the values printed below are final
+//! for handle in handles {
+//!     handle.join().unwrap();
+//! }
+//!
+//! let key = ThreadKey::get().unwrap();
+//! let data = LockCollection::new_ref(&DATA);
+//! let data = data.lock(key);
+//! println!("{}", data[0]);
+//! println!("{}", data[1]);
+//! ```
+//!
+//! # Performance
+//!
+//! **The `ThreadKey` is a mostly zero-cost abstraction.** It doesn't use any
+//! memory, and it doesn't really exist at run-time. The only cost comes from
+//! calling `ThreadKey::get()`, because the function has to ensure at runtime
+//! that the key hasn't already been taken. Dropping the key will also have a
+//! small cost.
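To illustrate why the key is nearly free, here is a simplified sketch (not happylock's actual implementation; `Key`, `KEY_TAKEN`, and `get_key` are invented names): a zero-sized type plus a thread-local flag is enough to hand out at most one key per thread.

```rust
use std::cell::Cell;

thread_local! {
    // one flag per thread, recording whether that thread's key is taken
    static KEY_TAKEN: Cell<bool> = Cell::new(false);
}

// Zero-sized stand-in for a thread key: it occupies no memory, so the
// only run-time work is reading and writing the thread-local flag.
struct Key;

fn get_key() -> Option<Key> {
    KEY_TAKEN.with(|taken| {
        if taken.get() {
            None // the key was already handed out on this thread
        } else {
            taken.set(true);
            Some(Key)
        }
    })
}

impl Drop for Key {
    fn drop(&mut self) {
        // returning the key to the thread is the small cost of dropping it
        KEY_TAKEN.with(|taken| taken.set(false));
    }
}
```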
+//!
+//! **Consider [`OwnedLockCollection`].** This will almost always be the
+//! fastest lock collection. It doesn't expose the underlying collection
+//! immutably, which means that it will always be locked in the same order, and
+//! doesn't need any sorting.
+//!
+//! **Avoid [`LockCollection::try_new`].** This constructor will check to make
+//! sure that the collection contains no duplicate locks. In most cases, this
+//! check is O(n log n), where n is the number of locks in the collection, but
+//! in the case of [`RetryingLockCollection`], it's close to O(n).
+//! [`LockCollection::new`] and [`LockCollection::new_ref`] don't need these
+//! checks because they use [`OwnedLockable`], which is guaranteed to be unique
+//! as long as it is accessible. As a last resort,
+//! [`LockCollection::new_unchecked`] doesn't do this check, but is unsafe to
+//! call.
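The O(n log n) duplicate check can be sketched as sorting the locks' addresses and scanning for equal neighbours. This is an illustration of the technique, not the crate's actual code:

```rust
// Detect duplicate locks by address: sort the addresses (O(n log n)),
// then any duplicate must appear as two equal neighbours.
fn has_duplicates<T>(locks: &[&T]) -> bool {
    let mut addrs: Vec<usize> = locks
        .iter()
        .map(|lock| *lock as *const T as usize)
        .collect();
    addrs.sort_unstable();
    addrs.windows(2).any(|pair| pair[0] == pair[1])
}
```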
+//!
+//! **Know how to use [`RetryingLockCollection`].** This collection doesn't do
+//! any sorting, but uses a wasteful lock algorithm. It can't rely on the order
+//! of the locks to be the same across threads, so if it finds a lock that it
+//! can't acquire without blocking, it'll first release all of the locks it
+//! already acquired to avoid blocking other threads. This is wasteful because
+//! this algorithm may end up re-acquiring the same lock multiple times. To
+//! avoid this, ensure that (1) the first lock in the collection is always the
+//! first lock in any collection it appears in, and (2) the other locks in the
+//! collection are always preceded by that first lock. This will prevent any
+//! wasted time from re-acquiring locks. If you're unsure, [`LockCollection`]
+//! is a sensible default.
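The try-then-release strategy described above can be sketched with `std::sync` primitives. This is a simplified, busy-waiting version that ignores lock poisoning, not happylock's actual implementation:

```rust
use std::sync::{Mutex, MutexGuard};

// Try every lock in order; on the first conflict, release everything
// acquired so far and start over. Re-running the loop may re-acquire the
// same leading locks many times, which is the waste described above.
fn lock_all<'a, T>(locks: &'a [Mutex<T>]) -> Vec<MutexGuard<'a, T>> {
    'retry: loop {
        let mut guards = Vec::with_capacity(locks.len());
        for lock in locks {
            match lock.try_lock() {
                Ok(guard) => guards.push(guard),
                Err(_) => {
                    // dropping the guards unlocks them, so other threads
                    // are not blocked while this one retries
                    drop(guards);
                    continue 'retry;
                }
            }
        }
        return guards;
    }
}
```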
+//!
+//! [`OwnedLockable`]: `lockable::OwnedLockable`
+//! [`OwnedLockCollection`]: `collection::OwnedLockCollection`
+//! [`RetryingLockCollection`]: `collection::RetryingLockCollection`
mod key;