Evaluating f(x) = x·sin(x) with Dual<f64> computes the value and exact derivative in a single pass — no finite differences, no symbolic engine. The green tangent line is the dual field: the slope that falls out for free.
```rust
use tang::*;
use tang_la::{DVec, DMat, Svd};
use tang_ad::{Tape, grad};
use tang_optim::Adam;

// Dual numbers give you exact derivatives for free
let x = Dual::new(2.0, 1.0);
let y = (x * x).sin(); // y.dual = cos(x²) · 2x

// Reverse-mode AD for large parameter spaces
let loss = |x: &[f64]| x[0]*x[0] + x[1]*x[1] + x[2]*x[2];
let g = grad(loss, &[1.0, 2.0, 3.0]); // [2, 4, 6]

// Dense linear algebra — LU, SVD, Cholesky, QR, Eigen
let a = DMat::from_fn(3, 3, |i, j| if i == j { 2.0 } else { -1.0 });
let svd = Svd::new(&a);

// The same Scalar trait flows through everything
let q = Quat::axis_angle(Dir3::Z, core::f64::consts::FRAC_PI_2);
let v = q.rotate(Vec3::new(1.0, 0.0, 0.0)); // ≈ (0, 1, 0)
```
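The single-pass evaluation described above can be sketched with a hand-rolled dual number. This toy type is for illustration only — tang's actual Dual implements the full Scalar surface — but the arithmetic is the same: each operation propagates the derivative alongside the value.

```rust
// Toy forward-mode dual number (illustrative, not tang's API).
#[derive(Clone, Copy, Debug)]
struct Dual { val: f64, dual: f64 }

impl Dual {
    fn var(v: f64) -> Self { Dual { val: v, dual: 1.0 } } // seed dx/dx = 1
    fn sin(self) -> Self {
        // chain rule: (sin u)' = cos(u) · u'
        Dual { val: self.val.sin(), dual: self.val.cos() * self.dual }
    }
    fn mul(self, rhs: Dual) -> Self {
        // product rule: (uv)' = u'v + uv'
        Dual { val: self.val * rhs.val,
               dual: self.dual * rhs.val + self.val * rhs.dual }
    }
}

fn main() {
    // f(x) = x · sin(x) at x = 2: value and derivative in one pass
    let x = Dual::var(2.0);
    let y = x.mul(x.sin());
    // analytic check: f'(x) = sin(x) + x·cos(x)
    let exact = 2.0_f64.sin() + 2.0 * 2.0_f64.cos();
    assert!((y.val - 2.0 * 2.0_f64.sin()).abs() < 1e-12);
    assert!((y.dual - exact).abs() < 1e-12);
    println!("value = {:.6}, derivative = {:.6}", y.val, y.dual);
}
```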
// the-scalar-trick
Write your math once with S: Scalar. Swap the type to change what it does.
```rust
fn spring_energy<S: Scalar>(k: S, x: S) -> S {
    k * x * x * S::from_f64(0.5)
}

spring_energy(10.0_f64, 0.3) // → 0.45
```
```rust
// same function, different type
let x = Dual::var(0.3);
spring_energy(10.0.into(), x)
// → Dual {
//       val: 0.45, // energy
//       dual: 3.0, // ∂E/∂x = kx
//   }
```
```rust
// interval arithmetic
let x = Interval::new(0.28, 0.32);
spring_energy(10.0.into(), x)
// → Interval {
//       lo: 0.392,
//       hi: 0.512,
//   }
```
One trait. Same code. Exact derivatives. No tape, no graph tracing.
// benchmarks
Apple M-series, single-threaded. Reproduce with cargo bench -p tang-bench.
Lower is better. glam uses f32 (SIMD).
tang AD gives exact derivatives. Finite differences are ε-approximate and require step-size tuning.
The speed comparison is secondary — the real win is machine-precision derivatives with zero tuning. Finite differences break down for stiff systems, where no single step size is both small enough for accuracy and large enough to avoid cancellation.
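The step-size tuning problem is easy to demonstrate. The sketch below (toy code, not tang's API) differentiates f(x) = eˣ at x = 1 by central differences with three step sizes: too large and truncation error dominates, too small and floating-point cancellation dominates, while the dual-number derivative is just exp(1) · 1.0 from the chain rule, with no step size at all:

```rust
// Central finite differences vs a forward-mode derivative for f(x) = exp(x).
fn f(x: f64) -> f64 { x.exp() }

fn central_diff(x: f64, h: f64) -> f64 {
    (f(x + h) - f(x - h)) / (2.0 * h)
}

fn main() {
    let truth = 1.0_f64.exp(); // f'(x) = exp(x), so f'(1) = e

    // step too large: O(h²) truncation error dominates
    let err_large = (central_diff(1.0, 1e-1) - truth).abs();
    // step too small: cancellation in f(x+h) - f(x-h) dominates
    let err_small = (central_diff(1.0, 1e-13) - truth).abs();
    // a carefully tuned step does well — but only after tuning
    let err_tuned = (central_diff(1.0, 1e-6) - truth).abs();

    // the dual-number derivative: (exp u)' = exp(u) · u', seed u' = 1.0
    let dual_deriv = 1.0_f64.exp() * 1.0;
    let err_dual = (dual_deriv - truth).abs();

    assert!(err_large > 1e-4);
    assert!(err_small > 1e-8);
    assert!(err_tuned < 1e-8);
    assert!(err_tuned < err_large && err_tuned < err_small);
    assert!(err_dual < 1e-15); // machine precision, zero tuning
}
```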
tang is pure generic Rust — the same routines run with f64, Dual<f64>, or any Scalar. nalgebra dispatches to BLAS-style routines. For peak f64 throughput, enable the faer feature.
// use-cases
BRep, NURBS, mesh ops. The types CAD needs.
```rust
let n = Vec3::new(0.0, 0.0, 1.0);
let t = Transform::rotation(
    Quat::axis_angle(Dir3::Z, PI / 4.0)
);
```
Spatial algebra, screw theory, inertia tensors.
```rust
let i = SpatialInertia::new(
    mass, com, inertia
);
let f = i.apply(&accel);
```
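For intuition, here is a toy Newton–Euler sketch of what applying an inertia to an acceleration means. It assumes the center of mass sits at the body-frame origin and ignores velocity-product terms, so the 6-D spatial equation f = I·a splits into independent angular and linear parts; tang's SpatialInertia handles the general com-offset case.

```rust
// Toy spatial-inertia application (illustrative, not tang's API).
// With the com at the origin and no velocity products:
//   torque = I_rot · angular_accel,   force = m · linear_accel
fn apply_inertia(
    mass: f64,
    i_rot: [[f64; 3]; 3], // rotational inertia about the com
    ang_acc: [f64; 3],
    lin_acc: [f64; 3],
) -> ([f64; 3], [f64; 3]) {
    let mut torque = [0.0; 3];
    for r in 0..3 {
        for c in 0..3 {
            torque[r] += i_rot[r][c] * ang_acc[c];
        }
    }
    let force = [mass * lin_acc[0], mass * lin_acc[1], mass * lin_acc[2]];
    (torque, force)
}

fn main() {
    // 2 kg body, diagonal inertia, spinning up about z while accelerating along x
    let i_rot = [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.2]];
    let (torque, force) = apply_inertia(2.0, i_rot, [0.0, 0.0, 3.0], [1.5, 0.0, 0.0]);
    assert!((torque[2] - 0.6).abs() < 1e-12); // Izz · αz = 0.2 · 3.0
    assert!((force[0] - 3.0).abs() < 1e-12);  // m · ax = 2.0 · 1.5
}
```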
Differentiable FK/IK, exact Jacobians for free.
```rust
// exact Jacobian via Dual
let j = jacobian(
    |q| forward_kinematics(q),
    &joint_angles
);
```
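A forward-mode Jacobian works by seeding one input's dual part at a time. The sketch below uses a toy Dual and a hypothetical 2-link planar arm (neither is tang's actual API) and cross-checks each column against the well-known analytic 2-link Jacobian:

```rust
// Jacobian via column-by-column dual seeding (toy types, illustrative).
#[derive(Clone, Copy)]
struct Dual { val: f64, dual: f64 }

impl Dual {
    fn new(val: f64, dual: f64) -> Self { Dual { val, dual } }
    fn add(self, o: Dual) -> Self { Dual::new(self.val + o.val, self.dual + o.dual) }
    fn scale(self, s: f64) -> Self { Dual::new(self.val * s, self.dual * s) }
    fn cos(self) -> Self { Dual::new(self.val.cos(), -self.val.sin() * self.dual) }
    fn sin(self) -> Self { Dual::new(self.val.sin(), self.val.cos() * self.dual) }
}

// 2-link planar arm: end-effector position from joint angles
fn fk(q: [Dual; 2], l1: f64, l2: f64) -> [Dual; 2] {
    let q12 = q[0].add(q[1]);
    [
        q[0].cos().scale(l1).add(q12.cos().scale(l2)), // x
        q[0].sin().scale(l1).add(q12.sin().scale(l2)), // y
    ]
}

fn main() {
    let (l1, l2) = (1.0, 0.5);
    let angles = [0.3_f64, 0.7];
    let mut jac = [[0.0; 2]; 2]; // jac[row][col] = ∂p_row / ∂q_col
    for col in 0..2 {
        // seed the col-th joint angle with derivative 1, the rest with 0
        let q = [
            Dual::new(angles[0], if col == 0 { 1.0 } else { 0.0 }),
            Dual::new(angles[1], if col == 1 { 1.0 } else { 0.0 }),
        ];
        let p = fk(q, l1, l2);
        jac[0][col] = p[0].dual;
        jac[1][col] = p[1].dual;
    }
    // cross-check against the analytic 2-link Jacobian
    let (s1, c1) = angles[0].sin_cos();
    let (s12, c12) = (angles[0] + angles[1]).sin_cos();
    assert!((jac[0][0] - (-l1 * s1 - l2 * s12)).abs() < 1e-12);
    assert!((jac[0][1] - (-l2 * s12)).abs() < 1e-12);
    assert!((jac[1][0] - (l1 * c1 + l2 * c12)).abs() < 1e-12);
    assert!((jac[1][1] - (l2 * c12)).abs() < 1e-12);
}
```

One FK evaluation per input column gives the exact Jacobian; no step size, no refactoring of the kinematics code.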
Same types train your nets and run your sim.
```rust
let g = grad(
    |x| mse_loss(&predict(x), &target),
    &params
);
```
// migration
Drop-in aliases. Minimal call-site changes.
| nalgebra | tang |
|---|---|
| Vector3<f64> | Vec3<f64> |
| Point3<f64> | Point3<f64> |
| Unit<Vector3> | Dir3<f64> |
| Matrix4<f64> | Mat4<f64> |
| UnitQuaternion | Quat<f64> |
| DVector<f64> | DVec<f64> |
| DMatrix<f64> | DMat<f64> |
```rust
// nalgebra style — works
v.dot(&w);
v.cross(&w);

// tang style — also works
v.dot(w);
v.cross(w);
```
| nalgebra | tang |
|---|---|
| Vec3::zeros() | Vec3::zero() (zeros() alias) |
| v.norm_squared() | v.norm_sq() (norm_squared() alias) |
| Unit::new_normalize(v) | Dir3::new(v) (new_normalize() alias) |
| m.try_inverse() | m.try_inverse() |
| m.symmetric_eigen() | m.symmetric_eigen() |
| a.clone().lu().solve(&b) | a.clone().lu().solve(&b) |
// targets
#![no_std] with alloc. Run tang on microcontrollers, in kernels, anywhere without a standard library. No heavyweight dependencies — core types are hand-rolled #[repr(C)].
tang-gpu provides wgpu compute shaders for sparse matrix-vector products, batch operations, and tensor kernels. Cross-platform GPU acceleration on Vulkan, Metal, DX12, and WebGPU.