A lightweight reverse-mode automatic differentiation (autograd) engine implemented in C++ with Python bindings.
- Core Tensor Operations: Addition, subtraction, multiplication, division, power
- Mathematical Functions: sin, cos, exp, log, tanh
- Activation Functions: ReLU, Sigmoid
- Automatic Differentiation: Reverse-mode autodiff with gradient accumulation
- Python Integration: Seamless Python API via Pybind11
- Operator Overloading: Natural mathematical syntax in Python
- C++14 compatible compiler
- Python 3.6+
- CMake 3.12+
- pybind11
```bash
# Install pybind11
pip install pybind11
# Build and install the package
pip install .
# Or for development
pip install -e .
```

```bash
# C++ tests
mkdir build && cd build
cmake ..
make
./test_autocpp
# Python tests
python tests/test_python.py
# Or with pytest
pip install pytest
pytest tests/test_python.py
```

```python
from autocpp import Tensor

# Create tensors with gradient tracking
x = Tensor(2.0, requires_grad=True)
y = Tensor(3.0, requires_grad=True)
# Forward pass
z = x * y + x.sin()
# Backward pass
z.backward()
# Access gradients
print(f"dz/dx = {x.grad}") # Gradient with respect to x
print(f"dz/dy = {y.grad}") # Gradient with respect to yfrom autocpp import Tensor
```python
from autocpp import Tensor

# Simple neural network layer
x = Tensor(1.0, requires_grad=True)
w1 = Tensor(0.5, requires_grad=True)
b1 = Tensor(0.1, requires_grad=True)
w2 = Tensor(-0.3, requires_grad=True)
b2 = Tensor(0.2, requires_grad=True)
# Forward pass
h = (x * w1 + b1).tanh() # Hidden layer with tanh activation
y = h * w2 + b2 # Output layer
# Compute gradients
y.backward()
# All parameters now have gradients
print(f"dL/dw1 = {w1.grad}")
print(f"dL/db1 = {b1.grad}")
print(f"dL/dw2 = {w2.grad}")
print(f"dL/db2 = {b2.grad}")Tensor(value: float, requires_grad: bool = False)Attributes:
Tensor(value: float, requires_grad: bool = False)

Attributes:
- value: The scalar value
- grad: Accumulated gradient
- requires_grad: Whether to track gradients
Methods:
- backward(): Compute gradients via backpropagation
- zero_grad(): Reset gradients to zero
Operations:
- Arithmetic: +, -, *, /, ** and - (negation)
- Trigonometric: sin(), cos()
- Exponential: exp(), log()
- Activation: tanh(), relu(), sigmoid()
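A short sketch exercising several of the operations above through operator overloading; it assumes the package is importable as autocpp and, as in the usage examples, combines Tensor operands only (the constants are arbitrary):

```python
from autocpp import Tensor

a = Tensor(0.8, requires_grad=True)
b = Tensor(2.0, requires_grad=True)

# Arithmetic, trigonometric, exponential and activation ops compose
# into a single expression graph
c = (a * b + a.cos() - b.exp() / Tensor(10.0)).sigmoid()

c.backward()
print(f"dc/da = {a.grad}")
print(f"dc/db = {b.grad}")
```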
Engine:

```python
Engine.backward(tensor)    # Run backpropagation
Engine.zero_grad(tensors)  # Zero gradients for a list of tensors
```
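Because gradients accumulate across backward passes, they normally need to be reset between iterations. Below is a minimal sketch of that pattern; it assumes the package is importable as autocpp and that repeated backward() calls add into grad, as the gradient-accumulation feature implies, so the values in the comments are expectations under that assumption:

```python
from autocpp import Tensor

x = Tensor(2.0, requires_grad=True)

# First pass: z = x * x, so dz/dx = 2x = 4
z = x * x
z.backward()
print(x.grad)   # expected 4.0

# A second pass without resetting accumulates into the same grad
z2 = x * x
z2.backward()
print(x.grad)   # expected 8.0 (4.0 + 4.0)

# Reset before the next iteration
x.zero_grad()   # or Engine.zero_grad([x]) for a list of tensors
print(x.grad)   # expected 0.0
```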