DeepLens is a differentiable optical lens simulator for (1) automated optical design and (2) end-to-end optics-algorithm design (End2endImaging). It helps researchers build custom differentiable optical systems with minimal effort.
Docs • Tutorials • Community • PyPI
```python
from deeplens import GeoLens

lens = GeoLens(filename="./datasets/lenses/cellphone/cellphone80deg.json")
lens.analysis(full_eval=True, render=True)
```

- Differentiable Optics. DeepLens leverages gradient backpropagation and differentiable optimization, delivering optimization power beyond classical optical design methods.
- Automated Lens Design. Enables automated lens design using curriculum learning, optical regularization losses, and GPU acceleration.
- Hybrid Refractive-Diffractive Optics. Supports accurate simulation and optimization of hybrid refractive-diffractive lenses (e.g., DOEs, metasurfaces).
- Accurate Image Simulation. Delivers photorealistic, spatially-varying image simulations, verified against commercial software and real-world experiments.
Additional features (available via collaboration):
- Polarization Ray Tracing. Provides polarization ray tracing and differentiable optimization of coating films.
- Non-Sequential Ray Tracing. Includes a differentiable non-sequential ray tracing model for stray light analysis and optimization.
- Kernel Acceleration. Achieves >10x speedup and >90% GPU memory reduction with custom GPU kernels across NVIDIA and AMD platforms.
- Distributed Optimization. Supports distributed simulation and optimization for billions of rays and high-resolution (>100k x 100k) diffractive computations.
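To make the "differentiable optics" idea concrete, here is a toy sketch (plain Python, not the DeepLens API): gradient descent on a single thin-lens curvature using the lensmaker's equation `1/f = (n - 1)(c1 - c2)`, with a hand-derived gradient standing in for the autograd machinery DeepLens gets from PyTorch. All names and values here are illustrative assumptions.

```python
def optimize_lens(f_target, n=1.5168, c2=-0.01, c1=0.025, lr=1.0, steps=200):
    """Fit front-surface curvature c1 (= 1/R1) so the lens hits f_target.

    Loss = ((n - 1) * (c1 - c2) - 1/f_target)^2, minimized by gradient descent.
    """
    t = 1.0 / f_target
    for _ in range(steps):
        residual = (n - 1.0) * (c1 - c2) - t   # current 1/f minus target 1/f
        c1 -= lr * 2.0 * residual * (n - 1.0)  # dLoss/dc1 = 2 * residual * (n - 1)
    return c1

c1 = optimize_lens(f_target=50.0)
f = 1.0 / ((1.5168 - 1.0) * (c1 - (-0.01)))
print(round(f, 2))  # ≈ 50.0
```

In DeepLens the same principle applies to every surface parameter of a multi-element lens at once, with gradients computed automatically through the ray tracer.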
Fully automated lens design from scratch with differentiable optimization. Try it with AutoLens!
A surrogate network for efficient lens representation and image simulation (spatially-varying aberration + defocus).
Design hybrid refractive-diffractive lenses with differentiable ray-wave model.
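As a paraxial aside on why hybrid refractive-diffractive lenses are attractive (a standard textbook result, not DeepLens code): a diffractive surface has a strongly negative effective Abbe number, `V_d = λd / (λF - λC) ≈ -3.45` in the visible, so a hybrid pair can cancel chromatic focal shift via the achromat condition `φ_r/V_r + φ_d/V_d = 0`. The function name and parameter values below are illustrative assumptions.

```python
def hybrid_achromat_powers(f_total_mm, v_refractive=60.0, v_diffractive=-3.452):
    """Split total optical power between refractive and diffractive elements
    so their chromatic focal shifts cancel (phi_r/V_r + phi_d/V_d = 0)."""
    phi = 1.0 / f_total_mm
    phi_r = phi * v_refractive / (v_refractive - v_diffractive)
    phi_d = phi * v_diffractive / (v_diffractive - v_refractive)
    return 1.0 / phi_r, 1.0 / phi_d  # focal length of each element (mm)

f_r, f_d = hybrid_achromat_powers(50.0)
# The diffractive element carries only a few percent of the total power,
# so it mainly corrects color while the refractive element does the focusing.
print(round(f_r, 1), round(f_d, 1))
```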
DeepLens serves as the differentiable optics engine in End2endImaging, an end-to-end differentiable computational imaging framework. End2endImaging integrates optics (DeepLens), sensor/ISP simulation, and neural reconstruction networks into a single PyTorch computation graph, enabling joint optimization of the entire camera pipeline.
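The key property of such a joint graph is that the optics and the reconstruction receive gradients from the same loss. A minimal toy sketch of that idea (plain Python with hand-written chain-rule gradients, not the End2endImaging API): a scalar "optics" parameter `a` attenuates the scene, a scalar "reconstruction" parameter `g` tries to undo it, and both are updated from one image loss.

```python
def joint_optimize(scene, a=0.3, g=1.0, lr=0.05, steps=500):
    """Jointly optimize an 'optics' parameter a and a 'reconstruction'
    parameter g against a shared mean-squared image loss."""
    for _ in range(steps):
        grad_a = grad_g = 0.0
        for x in scene:
            y = a * x        # optics: simulated measurement
            x_hat = g * y    # reconstruction: recovered scene
            err = x_hat - x  # per-pixel error
            grad_a += 2.0 * err * g * x  # dLoss/da via chain rule
            grad_g += 2.0 * err * a * x  # dLoss/dg via chain rule
        a -= lr * grad_a / len(scene)
        g -= lr * grad_g / len(scene)
    return a, g

a, g = joint_optimize([0.2, 0.5, 0.9, 1.0])
print(round(a * g, 3))  # the pair converges so that g inverts the optics: a*g ≈ 1
```

In the real framework, `a` is the full set of differentiable lens parameters and `g` is a neural reconstruction network, but the gradient flow through a single PyTorch graph is the same.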
Clone this repo:
```bash
git clone https://github.com/singer-yang/DeepLens
cd DeepLens
```
Create a conda environment:
```bash
conda create -n deeplens_env python=3.12
conda activate deeplens_env

# Linux and Mac
pip install torch torchvision
# Windows
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu128

pip install -r requirements.txt
```

or

```bash
conda env create -f environment.yml -n deeplens_env
```
Run the demo code:
```bash
python 0_hello_deeplens.py
```
DeepLens repo structure:
```
DeepLens/
│
├── deeplens/
│   ├── geolens.py           (multi-element refractive lens)
│   ├── hybridlens.py        (refractive + diffractive hybrid lens)
│   ├── diffraclens.py       (pure diffractive lens)
│   ├── paraxiallens.py      (thin-lens model)
│   ├── psfnetlens.py        (neural surrogate lens)
│   ├── geometric_surface/   (spheric, aspheric, aperture, etc.)
│   ├── diffractive_surface/ (DOE surfaces)
│   ├── phase_surface/       (phase-only surfaces)
│   ├── light/               (Ray, ComplexWave)
│   ├── material/            (glass catalogs)
│   ├── imgsim/              (PSF convolution, Monte Carlo)
│   ├── geolens_pkg/         (eval, optim, vis, io mixins)
│   └── surrogate/           (MLP, Siren neural surrogates)
│
├── 0_hello_deeplens.py      (code tutorials)
├── ...
└── write_your_own_code.py
```
Join our Slack workspace and WeChat Group (singeryang1999) to connect with our core contributors, receive the latest industry updates, and be part of our community. For any inquiries, contact Xinge Yang (xinge.yang@kaust.edu.sa).
We welcome all contributions. To get started, please read our Contributing Guide or check out open questions. All project participants are expected to adhere to our Code of Conduct. A list of contributors can be viewed in Contributors and below:
If you use DeepLens in your research, please cite the paper. See more in History of DeepLens.
```bibtex
@article{yang2024curriculum,
  title={Curriculum learning for ab initio deep learned refractive optics},
  author={Yang, Xinge and Fu, Qiang and Heidrich, Wolfgang},
  journal={Nature Communications},
  volume={15},
  number={1},
  pages={6572},
  year={2024},
  publisher={Nature Publishing Group UK London}
}
```