Open-source projects focused on local language model inference, model architecture research, and systems-level tooling. All projects are written in C++ and Python and are designed to run entirely on local hardware.
| Repository | Description | Language |
|---|---|---|
| Quadtrix.cpp | Inference engine for running language models on local hardware | C++ |
- Local inference — running language models without external APIs or cloud services
- Model implementation — transformer architectures, tokenizers, and training pipelines implemented from scratch
- Hardware tooling — CPU, RAM, and GPU monitoring at the OS level
- Agent systems — task automation combining language model output with system interaction
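As a minimal sketch of the OS-level monitoring mentioned under hardware tooling: on Linux, CPU and RAM usage can be sampled by parsing `/proc/stat` and `/proc/meminfo` with only the standard library. This assumes the Linux procfs layout, and the function names are illustrative, not part of any repository in this organization.

```python
import time

def read_meminfo():
    """Parse /proc/meminfo into a dict of kB values (Linux only)."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])  # values are reported in kB
    return info

def cpu_times():
    """Return aggregate (busy, total) jiffies from the first line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait
    total = sum(fields)
    return total - idle, total

def cpu_percent(interval=0.1):
    """Sample overall CPU utilization over a short interval, as a percentage."""
    busy0, total0 = cpu_times()
    time.sleep(interval)
    busy1, total1 = cpu_times()
    delta = total1 - total0
    return 100.0 * (busy1 - busy0) / delta if delta else 0.0

if __name__ == "__main__":
    mem = read_meminfo()
    used_kb = mem["MemTotal"] - mem["MemAvailable"]
    print(f"RAM used: {used_kb / 1024:.0f} MiB, CPU: {cpu_percent():.1f}%")
```

Equivalent data is available through sysfs and vendor libraries (e.g. NVML for GPUs); procfs is simply the lowest-dependency starting point on Linux.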
See CONTRIBUTING.md for contribution guidelines, code style requirements, and the pull request process.
To report a security issue, refer to SECURITY.md instead of opening a public issue.
Projects in this organization are released under the MIT License unless otherwise noted in the repository.