We are grateful for the continued support from the following organizations:
- Major contributions to the codebase (special thanks to @ngxson)
- llama.cpp-based inference endpoint hosting on the HF platform
- Model hosting and GGUF integration in the HF web UI
- General support and help throughout the development of the project
- More info at: ggml.ai joins Hugging Face to ensure the long-term progress of Local AI
- Technical discussions and code contributions to the CUDA backend
- Providing hardware for development and testing