llama-cpp v5975
LLM inference in C/C++
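As a sketch of how a project might consume this port once installed: vcpkg ports that ship CMake config files are typically located with `find_package`. The package and target names below (`llama`) are assumptions, not confirmed by this page; the authoritative names are printed by vcpkg after installation (the port's usage file).

```cmake
# Hypothetical consumption of the vcpkg llama-cpp port.
# The package/target name "llama" is an assumption; verify it
# against the usage text vcpkg prints after `vcpkg install llama-cpp`.
find_package(llama CONFIG REQUIRED)

add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE llama)
```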
# License
# Supported Platforms
All platforms are supported
# Features
No default features set.
# download
Supports downloading a model from a URL.
Host Dependencies:
2 transitive dependencies:
- vcpkg-cmake (by curl)
- vcpkg-cmake-config (by curl)
# tools
Build tools
Dependencies:
No dependencies.
Host Dependencies:
No dependencies.
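Since neither feature is enabled by default, a consuming project has to request `download` and `tools` explicitly. A hedged sketch of a vcpkg manifest doing so (the port and feature names come from this page; the surrounding project fields are illustrative placeholders):

```json
{
  "name": "my-app",
  "version": "0.1.0",
  "dependencies": [
    {
      "name": "llama-cpp",
      "features": ["download", "tools"]
    }
  ]
}
```

In classic (non-manifest) mode, the equivalent install command is `vcpkg install "llama-cpp[download,tools]"`.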
# Dependencies
No transitive dependencies.
# Host Dependencies
No transitive dependencies.
# Dependents
No dependents.
# Host Dependents
No dependents.
# Contributors
Stefano Sinigardi
Kai Pastor