
llama-cpp v5975

LLM inference in C/C++

# License

MIT

# Supported Platforms

All platforms are supported

# Features

No default features set.

# download

Support downloading a model from a URL

Dependencies:
  • curl (features: none)
  plus one transitive dependency.

Host Dependencies:
  2 transitive dependencies.
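As a sketch of how a consumer would enable this feature, a vcpkg manifest (vcpkg.json) can request the port with the download feature; the project name "my-app" here is hypothetical:

```json
{
  "name": "my-app",
  "version": "0.1.0",
  "dependencies": [
    { "name": "llama-cpp", "features": [ "download" ] }
  ]
}
```

The equivalent classic-mode command line is `vcpkg install llama-cpp[download]`.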

# tools

Build tools

Dependencies:

No dependencies.

Host Dependencies:

No dependencies.

# Dependencies

No transitive dependencies.

# Host Dependencies

No transitive dependencies.
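A minimal consumer sketch, assuming the port exports a CMake config package and target named `llama` (the exact package and target names are an assumption; check the usage text printed by vcpkg after installation):

```cmake
# Hypothetical consumer CMakeLists.txt; package/target name "llama"
# is assumed, not confirmed by this page.
cmake_minimum_required(VERSION 3.15)
project(demo LANGUAGES CXX)

find_package(llama CONFIG REQUIRED)

add_executable(demo main.cpp)
target_link_libraries(demo PRIVATE llama)
```

Configure with the vcpkg toolchain file (`-DCMAKE_TOOLCHAIN_FILE=<vcpkg-root>/scripts/buildsystems/vcpkg.cmake`) so the installed port is found.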

# Dependents

No dependents.

# Host Dependents

No dependents.

# Contributors

  • Stefano Sinigardi
  • Kai Pastor

# Changelog

  • 507dfce [ggml,llama-cpp,whisper-cpp] Update, features, cleanup (#46569)
  • 7e7032a [ggml,llama-cpp,whisper-cpp] Revise, fix features (#45756)
  • cf72b50 [llama-cpp] add new port (and its ggml dependency) (#43925)

# Source