llama.cpp
Port of Facebook's LLaMA model in C/C++
The main goal of llama.cpp is to run the LLaMA model using 4-bit integer quantization on a MacBook.
It uses the LLaMA model to generate text from input prompts, and it runs without any external dependencies or libraries, which allows for faster and more efficient inference on desktop CPUs.
There are several reasons for this efficiency:
Without external libraries and dependencies, there is no overhead from loading and managing those resources.
It is designed to run natively on macOS.
Optimized via the ARM NEON, Accelerate, and Metal frameworks
4-bit, 5-bit, and 8-bit integer quantization support
Supports OpenBLAS/Apple BLAS/ARM Performance Lib/ATLAS/BLIS/Intel MKL/NVHPC/ACML/SCSL/SGIMATH and more in BLAS
cuBLAS and CLBlast support
The project can be built with make, cmake, or zig; a minimal sketch of each is shown below.
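A rough sketch of each build path, based on the commands in the upstream llama.cpp README (exact targets and flags may differ between versions):

# build with make, from the repository root
make

# build with CMake, out of tree
mkdir build
cd build
cmake ..
cmake --build . --config Release

# build with Zig (older releases used -Drelease-fast instead)
zig build -Doptimize=ReleaseFast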
Metal allows the computation to be executed on the GPU for Apple devices.
Using make:
LLAMA_METAL=1 make
Using cmake:
cmake -DLLAMA_METAL=ON ..
Enable GPU inference with the --gpu-layers|-ngl command-line argument. Any value larger than 0 will offload the computation to the GPU.
./main -m ./models/7B/ggml-model-q4_0.bin -n 128 -ngl 1
Building the program with BLAS support may lead to some performance improvements in prompt processing with batch sizes higher than 32 (the default is 512). There are currently several different BLAS implementations available:
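As an illustration, assuming a BLAS-enabled build and the example model used above, the prompt-processing batch size can be set explicitly with the -b (--batch-size) flag:

./main -m ./models/7B/ggml-model-q4_0.bin -b 512 -n 128 -p "Building a website can be done in 10 simple steps:"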
Accelerate Framework
This is only available on macOS and is enabled by default.
OpenBLAS
This provides BLAS acceleration using only the CPU (you need to install OpenBLAS first).
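A sketch of an OpenBLAS-enabled build, using the flags documented in the upstream README at the time of writing (they may have changed since):

# using make
make LLAMA_OPENBLAS=1

# using CMake
mkdir build
cd build
cmake .. -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS
cmake --build . --config Release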
Intel MKL
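This provides BLAS acceleration through Intel's Math Kernel Library and is mainly useful on Intel CPUs (MKL must be installed first). The sketch below assumes the generic CMake BLAS options with the Intel10_64lp vendor string; check the current README for the exact invocation:

mkdir build
cd build
cmake .. -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=Intel10_64lp
cmake --build . --config Release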
cuBLAS
This provides BLAS acceleration using the CUDA cores of your NVIDIA GPU (requires the CUDA toolkit to be installed).
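A sketch of a cuBLAS build with the historical option names (newer llama.cpp versions may call them differently):

# using make
make LLAMA_CUBLAS=1

# using CMake
mkdir build
cd build
cmake .. -DLLAMA_CUBLAS=ON
cmake --build . --config Release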
CLBlast
OpenCL acceleration is provided by the matrix multiplication kernels from the CLBlast project and custom kernels for ggml that can generate tokens on the GPU (requires the OpenCL SDK).
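A sketch of a CLBlast build with the option names documented at the time (on some systems the paths to the OpenCL SDK and CLBlast have to be passed to CMake explicitly):

# using make
make LLAMA_CLBLAST=1

# using CMake
mkdir build
cd build
cmake .. -DLLAMA_CLBLAST=ON
cmake --build . --config Release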