author | Georgi Gerganov <ggerganov@gmail.com> | 2023-12-07 13:03:17 +0200
committer | GitHub <noreply@github.com> | 2023-12-07 13:03:17 +0200
commit | bcc0eb4591bec5ec02fad3f2bdcb1b265052ea56
tree | 5082f49b7cb13d8e4f08c14ecf436606a1ae2ff8 /examples/quantize-stats
parent | 81bc9214a389362010f7a57f4cbc30e5f83a2d28
llama : per-layer KV cache + quantum K cache (#4309)
* per-layer KV
* remove unnecessary copies
* less code duplication, offload k and v separately
* llama : offload KV cache per-layer
* llama : offload K shift tensors
* llama : offload for rest of the model arches
* llama : enable offload debug temporarily
* llama : keep the KV related layers on the device
* llama : remove mirrors, perform Device -> Host when partial offload
* common : add command-line arg to disable KV cache offloading
* llama : update session save/load
* llama : support quantum K cache (#4312)
* llama : support quantum K cache (wip)
* metal : add F32 -> Q8_0 copy kernel
* cuda : add F32 -> Q8_0 copy kernel (a reference sketch of this per-block conversion follows after this log)
ggml-ci
* cuda : use mmv kernel for quantum cache ops
* llama : pass KV cache type through API (see the usage sketch after the diff below)
* llama : fix build
ggml-ci
* metal : add F32 -> Q4_0 copy kernel
* metal : add F32 -> Q4_1 copy kernel
* cuda : wip
* cuda : add F32 -> Q4_0 and F32 -> Q4_1 copy kernels
* llama-bench : support type_k/type_v
* metal : use mm kernel only for quantum KV cache
* cuda : add comment
* llama : remove memory_f16 and kv_f16 flags
---------
Co-authored-by: slaren <slarengh@gmail.com>
* readme : add API change notice
---------
Co-authored-by: slaren <slarengh@gmail.com>
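
For context on the F32 -> Q8_0 copy kernels mentioned in the log: Q8_0 packs 32 values per block as one scale plus 32 signed bytes, with the scale taken from the block's absolute maximum. The sketch below is a minimal CPU reference of that per-block conversion, not the Metal/CUDA kernels from this commit; `block_q8_0` here uses a plain `float` scale (ggml stores fp16) and the function name is an illustrative stand-in for ggml's internals.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Stand-in for ggml's Q8_0 block: 32 values, one scale + 32 int8 quants.
constexpr int QK8_0 = 32;

struct block_q8_0 {
    float  d;           // scale (stored as fp16 in ggml, float here for brevity)
    int8_t qs[QK8_0];   // quantized values
};

// What an F32 -> Q8_0 copy kernel computes per block: scale from the
// absolute maximum, then round each value into int8 range.
static void quantize_row_q8_0_ref(const float * x, block_q8_0 * y, int k) {
    const int nb = k / QK8_0; // k must be a multiple of QK8_0
    for (int i = 0; i < nb; ++i) {
        float amax = 0.0f;
        for (int j = 0; j < QK8_0; ++j) {
            amax = std::fmax(amax, std::fabs(x[i*QK8_0 + j]));
        }
        const float d  = amax / 127.0f;
        const float id = d != 0.0f ? 1.0f/d : 0.0f;
        y[i].d = d;
        for (int j = 0; j < QK8_0; ++j) {
            y[i].qs[j] = (int8_t) std::roundf(x[i*QK8_0 + j] * id);
        }
    }
}

int main() {
    float x[QK8_0];
    for (int j = 0; j < QK8_0; ++j) {
        x[j] = 0.1f * (j - 16);
    }
    block_q8_0 y[1];
    quantize_row_q8_0_ref(x, y, QK8_0);
    // dequantize one value to sanity-check the round trip
    std::printf("x[3] = %.3f, round trip = %.3f\n", x[3], y[0].d * y[0].qs[3]);
    return 0;
}
```

The point of adding this as a device-side copy kernel is that the K cache can be written in quantized form directly during inference, without a round trip through the host.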
Diffstat (limited to 'examples/quantize-stats')
-rw-r--r-- | examples/quantize-stats/quantize-stats.cpp | 1
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/examples/quantize-stats/quantize-stats.cpp b/examples/quantize-stats/quantize-stats.cpp
index 27128247..77302416 100644
--- a/examples/quantize-stats/quantize-stats.cpp
+++ b/examples/quantize-stats/quantize-stats.cpp
@@ -321,7 +321,6 @@ int main(int argc, char ** argv) {
     auto cparams = llama_context_default_params();
     cparams.n_ctx = 256;
     cparams.seed = 1;
-    cparams.f16_kv = false;
 
     ctx = llama_new_context_with_model(model, cparams);
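
With `f16_kv` gone, the cache type is chosen through the context parameters instead. A minimal sketch, assuming the post-change API where `llama_context_params` carries `type_k`/`type_v`; the helper name `make_ctx_with_q8_k_cache` and the assumption of an already-loaded `llama_model *` are illustrative, not part of this diff:

```cpp
#include "llama.h"

// Minimal sketch, assuming the post-change API: the KV cache tensor types are
// plain context parameters, replacing the removed f16_kv boolean.
llama_context * make_ctx_with_q8_k_cache(llama_model * model) {
    llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx  = 256;
    cparams.type_k = GGML_TYPE_Q8_0; // quantized K cache, as added by this PR
    // cparams.type_v stays at its F16 default; V-cache quantization is not part of this change

    return llama_new_context_with_model(model, cparams);
}
```

On the command line, llama-bench exposes the same knobs via its new `type_k`/`type_v` options (`-ctk`/`-ctv` in the upstream tree), and the common argument for disabling KV cache offloading appears upstream as `--no-kv-offload`; these flag spellings come from the surrounding llama.cpp sources, not from this diff.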