| author | firecoperana <xuqiaowei1124@gmail.com> | 2025-06-19 02:24:53 -0500 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2025-06-19 10:24:53 +0300 |
| commit | 3f111ad7bbb2d4f721332f9b2b344e48b3bbf9aa (patch) | |
| tree | a3a17ee74e0436253e17f0d322320ed554d34b0a /examples/infill/infill.cpp | |
| parent | c5368148cf3af7a3694e0eb03d24a08326c01d12 (diff) | |
add dry sampler (#513)
* add dry sampler
* use vocab instead of model in dry_init function
* fix compile error for build test
---------
Co-authored-by: firecoperana <firecoperana>
Diffstat (limited to 'examples/infill/infill.cpp')
| -rw-r--r-- | examples/infill/infill.cpp | 2 |

1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/examples/infill/infill.cpp b/examples/infill/infill.cpp
index 92d630b1..d3c3ad5a 100644
--- a/examples/infill/infill.cpp
+++ b/examples/infill/infill.cpp
@@ -349,7 +349,7 @@ int main(int argc, char ** argv) {
 
     std::vector<llama_token> embd;
 
-    struct llama_sampling_context * ctx_sampling = llama_sampling_init(sparams);
+    struct llama_sampling_context * ctx_sampling = llama_sampling_init(llama_get_model_vocab(model), sparams);
 
     while (n_remain != 0 || params.interactive) {
         // predict
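For context, a minimal sketch of the call pattern this change implies: the sampling context is now initialized from the model's vocabulary (which the DRY sampler uses to inspect tokens) rather than from the sampling parameters alone. Only `llama_get_model_vocab()` and the two-argument `llama_sampling_init()` are taken from the diff above; the wrapper function, its name, and the includes are illustrative assumptions, not the repository's exact code.

```cpp
// Illustrative sketch only: llama_get_model_vocab() and the two-argument
// llama_sampling_init() come from the diff above; the helper function and
// includes below are assumptions for the sake of a self-contained example.
#include "common.h"
#include "sampling.h"

static struct llama_sampling_context * init_sampling_for_model(
        struct llama_model * model,
        const llama_sampling_params & sparams) {
    // With the DRY sampler added, the sampling context is created from the
    // model's vocab plus the sampling parameters, instead of sparams alone.
    return llama_sampling_init(llama_get_model_vocab(model), sparams);
}
```

As the diff shows, only the initialization line changes in `infill.cpp`; the resulting `ctx_sampling` is used in the generation loop exactly as before.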