ik_llama.cpp.git (branch: main)
path: root/examples

Age         Commit message (Author)
2023-08-18  llama : add benchmark example (#2626)  (slaren)
2023-08-18  perplexity : more meaningful ETA number - 2 decimal points  (Georgi Gerganov)
2023-08-18  server : support for saving templates in browser LocalStorage (#2486)  (staviq)
2023-08-17  Add --cfg-negative-prompt-file option for examples (#2591)  (Kerfuffle)
2023-08-15  server : add missing /json-schema-to-grammar.mjs (#2616)  (Jhen-Jie Hong)
2023-08-14  server : add --numa support (#2524)  (Cheng Shao)
2023-08-14  server : fix default grammar by use empty string in the UI (#2604)  (Jhen-Jie Hong)
2023-08-14  server : implement json-schema-to-grammar.mjs & add grammar param in the UI (...  (Jhen-Jie Hong)
2023-08-12  Adding support for llama2.c models (#2559)  (byte-6174)
2023-08-12  server: fixed wrong variable name in timing json (#2579)  (Equim)
2023-08-10  Handle `ENABLE_VIRTUAL_TERMINAL_PROCESSING` more gracefully on earlier versio...  (DannyDaemonic)
2023-08-10  Add --n-predict -2 for stopping generation on full context (#2565)  (Christian Demsar)
2023-08-10  Fix grammar-based sampling issue in server (#2566)  (Martin Krasser)
2023-08-08  Allow passing grammar to completion endpoint (#2532)  (Martin Krasser)
2023-08-08  llm.vim : multiline autocompletion, get rid of "^@" (#2543)  (chaihahaha)
2023-08-08  vim : bring back simple llm.vim example  (Georgi Gerganov)
2023-08-08  vim : streaming and more (#2495)  (AustinMroz)
2023-08-07  Add --rope-scale parameter (#2544)  (klosax)
2023-08-06  console : fix issue related to Windows 11 PowerShell console mode persistence...  (DannyDaemonic)
2023-08-04  fix firefox autoscroll (#2519)  (Jonas Wunderlich)
2023-08-04  server: regenerate completion.js.hpp (#2515)  (Cebtenzzre)
2023-08-04  Add --simple-io option for subprocesses and break out console.h and cpp (#1558)  (DannyDaemonic)
2023-08-04  Fixing race condition in server and partial stream handling in frontend. (#2391)  (Stephen Nichols)
2023-08-04  build : fix several cast and printf warnings (#2499)  (Borislav Stanimirov)
2023-08-02  examples : generate JSON according to schema (#1887)  (Evan Jones)
2023-08-02  tests : Fix compilation warnings (Linux/GCC) (#2451)  (Eve)
2023-08-01  fix a typo in examples/server/README.md (#2478)  (Bono Lv)
2023-08-01  server : Support dark mode (#2414)  (ebraminio)
2023-07-31  CUDA: mmq CLI option, fixed mmq build issues (#2453)  (Johannes Gäßler)
2023-07-28  perplexity : add Hellaswag calculation (#2389)  (klosax)
2023-07-28  examples : fix whitespace  (Georgi Gerganov)
2023-07-28  examples : server chat mode with llama2 (#2400)  (nhamanasu)
2023-07-28  readme : fix the description of the Tail free sampling (TFS) method (#2431)  (Weird Constructor)
2023-07-28  llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433)  (Rand Xie)
2023-07-25  Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)  (Kawrakow)
2023-07-25  main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)  (Xiao-Yong Jin)
2023-07-25  server: add rms_norm_eps parameter (#2380)  (slaren)
2023-07-25  [Server] Escape HTML in webchat (#2368)  (Henri Vasserman)
2023-07-24  make rms_norm_eps a parameter (#2374)  (slaren)
2023-07-24  Chat UI extras (#2366)  (Aarni Koskela)
2023-07-23  llama : add grammar-based sampling (#1773)  (Evan Jones)
2023-07-23  Add gqa parameter support to the server (#2351)  (IgnacioFDM)
2023-07-23  common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347)  (wzy)
2023-07-23  llama : grouped-query attention + LLaMAv2 70B support (#2276)  (Georgi Gerganov)
2023-07-23  llama : print help to stdout (#2338)  (maddes8cht)
2023-07-23  examples : simplify vim plugin (#2327)  (AustinMroz)
2023-07-22  llama : optimize memory buffers (#2325)  (Georgi Gerganov)
2023-07-22  Perplexity: Compute scores correlated to HellaSwag (#2312)  (klosax)
2023-07-22  examples : basic VIM plugin  (whoreson)
2023-07-21  examples : add easy python script to create quantized (k-bit support) GGML mo...  (Richard Roberson)