Age | Commit message | Author |
|
* ggml : add ggml_cpu_has_ssse3
* llama : show SSSE3 in system info
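A minimal sketch of how the new flag surfaces, assuming the ggml.h and llama.h headers from this revision: ggml_cpu_has_ssse3() reports whether the build has SSSE3 support, and the system info string printed by the examples now lists it as well.

```cpp
// Minimal sketch: query the new SSSE3 flag and print the system info string,
// which now also lists SSSE3. Assumes ggml.h/llama.h from this revision.
#include <cstdio>

#include "ggml.h"
#include "llama.h"

int main() {
    printf("SSSE3: %d\n", ggml_cpu_has_ssse3());   // 1 if built with SSSE3 support
    printf("%s\n", llama_print_system_info());     // e.g. "... | SSSE3 = 1 | ..."
    return 0;
}
```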
|
|
* ci : add lora test
ggml-ci
* move lora summary to the top, add lora logs
ggml-ci
* ci : decrease CPU ppl runs to 2 to avoid the 20 min timeout
ggml-ci
* add 7b lora test
use 1 thread for CUDA generation tests
ggml-ci
* add test with q8_0 (cpu only)
ggml-ci
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
|
|
* Add a /detokenize endpoint to the example server
* remove trailing white-space
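A hedged sketch of calling the new endpoint from C++ via libcurl. The request/response field names ("tokens" in, "content" out) are assumptions based on the existing /tokenize endpoint, so check the server README for the final schema; the port and token IDs are placeholders.

```cpp
// Hedged sketch: POST a token list to the example server's /detokenize endpoint.
// The JSON field names and port are assumptions; the response is printed to stdout
// by libcurl's default write handler.
#include <cstdio>

#include <curl/curl.h>

int main() {
    CURL * curl = curl_easy_init();
    if (!curl) return 1;

    const char * body = "{\"tokens\": [1, 15043, 3186]}";
    struct curl_slist * headers = curl_slist_append(nullptr, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:8080/detokenize");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK) fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return res == CURLE_OK ? 0 : 1;
}
```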
|
|
* Allow convert.py to convert to q8_0
Fix issue with bounded_parallel_map greedily consuming its input iterator
Display elapsed time during conversion
* Add --concurrency option
Minor improvements to help text
Clean up bounded_parallel_map function a bit
* Massive speed improvement thanks to Cebtenzzre
* Refactor types
|
|
The use of special characters within source files can break compilation on machines with different region and language settings. Using Unicode escape sequences should allow the code to compile on all setups without needing to change your computer's settings or switch regions.
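A minimal illustration of the idea: the escaped form below keeps the source file pure ASCII yet produces the same UTF-8 bytes as the literal character, so it compiles the same way regardless of the editor or compiler locale.

```cpp
// Illustration: a universal character name instead of a literal non-ASCII character.
// The source stays ASCII-only, but the string holds the same UTF-8 bytes.
#include <cstdio>

int main() {
    const char * s = "caf\u00e9";   // same as writing "café" directly
    printf("%s\n", s);
    return 0;
}
```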
|
|
|
|
|
|
(#1528)
* Fix bug in main.cpp where penalize_nl=false had no effect: the old code modified the underlying logits array, but at that point sampling is already working on the candidates copy.
* Suppress redefinition warning for NOMINMAX on mingw. In my installation, this macro is already defined by /usr/lib/gcc/x86_64-w64-mingw32/11/include/c++/x86_64-w64-mingw32/bits/os_defines.h:45.
* main : fix indentation
* main : pass ctx to llama_token_nl()
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
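A hedged sketch of the corrected pattern described above, using variable names in the style of examples/main (candidates_p, nl_logit): with penalize_nl disabled, the saved newline logit is restored inside the candidates copy that sampling actually reads, rather than in the context's raw logits.

```cpp
// Fragment (names follow examples/main/main.cpp): restore the newline logit in the
// candidates array, which is what the samplers operate on after penalties.
if (!penalize_nl) {
    for (size_t idx = 0; idx < candidates_p.size; idx++) {
        if (candidates_p.data[idx].id == llama_token_nl(ctx)) {
            candidates_p.data[idx].logit = nl_logit;  // value saved before applying penalties
            break;
        }
    }
}
```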
|
|
Plain 'abs' casts the input to int.
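To see why this matters: in C, abs() is declared as int abs(int), so a float argument is converted to int before the absolute value is taken, while fabsf keeps it in floating point. A standalone illustration (the explicit cast mimics what the implicit conversion does):

```cpp
// Standalone illustration of the pitfall fixed above: truncation to int first vs.
// a proper floating-point absolute value (fabsf in ggml's C code).
#include <cmath>
#include <cstdio>
#include <cstdlib>

int main() {
    float x = -0.75f;
    printf("abs-style (int) truncation: %d\n", std::abs((int) x)); // 0
    printf("fabs/fabsf keeps the float: %f\n", std::fabs(x));      // 0.750000
    return 0;
}
```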
|
|
|
|
* Better perplexity for 2- and 3-bit quantization for the 70B model
* PR comment
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
|
|
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
|
|
Problem
-------
`nix build` fails with missing `Accelerate.h`.
Changes
-------
- Fix build of the llama.cpp with nix for Intel: add the same SDK frameworks as
for ARM
- Add `quantize` app to the output of nix flake
- Extend nix devShell with llama-python so we can use convertScript
Testing
-------
Testing the steps with nix:
1. `nix build`
Get the model and then
2. `nix develop` and then `python convert.py models/llama-2-7b.ggmlv3.q4_0.bin`
3. `nix run llama.cpp#quantize -- open_llama_7b/ggml-model-f16.gguf ./models/ggml-model-q4_0.bin 2`
4. `nix run llama.cpp#llama -- -m models/ggml-model-q4_0.bin -p "What is nix?" -n 400 --temp 0.8 -e -t 8`
Co-authored-by: Volodymyr Vitvitskyi <volodymyrvitvitskyi@SamsungPro.local>
|
|
|
|
* llama.cpp : fix spm whitespace escaping + clean up
* main.cpp : spm - add whitespace in front of prompt
* test-tokenizer-0.cpp : spm - add whitespace in front of prompt
|
|
|
|
|
|
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
|
|
* Add llama_beam_search().
* Add '// Beam search' heading to llama.{h,cpp} after llama_grammar_accept_token().
* Add space around * pointers and & references.
* Add spaces around comparison and assignment operators.
* Prefer west const.
* Use llama_ prefix for structs in global namespace.
* Delete obsolete comment from an earlier revision.
* Change eos to eob in llama_beam and llama_beam_view structs.
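A hedged sketch of how the new entry point is typically driven: a callback receives the current beams and can inspect each llama_beam_view, including the renamed eob flag. The callback and struct field names below are reconstructed from memory of this change; treat llama.h from this revision as the source of truth.

```cpp
// Hedged sketch of a beam-search callback; typedef and field names are assumptions
// to be checked against llama.h.
#include <cstdio>

#include "llama.h"

static void beam_cb(void * user_data, llama_beams_state state) {
    (void) user_data;
    for (size_t i = 0; i < state.n_beams; ++i) {
        const llama_beam_view & bv = state.beam_views[i];
        printf("beam %zu: p=%.4f, n_tokens=%zu, eob=%d\n", i, bv.p, bv.n_tokens, (int) bv.eob);
    }
}

// Usage, given an initialized llama_context * ctx:
//   llama_beam_search(ctx, beam_cb, /*callback_data=*/nullptr,
//                     /*n_beams=*/4, n_past, n_predict, n_threads);
```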
|
|
* Get rope scale from HF models
* Save rope scale only for linear scaling
* Rewrite for clarity
|
|
* llama-bench : add model sizes
* more compact markdown output
* back to GiB
* adjust column sizes
|
|
model (#2773)
|
|
* server : add n_probs param in chat UI
* server : keep message data array & show it in the probabilities component
* server : add simple popover component
* server : fix completion_probabilities being undefined when n_probs is not set
* server : implement Probabilities component
* server : handle bytes
* server : cap n_probs at 10 for easy scrolling
* server : adjust for dark/light mode
* server : fix regenerated prompt
* server : update index.html.hpp
* server : convert prob to percentage + show original value as div title
* server : fix Probabilities not being used when an empty string is included
* server : skip byte pairs when displaying probabilities
* server : remove array check of completion_probabilities in messages
* skip empty arrays or byte pairs (> 1) in Probabilities
* generate index.html.hpp
* fix incorrect prob conversion when the string is already a known token
* use final response to show probabilities on stop
* revert unnecessary change
* correct probabilities usage
* remove unused function
* always send partial responses to get correct probs for the last to_send
* fix typo
* fix content of format_final_response
* refactor probs rendering & make pColor transparent if not found
* send empty string when stop_pos is found in a partial response
* avoid unnecessary empty data events & send the rest of the partial tokens on stop
* use <br /> for new lines
* skip the -1 token in the loop to avoid sending '' at the end
* trim trailing new lines on stop
* revert unnecessary change
|
|
ggml-ci
|
|
* gguf export more objects to user code
* gguf export all objects to user code for now
* gguf : bump version
|
|
* use hipblas based on cublas
* Update Makefile for the Cuda kernels
* Expand arch list and make it overrideable
* Fix multi-GPU on multiple AMD architectures with rocblas_initialize() (#5)
* add hipBLAS to README
* new build arg LLAMA_CUDA_MMQ_Y
* fix half2 decomposition
* Add intrinsics polyfills for AMD
* AMD assembly optimized __dp4a
* Allow overriding CC_TURING
* use "ROCm" instead of "CUDA"
* ignore all build dirs
* Add Dockerfiles
* fix llama-bench
* fix -nommq help for non CUDA/HIP
---------
Co-authored-by: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Co-authored-by: ardfork <134447697+ardfork@users.noreply.github.com>
Co-authored-by: funnbot <22226942+funnbot@users.noreply.github.com>
Co-authored-by: Engininja2 <139037756+Engininja2@users.noreply.github.com>
Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
Co-authored-by: jammm <2500920+jammm@users.noreply.github.com>
Co-authored-by: jdecourval <7315817+jdecourval@users.noreply.github.com>
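For context on the "__dp4a" bullet: dp4a is a packed 4x int8 dot product with accumulate, and on GPUs without the native instruction it can be emulated per byte. The sketch below only illustrates the semantics; it is not the repository's AMD assembly or intrinsic implementation.

```cpp
// Reference semantics of dp4a(a, b, c): treat a and b as four signed 8-bit lanes,
// multiply lane-wise, and add the sum to c. Illustration only.
#include <cstdint>
#include <cstdio>
#include <cstring>

static int32_t dp4a_reference(int32_t a, int32_t b, int32_t c) {
    int8_t a8[4], b8[4];
    std::memcpy(a8, &a, 4);
    std::memcpy(b8, &b, 4);
    for (int i = 0; i < 4; ++i) {
        c += (int32_t) a8[i] * (int32_t) b8[i];
    }
    return c;
}

int main() {
    // pack {1, 2, 3, 4} and {5, 6, 7, 8}: 1*5 + 2*6 + 3*7 + 4*8 = 70
    const int8_t av[4] = {1, 2, 3, 4}, bv[4] = {5, 6, 7, 8};
    int32_t a, b;
    std::memcpy(&a, av, 4);
    std::memcpy(&b, bv, 4);
    std::printf("%d\n", dp4a_reference(a, b, 0));   // 70
    return 0;
}
```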
|
|
* cuda : add RoPE kernel for mode == 2 (NeoX)
* falcon : do not offload the embeddings layer
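For orientation, a reference sketch (not the CUDA kernel) of the variant the new kernel handles: standard RoPE rotates adjacent element pairs, while the NeoX-style rotation used by Falcon pairs each element with the one half a head-dimension away.

```cpp
// Reference sketch of NeoX-style RoPE for one head of size n_dims at position pos.
// Pairs are (x[i], x[i + n_dims/2]) rather than (x[2i], x[2i+1]). Illustration only.
#include <cmath>
#include <cstdio>
#include <vector>

static void rope_neox_reference(std::vector<float> & x, int n_dims, int pos, float freq_base = 10000.0f) {
    const int half = n_dims / 2;
    for (int i = 0; i < half; ++i) {
        const float theta = pos * std::pow(freq_base, -2.0f * i / n_dims);
        const float c = std::cos(theta);
        const float s = std::sin(theta);
        const float x0 = x[i];
        const float x1 = x[i + half];
        x[i]        = x0 * c - x1 * s;
        x[i + half] = x0 * s + x1 * c;
    }
}

int main() {
    std::vector<float> head(64, 1.0f);
    rope_neox_reference(head, (int) head.size(), /*pos=*/3);
    std::printf("head[0]=%f head[32]=%f\n", head[0], head[32]);
    return 0;
}
```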
|
|
* gitignore : add dist and rm pyproject.toml
* gguf: prepare as Pip package
* gguf: prepare as Pip package
* gguf : fix line endings
* requirements : add gguf
* gguf : update readme with build notes
* gguf : update readme with build notes
* gguf : add notes for tests
|
|
Since we also store barriers in this array, we need to double its size.
|
|
|
|
|
|
|
|
|
|
* metal: better memory alloc w/ concurrency dispatch
The ggml-alloc should only free tensors at memory barriers.
* ggml-alloc: avoid returning silently
In certain cases, the allocate_node() function may silently return
without performing any memory allocation.
|
|
|
|
* Fix for main example getting stuck when -n -2 is used together with --interactive
* Add a comment so future generations may suffer less.
|
|
(#2768)
|
|
* Modified build.yml to use build number for release
* Add the short hash back into the tag
* Prefix the build number with b
|
|
* metal : add dequantize_q8_0 kernel
* metal : add mul_mat_q8_0_f32 kernel
* metal : add Q8_0 mul_mm kernel
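For context on what these kernels consume: a Q8_0 block is one scale plus 32 signed 8-bit quants, and dequantization is simply y = d * q. A CPU-side sketch follows, using a plain float for the scale instead of ggml's fp16 type to keep it standalone.

```cpp
// Q8_0 sketch: one scale plus 32 int8 quants per block; dequantize as y[i] = d * qs[i].
#include <cstdint>
#include <cstdio>

#define QK8_0 32

struct block_q8_0_sketch {
    float  d;           // scale (stored as fp16 in ggml)
    int8_t qs[QK8_0];   // quantized values
};

// Dequantize k values (k must be a multiple of QK8_0).
static void dequantize_row_q8_0_sketch(const block_q8_0_sketch * x, float * y, int k) {
    for (int i = 0; i < k / QK8_0; ++i) {
        for (int j = 0; j < QK8_0; ++j) {
            y[i * QK8_0 + j] = x[i].d * x[i].qs[j];
        }
    }
}

int main() {
    block_q8_0_sketch blk = { 0.05f, {} };
    for (int j = 0; j < QK8_0; ++j) blk.qs[j] = (int8_t) (j - 16);
    float y[QK8_0];
    dequantize_row_q8_0_sketch(&blk, y, QK8_0);
    std::printf("y[0] = %f, y[31] = %f\n", y[0], y[31]);   // -0.80, 0.75
    return 0;
}
```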
|
|
|
|
|
|
|
|
|
|
|
|
* llama : refactor GGUF constants into static maps
* llama : check if model architecture is known
* llama : refactor llama_model_load_internal()
* gguf : add KV constant maps
* llm : read arch-specific KVs
* convert : add dummy scores + types
* falcon : load tensor data (CPU only)
* llama : fix loading progress bar
* llama : add arch member to llama_model
* falcon : CPU inference working
* falcon : support non-40B models
* falcon : minor
* llama : minor updates
ggml-ci
* convert-falcon-hf-to-gguf.py : fix special token mapping
* llama.cpp : llama default UNK token = id 0
* llama.cpp : fix bpe tokenizer
* llama.cpp : fix the fix of bpe tokenizer
* ggml : pass eps to ggml_norm
* metal : implement RoPE (mode = 2) + avoid ggml_repeat
* ggml : ggml_repeat always creates new tensor
* falcon : copy-paste self-attention from LLaMA
* metal : print extra compute pipeline info
* falcon : minor changes (still chasing the Metal problem)
* llama.cpp : fix linefeed token
* metal : fix GELU kernel numerical stability by using precise::tanh
* metal : temporary workaround for the concurrency optimization bug
* falcon : add CUDA offloading (#2739)
* llama : better model naming and size reporting
* llama : prep new tokenizer support
* llama : advanced BPE tokenizer based on ggllm.cpp implementation
* llama : remove obsolete comment
ggml-ci
* common : remove obsolete BPE API + disable test-tokenizer-1
* llama : revert BPE special-case in llama_byte_to_token()
* cuda : add TODOs for RoPE NeoX implementation
* llama : default special tokens based on vocab type
* perplexity : add log for start of tokenization
---------
Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
Co-authored-by: slaren <slarengh@gmail.com>
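One bullet above makes the normalization epsilon an explicit argument. A hedged fragment of what a call site looks like after the change; tensor and variable names are illustrative, and the exact signature should be checked against ggml.h from this revision.

```cpp
// Fragment: eps is now passed at the call site instead of being a constant inside ggml.
const float norm_eps = 1e-5f;                              // e.g. read from the model hparams
struct ggml_tensor * cur = ggml_norm(ctx0, inpL, norm_eps);
```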
|
|
|
|
* Fix import of llama2.c models that don't share weights between embedding layers
* llama2c: reinstate ggmlv3 conversion output + update readme w/ gguf conv
* llama2.c: comment out legacy "load from ggml model" logic
* llama2.c: convert special-cased "<0xXX>" single byte tokens from tokenizer.bin
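For context on the last bullet: tokens spelled "<0xXX>" stand for a single raw byte rather than a printable piece, so the converter maps them back to that byte. A hypothetical helper illustrating the mapping (not code from the repository):

```cpp
// Hypothetical helper: parse a "<0xXX>" byte token into the byte it represents.
#include <cstdio>
#include <cstdlib>
#include <cstring>

static bool byte_token_to_byte(const char * tok, unsigned char * out) {
    if (std::strlen(tok) == 6 && std::strncmp(tok, "<0x", 3) == 0 && tok[5] == '>') {
        *out = (unsigned char) std::strtol(tok + 3, nullptr, 16);
        return true;
    }
    return false;
}

int main() {
    unsigned char b = 0;
    if (byte_token_to_byte("<0x0A>", &b)) {
        std::printf("byte = 0x%02X\n", b);   // 0x0A, i.e. '\n'
    }
    return 0;
}
```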
|
|
|
|
* main.cpp : insert bos if no tokens
* Update examples/main/main.cpp
* Update examples/main/main.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
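A hedged sketch of the guard described above, with names in the style of examples/main (embd_inp being the input-token vector): if tokenizing the prompt produced no tokens, seed the input with BOS so generation has something to start from.

```cpp
// Fragment (names follow examples/main/main.cpp): fall back to a lone BOS token
// when the prompt tokenized to nothing.
if (embd_inp.empty()) {
    embd_inp.push_back(llama_token_bos(ctx));
}
```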
|
|
|