ik_llama.cpp.git (branch: main)
path: root/examples/llava
Date        Commit message  [Author]

2024-02-18  llava : update surgery script to not remove tensors (#5536)  [Daniel Bevenius]
2024-02-16  llava : removed excess free(NULL) operation (#5531)  [Herman Semenov]
2024-02-16  ggml : add numa options (#5377)  [bmwl]
2024-02-16  llava : fix clip-model-is-vision flag in README.md (#5509)  [Daniel Bevenius]
2024-02-15  clip : fix wrong loop condition  [Georgi Gerganov]
2024-02-15  llava : fix memory management bug (#5491)  [Elbios]
2024-02-15  llava : hotfix for llava-1.6 image number (#5495)  [John]
2024-02-14  llava : update README.md (#5489)  [John]
2024-02-14  llava : support v1.6 (#5267)  [John]
2024-02-12  llava : remove prog parameter from ArgumentParser (#5457)  [Daniel Bevenius]
2024-02-12  sync : ggml (#5452)  [Georgi Gerganov]
2024-02-09  llava : add requirements.txt and update README.md (#5428)  [Daniel Bevenius]
2024-02-08  llava : add missing .py, and fix paths in README.md (#5414)  [Daniel Bevenius]
2024-02-08  llava: fix typo/formatting in README.md (#5405)  [Daniel Bevenius]
2024-02-07  llava-cli : always tokenize special tokens (#5382)  [Xiao-Yong Jin]
2024-01-31  llava : add MobileVLM support (#5132)  [JidongZhang-THU]
2024-01-27  llava : support for Yi-VL and fix for mobileVLM (#5093)  [John]
2024-01-27  Remove unused data and add fixes (#5154)  [Michael Klimenko]
2024-01-23  minor : clean-up some warnings and style (#5094)  [Georgi Gerganov]
2024-01-22  llava : MobileVLM support (#4954)  [XiaotaoChen]
2024-01-10  clip : support more quantization types (#4846)  [John]
2024-01-09  llava-cli : don't crash if --image flag is invalid (#4835)  [Justine Tunney]
2023-12-30  clip : refactor + bug fixes (#4696)  [Georgi Gerganov]
2023-12-29  clip : use ggml_backend_buffer_is_host (#4205)  [Georgi Gerganov]
2023-12-29  clip : enable gpu backend (#4205)  [Steward Garcia]
2023-12-29  cmake : fix ld warning duplicate libraries libllama.a (#4671)  [Cuong Trinh Manh]
2023-12-29  llava-cli : refactor to use sampling library (#4669)  [Justine Tunney]
2023-12-21  ggml : change ggml_scale to take a float instead of tensor (#4573)  [Georgi Gerganov]
2023-12-14  ggml : remove n_dims from ggml_tensor (#4469)  [slaren]
2023-12-12  english : use `typos` to fix comments and logs (#4354)  [Richard Kiss]
2023-11-30  llava : ShareGPT4V compatibility (vision encoder only loading) (#4172)  [John]
2023-11-17  llava : fix compilation warning that fread return value is not used (#4069)  [Huawei Lin]
2023-11-16  Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)  [Kerfuffle]
2023-11-13  llava : fix regression for square images in #3613 (#4056)  [M. Yusuf Sarıgöz]
2023-11-13  sync : ggml (backend v2) (#3912)  [Georgi Gerganov]
2023-11-07  Use params when loading models in llava-cli (#3976)  [Matthew Tejo]
2023-11-07  llava : expose as a shared library for downstream projects (#3613)  [Damian Stewart]
2023-11-02  build : link against build info instead of compiling against it (#3879)  [cebtenzzre]
2023-10-23  llama : remove token functions with `context` args in favor of `model` (#3720)  [Marcus Dunn]
2023-10-22  server : parallel decoding and multimodal (#3677)  [Georgi Gerganov]
2023-10-20  sampling : refactor init to use llama_sampling_params (#3696)  [Georgi Gerganov]
2023-10-19  multimodal : add BakLLaVA conversion support (#3682)  [M. Yusuf Sarıgöz]
2023-10-19  llava : avoid segfault in case of non-existent mmproj file (#3674)  [M. Yusuf Sarıgöz]
2023-10-18  speculative : add tree-based sampling example (#3624)  [Georgi Gerganov]
2023-10-16  llava : fix tokenization to not add bos between image embeddings and user pro...  [Georgi Gerganov]
2023-10-14  Honor -ngl option for Cuda offloading in llava (#3621)  [M. Yusuf Sarıgöz]
2023-10-12  examples: support LLaVA v1.5 (multimodal model) (#3436)  [M. Yusuf Sarıgöz]