ik_llama.cpp.git: commit log for ggml_vk_generate_shaders.py (branch: main)
Age         Commit message                                                          Author
2024-07-27  Merge mainline llama.cpp (#3)                                           Kawrakow
2024-06-16  Vulkan Shader Refactor, Memory Debugging Option (#7947)                 0cc4m
2024-06-11  Update Vulkan RoPE implementation (#7818)                               0cc4m
2024-06-03  Vulkan Mixture of Experts (MoE) support (#7628)                         0cc4m
2024-05-29  ggml : fix YARN + add tests + add asserts (#7617)                       Georgi Gerganov
2024-05-23  Update vulkan rope implementation to support frequency factors (#7475)  0cc4m
2024-05-19  Vulkan Embedding Fix (#7360)                                            0cc4m
2024-05-18  Update and fix Vulkan soft_max and argsort implementations (#7237)      0cc4m
2024-05-09  Vulkan Bugfixes and Improvements (#7084)                                0cc4m
2024-05-03  convert.py : add python logging instead of print() (#6511)              Brian
2024-03-29  Vulkan k-quant mmq and ggml-backend offload functionality (#6155)       0cc4m
2024-03-05  Vulkan Improvements (#5835)                                             0cc4m
2024-02-09  vulkan: Set limit for task concurrency (#5427)                          Neuman Vong
2024-02-03  Vulkan Intel Fixes, Optimizations and Debugging Flags (#5301)           0cc4m
2024-02-01  Vulkan Phi Fix for AMD Proprietary Drivers (#5260)                      0cc4m
2024-01-31  Vulkan Fixes (#5223)                                                    0cc4m
2024-01-28  ggml : add Vulkan backend (#2059)                                       0cc4m