Commit log for path: root/src

Age         Author         Commit message
2025-04-01  Kawrakow       Additional guards for interleaved quants (#299)
2025-03-27  Kawrakow       Make sure tensor row size is multiple of block size also when quantizing with...
2025-03-23  Kawrakow       Improve DeepSeek batched processing speed (#282)
2025-03-23  Kawrakow       Test transparent huge pages on Linux (#278)
2025-03-22  Kawrakow       Add Gemma3 support (text only) (#276)
2025-03-21  Kawrakow       Fix bug: missing parentheses in logical expression (#275)
2025-03-21  Kawrakow       Specify tensor name regex for tensors to be repacked (#274)
2025-03-21  Kawrakow       FlashMLA-3: the best of both worlds (CPU only) (#273)
2025-03-21  Kawrakow       Convert models to row-interleaved quants using the quantize tool (#272)
2025-03-19  Kawrakow       Honor mmap setting when using tensor overrides (#270)
2025-03-18  Kawrakow       Make Q8_0 KV cache work with mla=2,fa on CUDA (#264)
2025-03-18  Kawrakow       FlashMLA-2: reduce compute buffer size (CUDA and CPU) (#260)
2025-03-17  Kawrakow       Prepare wk_b tensors of DeepSeek models on the fly (#259)
2025-03-13  Kawrakow       FlashMLA-2 (CPU): faster and smaller compute buffer size (#253)
2025-03-10  Kawrakow       DeepSeek imatrix stuff (#250)
2025-03-10  Kawrakow       Faster MoE token generation on CUDA (#248)
2025-03-09  Kawrakow       This works on CUDA, but (#247)
2025-03-08  Kawrakow       Faster FlashMLA prompt processing (#246)
2025-03-07  Kawrakow       Custom quantization rules with regular expressions (#244)
2025-03-05  Kawrakow       DeepSeek CUDA Flash Attention (#241)
2025-03-03  Kawrakow       Flash MLA (CPU only) (#240)
2025-03-02  Kawrakow       SER - Smart Expert Reduction (#239)
2025-03-01  Kawrakow       Reduce size of compute buffers (#237)
2025-02-27  Kawrakow       Option to use MLA without a transposed cache (#235)
2025-02-27  Kawrakow       Faster MLA on CUDA (#234)
2025-02-25  Kawrakow       Give the user the option to override where model weights are stored (#232)
2025-02-23  Kawrakow       Fused MoE ffn_up and ffn_gate (#229)
2025-02-20  Iwan Kawrakow  Honor attn_output specified in the command line also for low-bit quants
2025-02-19  Kawrakow       Q8_KV: 8-bit quantization type targeting the KV cache (#208)
2025-02-13  Kawrakow       MLA: allow Q8_0 K-cache for MLA (#206)
2025-02-13  Kawrakow       Faster MLA prompt processing (#205)
2025-02-11  Kawrakow       DeepSeek FA support (CPU only) (#200)
2025-02-10  saood06        Load all MoE experts during warmup and make warmup 1 token (#198)
2025-02-09  Kawrakow       Add optional MLA (#188)
2025-02-07  Kawrakow       cuda: non-contiguous rms norm (#190)
2025-02-06  Kawrakow       Rename q4_0_r4, q8_0_r4 and iq4_xs_r4 to _r8 (#189)
2025-02-06  Kawrakow       IQ1_M_R4: better 1.75 bpw quants (#187)
2025-02-05  Kawrakow       IQ1_S_R4: better 1.5 bpw quants (#185)
2025-01-30  Kawrakow       Deepseek-Lite (#184)
2025-01-27  Kawrakow       Minor performance improvements (#179)
2025-01-27  Kawrakow       Interleave 8 rows (Q8_0, IQ4_XS) (#178)
2025-01-24  Kawrakow       Update chat templates (#177)
2025-01-23  saood06        Deepseek V3 support added (#176)
2025-01-23  Iwan Kawrakow  Add Deepseek-R1-Distill pre-tokenizer
2025-01-10  Kawrakow       Be able to re-quantize MS BitNet I2_S models (#169)
2025-01-10  Kawrakow       Falcon3 changes (#168)
2024-12-23  Kawrakow       IQ3_S_R4 (#162)
2024-12-21  Kawrakow       IQ2_S_R4 (#156)
2024-12-21  Kawrakow       IQ2_XS_R4 (#155)
2024-12-20  Kawrakow       IQ2_XXS_R4 (#154)