path: root/examples/server/json.hpp
author: Kawrakow <48489457+ikawrakow@users.noreply.github.com> 2024-01-14 10:53:39 +0200
committer: GitHub <noreply@github.com> 2024-01-14 10:53:39 +0200
commit: a128c38de862431f1aae9ccc40b792fbc1b8b682 (patch)
tree: 2946ef20e083b883c325fed2bc0a11d1ca84166d /examples/server/json.hpp
parent: 5f5fe1bd608fa2ed42af97b5f2ea31be6625fc48 (diff)
Fix ffn_down quantization mix for MoE models (#4927)
* Fix ffn_down quantization mix for MoE models

  In #4872 I did not consider the part where every third tensor is quantized with more bits. For MoE this leads to tensors of the same layer being quantized with a different number of bits, which is not handled by the inference implementation (it assumes all experts use the same quantization).

* Fix the fix

* Review suggestion

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
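
To make the described problem concrete, here is a minimal, self-contained C++ sketch (not taken from the patch itself) of the idea: derive the "use more bits" decision for ffn_down from the layer index rather than from a running per-tensor counter, so every expert in a MoE layer ends up with the same quantization type. The function and type names, and the specific Q2_K/Q3_K pairing, are illustrative assumptions.

```cpp
#include <cstdio>

enum class QType { Q2_K, Q3_K };   // placeholder stand-ins for ggml quant types

// With a per-tensor counter and e.g. 8 experts per layer, an
// "every third tensor gets more bits" rule would split experts of the same
// layer across Q2_K and Q3_K. Keying the decision on the layer index keeps
// all experts of one layer consistent.
QType pick_ffn_down_type(int layer_index, int n_layers) {
    const bool use_more_bits = (layer_index % 3 == 2) || (layer_index == n_layers - 1);
    return use_more_bits ? QType::Q3_K : QType::Q2_K;
}

int main() {
    const int n_layers = 6, n_experts = 4;
    for (int il = 0; il < n_layers; ++il) {
        for (int ex = 0; ex < n_experts; ++ex) {
            // every expert in layer `il` receives the same type
            std::printf("layer %d expert %d -> %s\n", il, ex,
                        pick_ffn_down_type(il, n_layers) == QType::Q3_K ? "Q3_K" : "Q2_K");
        }
    }
    return 0;
}
```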
Diffstat (limited to 'examples/server/json.hpp')
0 files changed, 0 insertions, 0 deletions