From 04976db7a819fcf8bfefbfc09a3344210b79dd27 Mon Sep 17 00:00:00 2001
From: omahs <73983677+omahs@users.noreply.github.com>
Date: Tue, 7 May 2024 17:20:33 +0200
Subject: docs: fix typos (#7124)

* fix typo

* fix typos

* fix typo

* fix typos

* fix typo

* fix typos
---
 docs/BLIS.md            | 2 +-
 docs/HOWTO-add-model.md | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

(limited to 'docs')

diff --git a/docs/BLIS.md b/docs/BLIS.md
index 0bcd6eee..c933766b 100644
--- a/docs/BLIS.md
+++ b/docs/BLIS.md
@@ -23,7 +23,7 @@ Install BLIS:
 sudo make install
 ```

-We recommend using openmp since it's easier to modify the cores been used.
+We recommend using openmp since it's easier to modify the cores being used.

 ### llama.cpp compilation

diff --git a/docs/HOWTO-add-model.md b/docs/HOWTO-add-model.md
index a56b7834..48769cdf 100644
--- a/docs/HOWTO-add-model.md
+++ b/docs/HOWTO-add-model.md
@@ -96,9 +96,9 @@ NOTE: The dimensions in `ggml` are typically in the reverse order of the `pytorc

 This is the funniest part, you have to provide the inference graph implementation of the new model architecture in `llama_build_graph`.

-Have a look to existing implementation like `build_llama`, `build_dbrx` or `build_bert`.
+Have a look at existing implementation like `build_llama`, `build_dbrx` or `build_bert`.

-When implementing a new graph, please note that the underlying `ggml` backends might not support them all, support of missing backend operations can be added in another PR.
+When implementing a new graph, please note that the underlying `ggml` backends might not support them all, support for missing backend operations can be added in another PR.

 Note: to debug the inference graph: you can use [eval-callback](../examples/eval-callback).

--
cgit v1.2.3