author | Ziang Wu <97337387+ZiangWu-77@users.noreply.github.com> | 2024-03-20 23:29:51 +0800
---|---|---
committer | GitHub <noreply@github.com> | 2024-03-20 17:29:51 +0200
commit | f9c7ba34476ffc4f13ae2cdb1aec493a16eb8d47 (patch) |
tree | e25c0b27a5fd545d5a36377975f88e462c40f8fa |
parent | 272935b281fee5c683e3d6d1eb580b84553cf503 (diff) |
llava : update MobileVLM-README.md (#6180)
-rw-r--r-- | examples/llava/MobileVLM-README.md | 2 |
1 file changed, 1 insertion, 1 deletion
diff --git a/examples/llava/MobileVLM-README.md b/examples/llava/MobileVLM-README.md
index c1f361d1..4d5fef02 100644
--- a/examples/llava/MobileVLM-README.md
+++ b/examples/llava/MobileVLM-README.md
@@ -6,7 +6,7 @@ for more information, please go to [Meituan-AutoML/MobileVLM](https://github.com
 
 The implementation is based on llava, and is compatible with llava and mobileVLM. The usage is basically same as llava.
 
-Notice: The overall process of model inference for both **MobilVLM** and **MobilVLM_V2** models is the same, but the process of model conversion is a little different. Therefore, using MobiVLM as an example, the different conversion step will be shown.
+Notice: The overall process of model inference for both **MobileVLM** and **MobileVLM_V2** models is the same, but the process of model conversion is a little different. Therefore, using MobiVLM as an example, the different conversion step will be shown.
 
 ## Usage
 Build with cmake or run `make llava-cli` to build it.
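
For context, the build step referenced in the patched README ("Build with cmake or run `make llava-cli` to build it.") can be carried out roughly as follows. This is a minimal sketch assuming a llama.cpp checkout around this commit; the CMake target name `llava-cli` is an assumption inferred from the Makefile target shown above, not verified against this exact revision.

```sh
# Minimal sketch: building the llava-cli example from a llama.cpp checkout
# (assumes a checkout near commit f9c7ba34; targets/paths not verified here).

# Option 1: Makefile build, as referenced in the README text above
make llava-cli

# Option 2: CMake build (the `llava-cli` target name is assumed to match the Makefile target)
cmake -B build
cmake --build build --config Release --target llava-cli
```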