author    | JohnnyB <jboero@users.noreply.github.com> | 2023-08-23 15:28:22 +0100
committer | GitHub <noreply@github.com>               | 2023-08-23 17:28:22 +0300
commit    | f19dca04ea5fbf9a0b2753091d93464585d5c73b (patch)
tree      | 2f43a60fe664df1aa51584c1c14b9d3f59ca988c
parent    | 8207214b6a37a46526cee9e72d4c9092b9d1872f (diff)
devops : RPM Specs (#2723)
* Create llama-cpp.srpm
* Rename llama-cpp.srpm to llama-cpp.srpm.spec
Correcting extension.
* Tested spec success.
* Update llama-cpp.srpm.spec
* Create lamma-cpp-cublas.srpm.spec
* Create lamma-cpp-clblast.srpm.spec
* Update lamma-cpp-cublas.srpm.spec
Added BuildRequires
* Moved to devops dir
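The message notes that the specs were tested; a quick pre-build sanity check on any of these files is rpmlint (a hedged sketch — rpmlint is not part of this commit, and the path assumes the repository root):

    # Check a spec file for common packaging mistakes (requires the rpmlint package)
    rpmlint .devops/llama-cpp.srpm.spec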
-rw-r--r-- | .devops/lamma-cpp-clblast.srpm.spec | 58
-rw-r--r-- | .devops/lamma-cpp-cublas.srpm.spec  | 59
-rw-r--r-- | .devops/llama-cpp.srpm.spec         | 58
3 files changed, 175 insertions, 0 deletions
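To turn one of these specs into an installable RPM, the standard Fedora workflow applies. A minimal sketch, assuming rpmdevtools is installed (rpmdev-setuptree and spectool ship with it); none of these commands are part of the commit itself:

    # Create the ~/rpmbuild tree, fetch Source0, then build binary and source RPMs
    rpmdev-setuptree
    spectool -g -R .devops/llama-cpp.srpm.spec   # downloads master.tar.gz into ~/rpmbuild/SOURCES
    rpmbuild -ba .devops/llama-cpp.srpm.spec

On success the CPU build installs its binaries as llamacpp, llamacppserver, and llamacppsimple under %{_bindir}, as listed in the %files section of the spec below.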
diff --git a/.devops/lamma-cpp-clblast.srpm.spec b/.devops/lamma-cpp-clblast.srpm.spec
new file mode 100644
index 00000000..739c6828
--- /dev/null
+++ b/.devops/lamma-cpp-clblast.srpm.spec
@@ -0,0 +1,58 @@
+# SRPM for building from source and packaging an RPM for RPM-based distros.
+# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
+# Built and maintained by John Boero - boeroboy@gmail.com
+# In honor of Seth Vidal https://www.redhat.com/it/blog/thank-you-seth-vidal
+
+# Notes for llama.cpp:
+# 1. Tags are currently based on hash - which will not sort asciibetically.
+#    We need to declare standard versioning if people want to sort latest releases.
+# 2. Builds for CUDA/OpenCL support are separate, with different dependencies.
+# 3. NVidia's developer repo must be enabled with nvcc, cublas, clblas, etc. installed.
+#    Example: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
+# 4. OpenCL/CLBlast support simply requires the ICD loader and basic OpenCL libraries.
+#    It is up to the user to install the correct vendor-specific support.
+
+Name: llama.cpp-clblast
+Version: master
+Release: 1%{?dist}
+Summary: OpenCL Inference of LLaMA model in pure C/C++
+License: MIT
+Source0: https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
+BuildRequires: coreutils make gcc-c++ git mesa-libOpenCL-devel
+URL: https://github.com/ggerganov/llama.cpp
+
+%define debug_package %{nil}
+%define source_date_epoch_from_changelog 0
+
+%description
+OpenCL (CLBlast) inference for Meta's LLaMA 2 models using default options.
+
+%prep
+%setup -n llama.cpp-master
+
+%build
+make -j LLAMA_CLBLAST=1
+
+%install
+mkdir -p %{buildroot}%{_bindir}/
+cp -p main %{buildroot}%{_bindir}/llamacppclblast
+cp -p server %{buildroot}%{_bindir}/llamacppclblastserver
+cp -p simple %{buildroot}%{_bindir}/llamacppclblastsimple
+
+%clean
+rm -rf %{buildroot}
+rm -rf %{_builddir}/*
+
+%files
+%{_bindir}/llamacppclblast
+%{_bindir}/llamacppclblastserver
+%{_bindir}/llamacppclblastsimple
+
+%pre
+
+%post
+
+%preun
+%postun
+
+%changelog
diff --git a/.devops/lamma-cpp-cublas.srpm.spec b/.devops/lamma-cpp-cublas.srpm.spec
new file mode 100644
index 00000000..75d32fbe
--- /dev/null
+++ b/.devops/lamma-cpp-cublas.srpm.spec
@@ -0,0 +1,59 @@
+# SRPM for building from source and packaging an RPM for RPM-based distros.
+# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
+# Built and maintained by John Boero - boeroboy@gmail.com
+# In honor of Seth Vidal https://www.redhat.com/it/blog/thank-you-seth-vidal
+
+# Notes for llama.cpp:
+# 1. Tags are currently based on hash - which will not sort asciibetically.
+#    We need to declare standard versioning if people want to sort latest releases.
+# 2. Builds for CUDA/OpenCL support are separate, with different dependencies.
+# 3. NVidia's developer repo must be enabled with nvcc, cublas, clblas, etc. installed.
+#    Example: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
+# 4. OpenCL/CLBlast support simply requires the ICD loader and basic OpenCL libraries.
+#    It is up to the user to install the correct vendor-specific support.
+
+Name: llama.cpp-cublas
+Version: master
+Release: 1%{?dist}
+Summary: CUDA Inference of LLaMA model in pure C/C++
+License: MIT
+Source0: https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
+BuildRequires: coreutils make gcc-c++ git cuda-toolkit
+Requires: cuda-toolkit
+URL: https://github.com/ggerganov/llama.cpp
+
+%define debug_package %{nil}
+%define source_date_epoch_from_changelog 0
+
+%description
+CUDA (cuBLAS) inference for Meta's LLaMA 2 models using default options.
+
+%prep
+%setup -n llama.cpp-master
+
+%build
+make -j LLAMA_CUBLAS=1
+
+%install
+mkdir -p %{buildroot}%{_bindir}/
+cp -p main %{buildroot}%{_bindir}/llamacppcublas
+cp -p server %{buildroot}%{_bindir}/llamacppcublasserver
+cp -p simple %{buildroot}%{_bindir}/llamacppcublassimple
+
+%clean
+rm -rf %{buildroot}
+rm -rf %{_builddir}/*
+
+%files
+%{_bindir}/llamacppcublas
+%{_bindir}/llamacppcublasserver
+%{_bindir}/llamacppcublassimple
+
+%pre
+
+%post
+
+%preun
+%postun
+
+%changelog
diff --git a/.devops/llama-cpp.srpm.spec b/.devops/llama-cpp.srpm.spec
new file mode 100644
index 00000000..c65251a5
--- /dev/null
+++ b/.devops/llama-cpp.srpm.spec
@@ -0,0 +1,58 @@
+# SRPM for building from source and packaging an RPM for RPM-based distros.
+# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
+# Built and maintained by John Boero - boeroboy@gmail.com
+# In honor of Seth Vidal https://www.redhat.com/it/blog/thank-you-seth-vidal
+
+# Notes for llama.cpp:
+# 1. Tags are currently based on hash - which will not sort asciibetically.
+#    We need to declare standard versioning if people want to sort latest releases.
+# 2. Builds for CUDA/OpenCL support are separate, with different dependencies.
+# 3. NVidia's developer repo must be enabled with nvcc, cublas, clblas, etc. installed.
+#    Example: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
+# 4. OpenCL/CLBlast support simply requires the ICD loader and basic OpenCL libraries.
+#    It is up to the user to install the correct vendor-specific support.
+
+Name: llama.cpp
+Version: master
+Release: 1%{?dist}
+Summary: CPU Inference of LLaMA model in pure C/C++ (no CUDA/OpenCL)
+License: MIT
+Source0: https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
+BuildRequires: coreutils make gcc-c++ git
+URL: https://github.com/ggerganov/llama.cpp
+
+%define debug_package %{nil}
+%define source_date_epoch_from_changelog 0
+
+%description
+CPU inference for Meta's LLaMA 2 models using default options.
+
+%prep
+%autosetup
+
+%build
+make -j
+
+%install
+mkdir -p %{buildroot}%{_bindir}/
+cp -p main %{buildroot}%{_bindir}/llamacpp
+cp -p server %{buildroot}%{_bindir}/llamacppserver
+cp -p simple %{buildroot}%{_bindir}/llamacppsimple
+
+%clean
+rm -rf %{buildroot}
+rm -rf %{_builddir}/*
+
+%files
+%{_bindir}/llamacpp
+%{_bindir}/llamacppserver
+%{_bindir}/llamacppsimple
+
+%pre
+
+%post
+
+%preun
+%postun
+
+%changelog
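Per note 3 in the spec headers, the cublas variant only builds once NVIDIA's developer repository is enabled and cuda-toolkit is available. On Fedora 37 that could look like the following sketch (assumes dnf-plugins-core for config-manager; the repo URL is the example given in the spec comments):

    # Enable NVIDIA's CUDA repo, install the toolkit, then build the CUDA spec
    sudo dnf config-manager --add-repo \
        https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
    sudo dnf install -y cuda-toolkit
    rpmbuild -ba .devops/lamma-cpp-cublas.srpm.spec

The CLBlast variant needs no vendor repo: mesa-libOpenCL-devel in its BuildRequires covers the build, and per note 4 the user supplies the vendor-specific ICD at runtime.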