
Fix cuda compilation #1128

Merged
slaren merged 3 commits into ggml-org:master from slaren:cuda-mfix
Apr 24, 2023

Conversation

@slaren
Member

@slaren slaren commented Apr 22, 2023

Continued from #1127. The `-Wno-pedantic` flag is necessary to avoid the warning `style of line directive is a GCC extension` being emitted for every single line in the compilation.

B1gM8c and others added 3 commits April 22, 2023 16:22
Fix: Issue with CUBLAS compilation error due to missing `-fPIC` flag
Fix: Issue with CUBLAS compilation error
@slaren slaren merged commit e4cf982 into ggml-org:master Apr 24, 2023
@slaren slaren deleted the cuda-mfix branch April 24, 2023 15:30
@hlhr202
Contributor

hlhr202 commented May 2, 2023

Hi @slaren, is there any workaround for the CMake cuBLAS -fPIC issue?
I'm compiling through Rust and got this error:
relocation R_X86_64_PC32 against symbol `stderr@@GLIBC_2.2.5' can not be used when making a shared object; recompile with -fPIC

@Green-Sky
Collaborator

@hlhr202 did you set LLAMA_SHARED?

@hlhr202
Contributor

hlhr202 commented May 2, 2023

LLAMA_SHARED

@Green-Sky this results in another error:

/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/11/crtbeginT.o: relocation R_X86_64_32 against hidden symbol `__TMC_END__' can not be used when making a shared object
/usr/bin/ld: failed to set dynamic section sizes: bad value
collect2: error: ld returned 1 exit status
gmake[2]: *** [CMakeFiles/llama.dir/build.make:106: libllama.so] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:111: CMakeFiles/llama.dir/all] Error 2
gmake: *** [Makefile:91: all] Error 2
thread 'main' panicked at 'Failed to build lib', packages/llama-cpp/llama-sys/build.rs:106:9

@Green-Sky
Collaborator

What if you have BUILD_SHARED_LIBS ON and LLAMA_STATIC OFF?

@hlhr202
Contributor

hlhr202 commented May 2, 2023

What if you have BUILD_SHARED_LIBS ON and LLAMA_STATIC OFF?

I may try dynamic linking later, but since I am statically linking llama into a Rust program, I currently can't turn off LLAMA_STATIC...

@hlhr202
Contributor

hlhr202 commented May 2, 2023

@Green-Sky Thanks for your suggestion. I've tried the shared lib and it works properly. Still, if there were a way to link statically, it would be convenient for distributing downstream applications.
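For reference, the shared-library route suggested above could be driven from a Rust build script roughly like this. This is only a sketch, not code from this PR: the `shared_build_args` helper is hypothetical, and only `BUILD_SHARED_LIBS`, `LLAMA_STATIC`, and `LLAMA_CUBLAS` are actual CMake options mentioned in or contemporary with this thread.

```rust
use std::process::Command;

/// Hypothetical helper: CMake cache settings for a *shared* llama.cpp build,
/// matching Green-Sky's suggestion (BUILD_SHARED_LIBS ON, LLAMA_STATIC OFF).
fn shared_build_args() -> Vec<&'static str> {
    vec![
        "-DBUILD_SHARED_LIBS=ON", // build libllama.so instead of libllama.a
        "-DLLAMA_STATIC=OFF",     // don't force static linking
        "-DLLAMA_CUBLAS=ON",      // keep the cuBLAS backend enabled
    ]
}

fn main() {
    // Construct (but don't spawn here) the configure command a build.rs
    // would run from its CMake build directory.
    let mut cmake = Command::new("cmake");
    cmake.args(shared_build_args()).arg("..");
    println!("{:?}", cmake);
}
```

The resulting `libllama.so` can then be loaded from the Rust side with `cargo:rustc-link-lib=dylib=llama`, at the cost of shipping the shared object alongside the application.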

@tchereau

tchereau commented May 3, 2023

Hi @slaren is there any workaround for cmake cublas fpic? I m compiling through rust and got such error: relocation R_X86_64_PC32 against symbol `stderr@@GLIBC_2.2.5' can not be used when making a shared object; recompile with -fPIC

Adding the following at line 65 (of my build script):
.arg("-DCMAKE_POSITION_INDEPENDENT_CODE=ON");
works for me.
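A minimal sketch of that static-linking route in a build.rs, assuming the same setup as above: the `static_pic_args` helper is hypothetical, but `CMAKE_POSITION_INDEPENDENT_CODE` is a standard CMake variable that sets `-fPIC` on every target, which avoids the `R_X86_64_PC32` relocation error when the static archive is later linked into a shared object.

```rust
use std::process::Command;

/// Hypothetical helper: keep LLAMA_STATIC, but compile everything as
/// position-independent code so the .a archives can be linked into a
/// Rust cdylib or PIE binary.
fn static_pic_args() -> Vec<&'static str> {
    vec![
        "-DLLAMA_STATIC=ON",                    // still produce static archives
        "-DCMAKE_POSITION_INDEPENDENT_CODE=ON", // adds -fPIC to all targets
    ]
}

fn main() {
    // As above: build the configure command a build.rs would spawn.
    let mut cmake = Command::new("cmake");
    cmake.args(static_pic_args()).arg("..");
    println!("{:?}", cmake);
}
```

This keeps the single-binary distribution hlhr202 asked for, since no `libllama.so` needs to ship with the application.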

Seunghhon pushed a commit to Seunghhon/llama.cpp that referenced this pull request Apr 26, 2026
* Fix: Issue with CUBLAS compilation error due to missing -fPIC flag

---------

Co-authored-by: B1gM8c <89020353+B1gM8c@users.noreply.github.com>
phuongncn pushed a commit to phuongncn/llama.cpp-gx10-dgx-sparks-deepseekv4 that referenced this pull request Apr 28, 2026
* Fix: Issue with CUBLAS compilation error due to missing -fPIC flag

---------

Co-authored-by: B1gM8c <89020353+B1gM8c@users.noreply.github.com>
phuongncn pushed a commit to phuongncn/llama.cpp-gx10-dgx-sparks-deepseekv4 that referenced this pull request Apr 28, 2026
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
