NativeApi.llama_log_set allows intercepting log messages from llama.cpp. But even if you set the log callback as early as possible in your code, you will lose the first few messages, which are produced by the call to llama_backend_init here: LLamaSharp/LLama/Native/NativeApi.Load.cs, line 38 (commit fa73c8f).
For example, these messages aren't handled by the provided callback:
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce <...>
In order for NativeApi.llama_log_set to be executed, the static constructor of NativeApi must run. This static constructor loads the native library and also calls llama_backend_init, so there is no way to set a callback with NativeApi.llama_log_set before that happens.
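To illustrate the ordering problem, here is a minimal sketch (the callback signature is an assumption modeled on llama.cpp's `(level, message)` logging convention; it may not match LLamaSharp's actual delegate exactly):

```csharp
using System;
using LLama.Native;

class Program
{
    static void Main()
    {
        // This *looks* like the earliest possible moment to install a callback,
        // but merely referencing NativeApi triggers its static constructor,
        // which loads the native library and calls llama_backend_init.
        // Any messages emitted during that initialization are therefore
        // printed before our callback is registered.
        NativeApi.llama_log_set((level, message) =>
            Console.Error.Write($"[{level}] {message}"));
    }
}
```

In other words, there is no user-visible point in the program that runs after the native library is loaded but before llama_backend_init emits its first messages.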
There is also a related problem with logging, this time with LLamaSharp's own messages: the logs activated by NativeLibraryConfig.Instance.WithLogs() are always written to the console, and there is currently no way to redirect them elsewhere.
One possible solution is to add an overload of NativeLibraryConfig.WithLogs() that accepts an optional LLamaLogCallback and passes it to llama_log_set during loading, before any other llama.cpp function is called. The same callback instance could also be used to intercept LLamaSharp's own initialization log messages, or perhaps a dedicated instance should be passed alongside the one for llama.cpp.
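A rough sketch of what the proposed overload might look like (the member names, the `LLamaLogCallback` delegate shape, and the loading hook are all assumptions for illustration, not the library's actual API):

```csharp
using System;

// Assumed delegate shape, mirroring llama.cpp's log callback.
public delegate void LLamaLogCallback(int level, string message);

public sealed class NativeLibraryConfig
{
    public static NativeLibraryConfig Instance { get; } = new NativeLibraryConfig();

    // Stored callback; null means "keep current behavior".
    private LLamaLogCallback? _logCallback;

    // Proposed overload: accept an optional callback instead of
    // unconditionally writing to the console.
    public NativeLibraryConfig WithLogs(LLamaLogCallback? callback = null)
    {
        _logCallback = callback;
        return this;
    }

    // Hypothetical hook invoked while loading the native library,
    // before llama_backend_init is called, so that even the very
    // first llama.cpp messages reach the user's callback.
    internal void OnNativeLibraryLoaded()
    {
        if (_logCallback != null)
        {
            // NativeApi.llama_log_set(_logCallback);
        }
    }
}
```

Whether the same instance should also receive LLamaSharp's own initialization messages, or a second dedicated callback should be passed alongside it, is a design choice: a single callback is simpler for callers, while two callbacks let users distinguish native llama.cpp output from managed-side logs.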