Description
This is not strictly a bug but a usability problem: in a Docker container we have no visibility into the progress of crashpad_handler while it uploads a crash.
When does the problem happen
In a Linux container running Unreal Engine (via our custom plugin built on this SDK), we log that a crash is being handled and handed over to crashpad_handler, but the crash never shows up in Sentry.
- During build
- During run-time
- When capturing a hard crash
Environment
- OS: Linux
- Compiler: n/a
- CMake version and config: n/a
Steps To Reproduce
We can reproduce this in-house only.
Log output
This is what we see as stdout from the unreal process:
```
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" /app/docker-entrypoint.sh: info: Unreal server container exiting.
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" /app/docker-entrypoint.sh: line 268: 22 Segmentation fault "${SERVER_EXE}" PaxDei "${PD_MAP}" -ExecCmds="${PD_CMD_LINE_EXECCMDS}" -LogCmds="${PD_CMD_LINE_LOGCMDS}" ${PD_CMD_LINE_DEFAULT_ARGS} ${PD_CMD_LINE_ARGS}
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" Engine crash handling finished; re-raising signal 11 for the default handler. Good bye.
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" LogExit: Executing StaticShutdownAfterError
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" LogCore: Fatal error!0x000000000ac97960 <nonsensical stack trace>
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" LogCore: === Critical error: ===Unhandled Exception: SIGSEGV: invalid attempt to read memory at address 0x00007f1ca329b65e
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" CommonUnixCrashHandler: Signal=11
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" Malloc Size=262146 LargeMemoryPoolOffset=262162
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" Signal 11 caught.
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" [51:51:20250325,151305.439054:ERROR file_io_posix.cc:145] open /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq: No such file or directory (2)
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" [51:51:20250325,151305.439007:ERROR file_io_posix.cc:145] open /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq: No such file or directory (2)
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" Sentry [info] : handing control over to crashpad
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" LogSentryCore: handing control over to crashpad
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" Sentry [debug] : serializing envelope into buffer
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" Sentry [debug] : sending envelope
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" LogSentryCore: Verbose: serializing envelope into buffer
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" LogSentryCore: Verbose: sending envelope
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" LogSentryCore: Verbose: invoking `on_crash` hook
2025-03-25T15:13:06.234Z "i-06dcd49c07fe83fca" "paxdei-server-ue4" LogSentryCore: flushing session and queue before crashpad handler
```
The lines are in reverse order.
The script docker-entrypoint.sh waits for any process named crashpad_handler to exit (finding its PID with the pidof utility) before logging the "Unreal server container exiting" line. If it finds a running process, it logs that fact; since no such line appears, crashpad_handler was no longer running at this point.
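For reference, the wait logic is roughly the following (a simplified sketch; the function name and log messages are illustrative, not the actual script):

```shell
#!/bin/sh
# Simplified sketch of the wait loop in docker-entrypoint.sh.
# Polls pidof until no process with the given name remains.
wait_for_named_process() {
    name="$1"
    # pidof exits non-zero once no process with this name is left,
    # which ends the loop; the loop itself then exits with status 0.
    while pid="$(pidof "$name")"; do
        echo "info: ${name} still running (pid ${pid}); waiting..."
        sleep 1
    done
}

wait_for_named_process crashpad_handler
echo "info: Unreal server container exiting."
```

Because pidof matches by process name only, this waits for any crashpad_handler in the container, regardless of which process spawned it.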
Now, crashpad_handler is likely daemonized, so its standard output is not shared with the main process. Is there any place where it logs its activity, so that we can surface that output ourselves and be confident that it actually uploaded the crash?
Secondly: the Unreal Engine core produces a stack trace after handing control over to crashpad_handler, but for this error it is nonsensical. Is there any chance that Sentry's crash handling has corrupted the exception context by that point?