Conversation

@adelejjeh
This PR cherry-picks 3c2e5d50cadf1644b30d42a23d14b0e20f7669a8 (llvm#168916) for SWDEV-561934.


## Problem Summary

PyTorch's `test_warp_softmax_64bit_indexing` is failing with a numerical
precision error: `log(1.1422761679)` is computed with roughly 54% more error
than expected (9.042e-09 vs. 5.859e-09), causing gradient computations to
exceed tolerance thresholds. This precision degradation was reproducible
across all AMD GPU architectures tested (gfx1100, gfx1200, gfx90a, gfx950). I
tracked the problem down to commit **4703f8b6610a** (March 6, 2025),
which changed the HIP math headers to call `__builtin_logf()` directly
instead of `__ocml_log_f32()`:

```diff
- float logf(float __x) { return __FAST_OR_SLOW(__logf, __ocml_log_f32)(__x); }
+ float logf(float __x) { return __FAST_OR_SLOW(__logf, __builtin_logf)(__x); }
```

This change exposed a latent problem in the AMDGCN back-end, described below.

## Key Findings

**1. Contract flag propagation:** When `-ffp-contract=fast` is enabled
(the default for HIP), Clang's CodeGen adds the `contract` flag to every
`CallInst` emitted within the scope of `CGFPOptionsRAII`, including
calls to LLVM intrinsics such as `llvm.log.f32`.
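For illustration, a minimal HIP-style reproducer (my own sketch, not taken
from the PR; the function name is hypothetical) shows where the flag lands:

```cpp
// Hypothetical device function. Compiled as HIP with -ffp-contract=fast
// (the HIP default), e.g. via: clang -x hip -ffp-contract=fast -S -emit-llvm
__device__ float call_log(float x) {
  // Clang's CodeGen tags the emitted intrinsic call with `contract`:
  //   %0 = call contract float @llvm.log.f32(float %x)
  return __builtin_logf(x);
}
```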

**2. Behavior change from OCML to builtin path:**
- **Old path** (via `__ocml_log_f32`): The preprocessed IR showed that the
call to the OCML library function carried the contract flag, but the OCML
implementation internally dropped the flag when calling the
`llvm.log.f32` intrinsic.
```llvm
; Function Attrs: alwaysinline convergent mustprogress nounwind
define internal noundef float @_ZL4logff(float noundef %__x) #6 {
entry:
  %retval = alloca float, align 4, addrspace(5)
  %__x.addr = alloca float, align 4, addrspace(5)
  %retval.ascast = addrspacecast ptr addrspace(5) %retval to ptr
  %__x.addr.ascast = addrspacecast ptr addrspace(5) %__x.addr to ptr
  store float %__x, ptr %__x.addr.ascast, align 4, !tbaa !23
  %0 = load float, ptr %__x.addr.ascast, align 4, !tbaa !23
  %call = call contract float @__ocml_log_f32(float noundef %0) #23
  ret float %call
}

; Function Attrs: convergent mustprogress nofree norecurse nosync nounwind willreturn memory(none)
define internal noundef float @__ocml_log_f32(float noundef %0) #7 {
  %2 = tail call float @llvm.log.f32(float %0)
  ret float %2
}
```
- **New path** (via `__builtin_logf`): The call goes directly to the
`llvm.log.f32` intrinsic with the contract flag preserved, causing the
back-end to apply FMA contraction during the polynomial expansion.
```llvm
; Function Attrs: alwaysinline convergent mustprogress nounwind
define internal noundef float @_ZL4logff(float noundef %__x) #6 {
entry:
  %retval = alloca float, align 4, addrspace(5)
  %__x.addr = alloca float, align 4, addrspace(5)
  %retval.ascast = addrspacecast ptr addrspace(5) %retval to ptr
  %__x.addr.ascast = addrspacecast ptr addrspace(5) %__x.addr to ptr
  store float %__x, ptr %__x.addr.ascast, align 4, !tbaa !24
  %0 = load float, ptr %__x.addr.ascast, align 4, !tbaa !24
  %1 = call contract float @llvm.log.f32(float %0)
  ret float %1
}
```
**3. Why contract breaks log:** Our AMDGCN back-end implements the
natural logarithm by taking the result of the hardware base-2 log
instruction (`v_log_f32`), multiplying it by `ln(2)` to rebase to the
natural logarithm, and applying a rounding-error correction to that
multiplication. The expansion looks something like:
```c
r = y * c1;                         // y = v_log_f32 result, c1 = high part of ln(2)
r = r + fma(y, c2, fma(y, c1, -r)); // c2 = low part of ln(2); the inner fma
                                    // recovers the rounding error of y * c1
```
```asm
  v_log_f32_e32 v1, v1
  s_mov_b32 s2, 0x3f317217
  v_mul_f32_e32 v3, 0x3f317217, v1
  v_fma_f32 v4, v1, s2, -v3
  v_fmac_f32_e32 v4, 0x3377d1cf, v1
  v_add_f32_e32 v3, v3, v4
```
With the `contract` flag present, the back-end fuses the final add (`r`
plus the correction term) with the multiply that produced `r`, believing
the fusion to be legal, and thereby eliminates the intermediate rounding
of the product. The compensation term, which was computed against the
*rounded* product `r`, is now added to the full-precision product inside
the FMA, so the rounding error it was meant to cancel is counted twice
instead of cancelling, and accuracy degrades. The corresponding
contracted operations become the following:
```c
r = y * c1;
r = fma(y, c1, fma(y, c2, fma(y, c1, -r)));
```
```asm
  v_log_f32_e32 v1, v1
  s_mov_b32 s2, 0x3f317217
  v_mul_f32_e32 v3, 0x3f317217, v1
  v_fma_f32 v3, v1, s2, -v3
  v_fmac_f32_e32 v3, 0x3377d1cf, v1
  v_fmac_f32_e32 v3, 0x3f317217, v1
```
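The accuracy loss can be reproduced on the host with `fmaf` standing in for
`v_fma_f32`. Below is a minimal sketch of my own (not the back-end code; the
input sweep is arbitrary) that evaluates both expansions and counts how often
they disagree:

```cpp
// Build without -ffast-math / -ffp-contract=fast, so the host compiler does
// not itself fuse the operations we are comparing.
#include <cmath>
#include <cstdio>

int main() {
  const float c1 = 0x1.62e42ep-1f;  // bit pattern 0x3f317217: high part of ln(2)
  const float c2 = 0x1.efa39ep-25f; // bit pattern 0x3377d1cf: low part of ln(2)

  int mismatches = 0;
  for (int i = 1; i <= 1000000; ++i) {
    float y = i * 0x1p-18f;             // stand-in for a v_log_f32 result
    float r = y * c1;                   // rounded product
    float e = std::fmaf(y, c1, -r);     // exact rounding error of y * c1
    float corr = std::fmaf(y, c2, e);   // compensation term
    float good = r + corr;              // uncontracted: r keeps its rounding
    float bad = std::fmaf(y, c1, corr); // contracted: full-precision y*c1
                                        // re-enters, so e is counted twice
    if (good != bad)
      ++mismatches;
  }
  std::printf("expansions disagree on %d of 1000000 inputs\n", mismatches);
  return 0;
}
```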

## Solution and Proposed Fix

Given our implementation of `llvm.log` and `llvm.log10`, it should be
illegal for the back-end to propagate the `contract` flag from the
intrinsic call to the operations of the expansion, because the expansion
relies on error-compensated summation. My proposed fix is to modify the
instruction selection passes (both GlobalISel and SelectionDAG) to drop
the `contract` flag when lowering `llvm.log`. That way, when instruction
selection performs the contraction optimization, it will not fuse the
multiply and add.
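The change would be small on both paths. A rough sketch under the assumption
that the AMDGPU log expansion routines can simply strip the flag (the
surrounding code here is illustrative, not the actual patch):

```cpp
// SelectionDAG path: clear `contract` from the flags used to build the
// expansion, so DAG combining cannot fuse the final add with the multiply.
SDNodeFlags Flags = Op->getFlags();
Flags.setAllowContract(false); // expansion relies on error-compensated summation
// ... emit y*c1, the FMA correction chain, and the final add with Flags ...

// GlobalISel path: drop the corresponding MI flag before expanding.
MI.clearFlag(MachineInstr::FmContract);
// ... expand G_FLOG / G_FLOG10 as before, now without `contract` ...
```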

Note: I had originally implemented this fix in the front end by removing
the `contract` flag when lowering the `llvm.log` builtin (PR llvm#168770).
I have since closed that PR.
adelejjeh requested a review from ronlieb on January 6, 2026 at 23:12.
adelejjeh changed the title from "[AMDGPU] Update log lowering to remove contract for AMDGCN backend (#…" to "[AMDGPU] Update log lowering to remove contract for AMDGCN backend" on January 6, 2026.
ronlieb merged commit 1f2241b into amd-mainline on Jan 7, 2026 (40 of 41 checks passed).
ronlieb deleted the amd/dev/aejjeh/mainline-swdev-561934 branch on January 7, 2026 at 14:36.