
perf: codegen, llvm, host_api #1

Merged
deepsek merged 1 commit into amd-integration from perf/deepsek/generic_perf_plus_monolith on Apr 17, 2026
Conversation

@deepsek
Collaborator

@deepsek deepsek commented Apr 17, 2026

test_output.log

Summary
This PR delivers a set of performance-critical changes across the AMDGPU codegen, JIT compilation pipeline, kernel launch path, and RHI driver layer. The optimizations target instruction selection quality, memory access efficiency, kernel launch overhead, and compilation caching to improve end-to-end GPU kernel performance on AMDGPU hardware.

Changes

  1. Address Space Annotations for Global Memory (codegen, LLVM passes)
    GetRootStmt, SNodeLookupStmt, GetChStmt: SNode root pointers from hipMalloc'd memory are now cast to addrspace(1) (global), ensuring LLVM emits global_load/global_store instructions instead of flat_load/flat_store. This avoids the FLAT unit's runtime address-space resolution overhead.
    GlobalLoadStmt / GlobalStoreStmt: Overridden to cast pointers to addrspace(1) before load/store, with invariant.load metadata applied for read-only SNode fields.
    get_runtime() / create_intrinsic_load(): Overridden to load runtime metadata through addrspace(1) with invariant-load hints for read-only caching.
    New AMDGPUFlatToGlobalLoadStorePass: Post-optimization pass that converts remaining addrspace(0) (flat) loads/stores to addrspace(1) (global) by tracing pointer origins — accesses derived from allocas or scratch memory (addrspace 5) are left as flat.
  2. Branchless sgn Implementation (codegen)
    Replaced the branch-and-alloca-based sgn codegen for f32/f64 with a branchless select-based implementation. Eliminates private scratch memory usage and control-flow divergence.
  3. Dynamic Shared Memory Promotion (codegen)
    Large shared arrays exceeding cuda_dynamic_shared_array_threshold_bytes are now promoted to dynamically-sized LDS allocations (addrspace(3)) via GlobalVariable, matching the CUDA backend's behavior.
  4. BLS (Block Local Storage) Support (codegen, extension registry)
    Implemented create_bls_buffer() for AMDGPU using LDS global variables (addrspace(3)).
    Enabled the bls extension for Arch::amdgpu in the extension registry.
  5. By-Value Kernarg Context Passing (codegen, kernel launcher)
    AMDGPU kernels now receive the RuntimeContext struct by value in kernarg instead of through an indirection pointer (hipMalloc + memcpy). This eliminates a device-side memory allocation and a host-to-device copy per kernel launch.
    Added kernel_argument_struct_in_kernarg() virtual override and supporting context_val_alloca_ plumbing in the base TaskCodeGenLLVM.
  6. HIP Async Memory Pool (RHI driver, device, kernel launcher)
    Added hipMallocAsync/hipFreeAsync support with automatic fallback to synchronous hipMalloc/hipFree when the runtime doesn't support memory pools.
    Probes hipDeviceGetDefaultMemPool at init and configures a 128 MB release threshold via hipMemPoolSetAttribute.
    Kernel launcher and device allocator updated to use async allocation paths when available.
  7. HSACo Compilation Caching (JIT)
    HSACo binaries are now cached in-memory keyed by an MD5 hash of the LLVM module bitcode. Repeated compilations of the same module skip the full ISA compilation pipeline and reuse the cached binary.
  8. AMDGPU-Specific Function Attributes and Compiler Flags (JIT)
    unsafe-fp-math=true and no-signed-zeros-fp-math=true applied to all functions (matching CUDA's unconditional FTZ behavior).
    Kernel functions annotated with amdgpu-waves-per-eu=1,2, uniform-work-group-size=true, amdgpu-ieee=false, amdgpu-dx10-clamp=false.
    __oclc_daz_opt=1 global variable emitted for denormals-as-zero.
    FPOpFusion::Fast set unconditionally (not gated behind fast_math).
    amdgpu-flat-work-group-size attribute set on kernels with known block dimensions.
  9. Additional LLVM Optimization Passes (JIT)
    Added post-pipeline passes: LoopStrengthReduce, SeparateConstOffsetFromGEP, and EarlyCSE (with MemorySSA) to improve address calculation and reduce register pressure.
  10. Misc Fixes
    Profiler now correctly passes dynamic_shared_mem_bytes to trace calls.
    saturating_grid_dim calculation corrected (removed erroneous * 2 multiplier).
    amdgpu_auto_waves_per_eu config option added with Python binding.
    MAX_THREADS_PER_MULTIPROCESSOR device attribute query for accurate listgen grid sizing.

@rtmadduri rtmadduri self-requested a review April 17, 2026 02:07
Collaborator

@rtmadduri rtmadduri left a comment


Has been tested and benchmarked! LGTM!
