• Domi@lemmy.secnd.me · 7 months ago

      I use ROCm for inference, both text generation via llama.cpp/LMStudio and image generation via ComfyUI.

      Works pretty much perfectly on a 6900 XT. Very fast, and easy to set up.

      I had issues with some libraries only supporting CUDA when I tried to train, but that was almost six months ago, so things have probably improved in that area as well.
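      For anyone wanting to try the llama.cpp route on an AMD card, a rough sketch of a HIP-backend build is below. The exact CMake flag has changed names across llama.cpp versions (older releases used `LLAMA_HIPBLAS`), and the `gfx1030` target matches RDNA2 cards like the 6900 XT — check your own GPU's architecture with `rocminfo` before copying this:

      ```shell
      # Build llama.cpp with the ROCm/HIP backend (assumes ROCm is already installed).
      git clone https://github.com/ggerganov/llama.cpp
      cd llama.cpp

      # GGML_HIP enables the HIP backend in recent versions; gfx1030 = 6900 XT (RDNA2).
      HIPCXX="$(hipconfig -l)/clang" \
      cmake -S . -B build \
          -DGGML_HIP=ON \
          -DAMDGPU_TARGETS=gfx1030 \
          -DCMAKE_BUILD_TYPE=Release
      cmake --build build -j

      # -ngl offloads all layers to the GPU; model path is just an example.
      ./build/bin/llama-cli -m ./models/model.gguf -ngl 99 -p "Hello"
      ```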

  • Presi300@lemmy.world · 7 months ago

    I’ve commented this before and I’ll say it again: DO NOT try to install ROCm/HIP manually, it’s a nightmare. AMD provides preconfigured Docker containers with it already set up. Download one of those and do whatever you need to do inside it.
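    For reference, AMD publishes ROCm images such as `rocm/pytorch` on Docker Hub. A sketch of running one is below; the device passthrough flags come from AMD's own docs, but treat the image tag as an example and pick a current one for your ROCm version:

    ```shell
    # Pull an AMD-provided ROCm + PyTorch image (tag is an example; check Docker Hub).
    docker pull rocm/pytorch:latest

    # /dev/kfd and /dev/dri expose the GPU to the container;
    # the video group and seccomp option are needed for ROCm to work inside it.
    docker run -it \
        --device=/dev/kfd \
        --device=/dev/dri \
        --group-add video \
        --security-opt seccomp=unconfined \
        rocm/pytorch:latest

    # Inside the container, verify the GPU is visible:
    #   python3 -c "import torch; print(torch.cuda.is_available())"
    ```

    The host only needs the AMD kernel driver; everything else (ROCm userspace, PyTorch) lives in the container, which is why this avoids the install pain.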