Inference on Hygon (Haiguang) CPUs/DCUs works the same way as on Intel CPUs/NVIDIA GPUs. Paddle Inference is supported, which suits high-performance server-side and cloud deployment. The current Paddle ROCm build is fully compatible with the C++/Python API of the Paddle CUDA build, so the original GPU inference commands and parameters can be used directly. …
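As a sketch of what "the original GPU commands work directly" means in practice, the snippet below configures a Paddle Inference predictor with the standard GPU API; on a ROCm build the same calls target the DCU. The model file names are hypothetical placeholders, and the code assumes a paddlepaddle package with GPU (CUDA or ROCm) support plus a real exported model, so it is a configuration sketch rather than a runnable test.

```python
import numpy as np
from paddle.inference import Config, create_predictor

# Hypothetical model files exported via paddle.jit.save or similar.
config = Config("inference.pdmodel", "inference.pdiparams")

# The same call works on CUDA and ROCm builds:
# enable_use_gpu(initial_memory_pool_mb, device_id)
config.enable_use_gpu(100, 0)

predictor = create_predictor(config)

# Feed a dummy input; the shape depends on the actual model.
input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
input_handle.copy_from_cpu(np.ones([1, 3, 224, 224], dtype=np.float32))

predictor.run()

output_handle = predictor.get_output_handle(predictor.get_output_names()[0])
result = output_handle.copy_to_cpu()
print(result.shape)
```

Because the ROCm build keeps API parity with the CUDA build, no `if rocm:` branching is needed; device selection is handled entirely by which paddlepaddle package is installed.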

