Accelerator: MLComponent will attempt to use a dedicated hardware accelerator to run the neural network. If the device doesn't support it, GPU mode will be used instead.
Auto: MLComponent will automatically decide how to run the neural network based on what the device supports, starting with Accelerator, then falling back to GPU, then CPU.
CPU: MLComponent will run the neural network on the CPU. Available on all devices.
GPU: MLComponent will attempt to run the neural network on the GPU. If the device doesn't support it, CPU mode will be used instead.
NativeCPU: MLComponent will run the model on the CPU using the device's native backend (such as CoreML on Apple devices).
NativeCPUAndNPU: MLComponent will try to run the model on the device's Neural Processing Unit (e.g. the Apple Neural Engine on Apple devices) if one exists and the model is supported, falling back to the CPU otherwise.
Inference modes used by MLComponent.inferenceMode. Each mode describes how the neural network will be run.

Used By: MLComponent.inferenceMode
Example
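A minimal sketch of selecting a mode from a script, assuming a script with an MLComponent input named mlComponent (an illustrative name) and that the modes on this page are exposed through the MachineLearning.InferenceMode enum:

// @input Component.MLComponent mlComponent

// Let MLComponent pick the best supported backend:
// Accelerator first, then GPU, then CPU.
script.mlComponent.inferenceMode = MachineLearning.InferenceMode.Auto;

// Or force a specific backend, e.g. CPU, which is available everywhere:
// script.mlComponent.inferenceMode = MachineLearning.InferenceMode.CPU;

print("Inference mode: " + script.mlComponent.inferenceMode);

Since the backend is chosen when the network is loaded, a reasonable practice is to set the inference mode before the model is built.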