Lens Scripting API

    Enumeration InferenceMode

    Inference modes used by MLComponent.inferenceMode. Each mode describes how the neural network will be run.

    //@input Component.MLComponent mlComponent
    script.mlComponent.inferenceMode = MachineLearning.InferenceMode.CPU;
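    A minimal sketch of a typical setup, assuming the model asset and any inputs are already configured on the MLComponent in the Inspector; the mode is assigned before build() so it takes effect when the network is compiled.

    //@input Component.MLComponent mlComponent
    script.mlComponent.inferenceMode = MachineLearning.InferenceMode.Auto;

    script.mlComponent.onLoadingFinished = function () {
        // Run synchronously once the model has finished building.
        script.mlComponent.runImmediate(true);
    };

    // Build with no extra input placeholders.
    script.mlComponent.build([]);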

    Enumeration Members

    Accelerator: number

    MLComponent will attempt to use a dedicated hardware accelerator to run the neural network. If the device doesn't support it, GPU mode will be used instead.

    Auto: number

    MLComponent will automatically decide how to run the neural network based on what is supported. It will start with Accelerator, then fall back to GPU, then CPU.

    CPU: number

    MLComponent will run the neural network on CPU. Available on all devices.

    GPU: number

    MLComponent will attempt to run the neural network on GPU. If the device doesn't support it, CPU mode will be used instead.

    NativeCPU: number

    MLComponent will run the model on CPU using the device's native backend (such as CoreML on Apple devices).

    NativeCPUAndNPU: number

    MLComponent will try to run the model on the device's Neural Processing Unit (e.g. the Apple Neural Engine on Apple devices) if one exists and the model is supported, falling back to running on CPU otherwise.
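    Where the native NPU path is preferred, the same assignment pattern applies; a short sketch (the fallback to the native CPU backend happens inside the engine, so no extra handling is assumed):

    //@input Component.MLComponent mlComponent
    // Prefer the device NPU (e.g. the Apple Neural Engine); the engine falls
    // back to the native CPU backend if the NPU is absent or the model
    // is unsupported.
    script.mlComponent.inferenceMode = MachineLearning.InferenceMode.NativeCPUAndNPU;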