Get Started Guide#

  • Time to Complete: 30 minutes

  • Programming Language: Python*, JavaScript*

Get Started#

Complete this guide to confirm that your setup is working correctly and try out a basic workflow in the reference implementation.

Prerequisites for Target System#

  • 11th Generation Intel® Core™ processor or above

  • 8 GB of memory minimum

  • 80 GB of storage space minimum

  • Internet access

  • Ubuntu* 22.04 LTS Desktop

Note
If Ubuntu Desktop is not installed on the target system, follow the instructions from Ubuntu to install Ubuntu Desktop.

Download the Helm chart#

Follow this procedure on the target system to install the package.

  1. Download the Helm chart with the following command:

    helm pull oci://registry-1.docker.io/intel/pallet-defect-detection-reference-implementation --version 2.0.0

  2. Extract the package using the following command:

    tar xvf pallet-defect-detection-reference-implementation-2.0.0.tgz

  3. Change into the Helm chart directory:

    cd pallet-defect-detection-reference-implementation
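
    Optionally, you can confirm that the chart extracted correctly by listing its files and printing the chart metadata. This is a quick sanity check; the exact file list depends on the chart contents.

    # Optional check: list the chart files and show the chart metadata.
    ls
    helm show chart .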

Configure and update the environment variables#

  1. Update the following fields in the values.yaml file in the Helm chart. (A scripted sketch covering this step and the next appears after step 2.)

    HOST_IP: # replace localhost with system IP example: HOST_IP: 10.100.100.100
    POSTGRES_PASSWORD: # example: POSTGRES_PASSWORD: intel1234
    MINIO_ACCESS_KEY: # example: MINIO_ACCESS_KEY: intel1234
    MINIO_SECRET_KEY: # example: MINIO_SECRET_KEY: intel1234
    http_proxy: # example: http_proxy: http://proxy.example.com:891
    https_proxy: # example: https_proxy: http://proxy.example.com:891
    VISUALIZER_GRAFANA_USER: # example: VISUALIZER_GRAFANA_USER: admin
    VISUALIZER_GRAFANA_PASSWORD: # example: VISUALIZER_GRAFANA_PASSWORD: password
    
  2. Update <HOST_IP_where_MRaaS_is_running> in the evam_config.json file in the Helm chart:

         "model_registry": {
            "url": "http://<HOST_IP_where_MRaaS_is_running>:32002",
            "request_timeout": 300,
            "saved_models_dir": "./mr_models"
        },
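
    If you prefer to script the edits from steps 1 and 2, the following is a minimal sketch. It assumes the flat, top-level field names shown above and that evam_config.json sits in the chart directory; adjust the paths and values for your setup.

    # Sketch only: substitute placeholder values in values.yaml and evam_config.json.
    HOST_IP=10.100.100.100   # replace with your system IP
    sed -i "s|^HOST_IP:.*|HOST_IP: ${HOST_IP}|" values.yaml
    sed -i "s|^POSTGRES_PASSWORD:.*|POSTGRES_PASSWORD: intel1234|" values.yaml
    sed -i "s|^MINIO_ACCESS_KEY:.*|MINIO_ACCESS_KEY: intel1234|" values.yaml
    sed -i "s|^MINIO_SECRET_KEY:.*|MINIO_SECRET_KEY: intel1234|" values.yaml
    sed -i "s|<HOST_IP_where_MRaaS_is_running>|${HOST_IP}|" evam_config.json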
    

Run multiple AI pipelines#

Follow this procedure to run the reference implementation. In a typical deployment, multiple cameras deliver video streams that are fed into AI pipelines to improve detection and recognition accuracy. The following steps demonstrate running two AI pipelines and observing their telemetry data in a Grafana* dashboard.

  1. Deploy the Helm chart:

    helm install pdd-deploy . -n apps --create-namespace
    
  2. Verify all the pods and services are running:

    kubectl get pods -n apps
    kubectl get svc -n apps
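
    Optionally, instead of re-running the commands above until everything is up, you can block until all pods report Ready. This assumes every pod in the apps namespace belongs to this release.

    # Optional: wait up to 5 minutes for all pods in the namespace to become Ready.
    kubectl wait --for=condition=Ready pod --all -n apps --timeout=300s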
    
  3. Start the pallet defect detection pipeline with the following Client URL (cURL) command. This pipeline is configured to run in a loop forever. This REST/cURL request returns a pipeline instance ID (for example, a6d67224eacc11ec9f360242c0a86003), which you can use later to query the pipeline status or to stop the pipeline instance.

    curl http://localhost:30107/pipelines/user_defined_pipelines/pallet_defect_detection_1 -X POST -H 'Content-Type: application/json' -d '{
        "parameters": {
            "udfloader": {
                "udfs": [
                    {
                        "name": "python.geti_udf.geti_udf",
                        "type": "python",
                        "device": "CPU",
                        "visualize": "true",
                        "deployment": "./resources/models/geti/pallet_defect_detection/deployment",
                        "metadata_converter": "null"
                    }
                ]
            }
        }
    }'
    
  4. Start another pallet defect detection pipeline with the following Client URL (cURL) command. This pipeline is not configured to run in a loop forever. This REST/cURL request returns a pipeline instance ID (for example, a6d67224eacc11ec9f360242c0a86003), which you can use later to query the pipeline status or to stop the pipeline instance.

    curl http://localhost:30107/pipelines/user_defined_pipelines/pallet_defect_detection_2 -X POST -H 'Content-Type: application/json' -d '{
        "source": {
                "uri": "file:///home/pipeline-server/resources/videos/warehouse.avi",
                "type": "uri"
        },
        "parameters": {
            "udfloader": {
                "udfs": [
                    {
                        "name": "python.geti_udf.geti_udf",
                        "type": "python",
                        "device": "CPU",
                        "visualize": "true",
                        "deployment": "./resources/models/geti/pallet_defect_detection/deployment",
                        "metadata_converter": "null"
                    }
                ]
            }
        }
    }'
    

    Note: Record the instance ID returned by this request; you will need it to stop this pipeline in step 7.
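
    If you are scripting these steps, you can capture the returned instance ID in a shell variable instead of copying it manually. This is a sketch only: it assumes the request body from this step is saved to a hypothetical file named pipeline2_request.json and that the response body is the bare instance ID (possibly quoted). Run it in place of the curl command above, not in addition to it, or you will start a third pipeline instance.

    # Hypothetical helper file: pipeline2_request.json contains the JSON body from step 4.
    INSTANCE_ID=$(curl -s -X POST \
      http://localhost:30107/pipelines/user_defined_pipelines/pallet_defect_detection_2 \
      -H 'Content-Type: application/json' \
      -d @pipeline2_request.json | tr -d '"')
    echo "Pipeline 2 instance ID: ${INSTANCE_ID}"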

  5. Go to the Grafana dashboard at http://<HOST_IP>:30101/ and log in with the credentials provided in the values.yaml file. Click Dashboards -> Video Analytics Dashboard on the left to see the telemetry data for both pipelines.

    Figure 1: Grafana dashboard with telemetry data of two pipelines

    You can see boxes, shipping labels, and defects being detected. You have successfully run the reference implementation.

  6. Advanced details: You can also see the topics on which the results are published at http://<HOST_IP>:30108/topics, and then use those topics to view the streams: Stream 1 at http://<HOST_IP>:30108/<topic_1> and Stream 2 at http://<HOST_IP>:30108/<topic_2>. A small scripted sketch follows.
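
    A minimal sketch that turns the topic list into stream URLs, assuming the /topics endpoint returns one topic per line; if it returns JSON, adjust the parsing accordingly.

    HOST_IP=10.100.100.100   # replace with your system IP
    for topic in $(curl -s "http://${HOST_IP}:30108/topics"); do
        echo "Stream: http://${HOST_IP}:30108/${topic}"
    done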

  7. Stop the second pipeline using the instance ID noted in step 4 before proceeding.

    curl --location -X DELETE http://<HOST_IP>:30107/pipelines/{instance_id}
    

MLOps Flow: At runtime, download a new model from the model registry and restart the pipeline with the new model#

  1. Get all the registered models in the model registry

    curl 'http://<host_system_ip_address>:32002/models'
    
  2. Upload a model file to the model registry:

    curl -L -X POST "http://<host_system_ip_address>:32002/models" \
    -H 'Content-Type: multipart/form-data' \
    -F 'name="YOLO_Test_Model"' \
    -F 'precision="fp32"' \
    -F 'version="v1"' \
    -F 'file=@<model_file_path.zip>;type=application/zip' \
    -F 'project_name="pallet-defect-detection"' \
    -F 'architecture="YOLO"' \
    -F 'category="Detection"'
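
    To confirm the upload, you can list the models again and filter for the new entry. The jq filter below is a sketch that assumes the response is a JSON array whose entries have a name field; inspect the raw response if the shape differs.

    # Check that YOLO_Test_Model now appears in the registry (response shape assumed).
    curl -s 'http://<host_system_ip_address>:32002/models' | jq '.[] | select(.name == "YOLO_Test_Model")'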
    
  3. Check the instance ID of the currently running pipeline and use it in the next command:

    curl --location -X GET http://<HOST_IP>:30107/pipelines/status
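
    If jq is installed, you can extract the ID of the running instance directly. This is a sketch that assumes each status entry exposes id and state fields; check the raw response if the shape differs.

    # Assumes entries look like {"id": "...", "state": "RUNNING", ...}.
    curl -s http://<HOST_IP>:30107/pipelines/status | jq -r '.[] | select(.state == "RUNNING") | .id'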
    
  4. Download the files for a specific model from the model registry microservice and restart the running pipeline with the new model. Essentially, the running instance is aborted and a new instance is started with the new model.

    curl 'http://<host_system_ip_address>:30107/pipelines/user_defined_pipelines/pallet_defect_detection_1/<instance_id_of_currently_running_pipeline>/models' \
    --header 'Content-Type: application/json' \
    --data '{
    "project_name": "pallet-defect-detection",
    "version": "v1",
    "category": "Detection",
    "architecture": "YOLO",
    "precision": "fp32",
    "deploy": true
    }'
    

    Note: The request body above assumes that a model with these properties exists in the registry. Also note that the pipeline name that follows user_defined_pipelines affects the deployment folder name.

  5. View the output in Grafana: http://<HOST_IP>:30101/

    Figure 2: Dashboard with telemetry data of the restarted pipeline instance

  6. You can also stop any running pipeline by using its pipeline instance ID:

    curl --location -X DELETE http://<HOST_IP>:30107/pipelines/{instance_id}
    

End the demonstration#

Follow this procedure to stop the reference implementation and end this demonstration.

  1. Stop the reference implementation with the following command, which uninstalls the pdd-deploy release:

    helm uninstall pdd-deploy -n apps
    
  2. Confirm the pods are no longer running.

    kubectl get pods -n apps
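
    If the apps namespace was created only for this demonstration, you can optionally remove it once the pods are gone. Skip this if other workloads share the namespace.

    # Optional: delete the namespace created during installation.
    kubectl delete namespace apps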
    

Summary#

In this guide, you installed and validated the Pallet Defect Detection Reference Implementation. You also completed a demonstration in which multiple pipelines ran on a single system with near real-time defect detection.

Learn More#

Troubleshooting#

The following are options to help you resolve issues with the reference implementation.

Grafana Dashboard#

The firewall may prevent you from viewing the video stream and metrics in the Grafana dashboard. Disable the firewall with this command.

     sudo ufw disable
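
If you prefer not to disable the firewall entirely, a narrower option is to allow only the ports used in this guide. The port numbers below are taken from the commands in this guide; adjust them if your deployment differs.

     # Allow only the ports referenced in this guide (sketch; adjust to your deployment).
     sudo ufw allow 30101/tcp   # Grafana dashboard
     sudo ufw allow 30107/tcp   # pipeline REST API
     sudo ufw allow 30108/tcp   # result streams and topics
     sudo ufw allow 32002/tcp   # model registry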

Error Logs#

View the container logs using this command.

     kubectl logs -f <pod_name> -n apps
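
To scan recent logs from every pod in the namespace at once, the following is a small sketch; it assumes the last 50 lines per pod are enough for a first look.

     # Print the last 50 log lines of each pod in the apps namespace.
     for pod in $(kubectl get pods -n apps -o name); do
         echo "--- ${pod}"
         kubectl logs "${pod}" -n apps --tail=50
     done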