Tutorials#

Use REST API#

This section shows how to use the REST API: starting a DLStreamer pipeline that runs object detection, checking the running status of that pipeline, and stopping it. The examples also demonstrate saving the resulting metadata to a file and sending frames over RTSP for streaming.
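
For reference, status checks and stopping are plain HTTP calls on the pipeline instance. The sketch below assumes the standard Pipeline Server REST endpoints; replace {instance_id} with the id returned by the POST request that started the pipeline:

# Check the running status of a pipeline instance
curl localhost:8080/pipelines/user_defined_pipelines/pallet_defect_detection/{instance_id}/status

# Stop the pipeline instance
curl localhost:8080/pipelines/user_defined_pipelines/pallet_defect_detection/{instance_id} -X DELETE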

Set GPU backend for inferencing#

This curl request sets the inference device to GPU. You will re-run the same pallet_defect_detection pipeline, but this time use the integrated GPU for detection inference by setting device to GPU in the UDF configuration.

As with the default EVAM pipeline, it publishes metadata to the file /tmp/results.jsonl and sends frames over RTSP for streaming.

The RTSP stream will be accessible at rtsp://<SYSTEM_IP_ADDRESS>:8554/pallet-defect-detection.

curl localhost:8080/pipelines/user_defined_pipelines/pallet_defect_detection -X POST -H 'Content-Type: application/json' -d '{
                "source": {
                    "uri": "file:///home/pipeline-server/resources/warehouse.avi",
                    "type": "uri"
                },
                "destination": {
                   "metadata": {
                        "type": "file",
                        "path": "/tmp/results.jsonl",
                        "format": "json-lines"
                    },
                    "frame": {
                        "type": "rtsp",
                        "path": "pallet-defect-detection"
                    }
                },
                "parameters": {
                    "udfloader": {
                        "udfs": [
                            {
                                "name": "python.geti_udf.geti_udf",
                                "type": "python",
                                "device": "GPU",
                                "visualize": "true",
                                "deployment": "./resources/geti/pallet_defect_detection/deployment",
                                "metadata_converter": "null"
                            }
                        ]
                    }
                }
}'
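
To verify that the pipeline is producing output, you can watch the metadata file and open the RTSP stream. A couple of examples ("evam" below is a placeholder for your EVAM container name; if /tmp is volume-mounted to the host, you can tail the file directly instead):

# Watch detection metadata as it is appended inside the container
docker exec -it evam tail -f /tmp/results.jsonl

# View the RTSP stream in any RTSP-capable player, e.g. ffplay
ffplay rtsp://<SYSTEM_IP_ADDRESS>:8554/pallet-defect-detection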

Use RTSP source#

You can either start an RTSP server to feed video or use an RTSP stream from a camera; update the uri key in the following request accordingly to start the default pallet defect detection pipeline.
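
If you do not have a camera at hand, one way to feed a test stream is to run a standalone RTSP server and publish a video file to it with ffmpeg. This is only a sketch: mediamtx and the host port 8555 are assumptions, chosen so the server does not clash with EVAM's RTSP output on port 8554.

# Run an RTSP server (mediamtx is one option) on host port 8555
docker run --rm -d -p 8555:8554 bluenviron/mediamtx

# Publish a local video file to it in a loop, re-encoded to H.264
ffmpeg -re -stream_loop -1 -i warehouse.avi -an -c:v libx264 -f rtsp rtsp://localhost:8555/live

The uri in the request below would then be rtsp://<SYSTEM_IP_ADDRESS>:8555/live.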

As with the default EVAM pipeline, it publishes metadata to the file /tmp/results.jsonl and sends frames over RTSP for streaming.

The RTSP stream will be accessible at rtsp://<SYSTEM_IP_ADDRESS>:8554/pallet-defect-detection.

curl localhost:8080/pipelines/user_defined_pipelines/pallet_defect_detection -X POST -H 'Content-Type: application/json' -d '{
                "source": {
                    "uri": "rtsp://<ip_address>:<port>/<server_url>",
                    "type": "uri"
                },
                "destination": {
                   "metadata": {
                        "type": "file",
                        "path": "/tmp/results.jsonl",
                        "format": "json-lines"
                    },
                    "frame": {
                        "type": "rtsp",
                        "path": "pallet-defect-detection"
                    }
                },
                "parameters": {
                    "udfloader": {
                        "udfs": [
                            {
                                "name": "python.geti_udf.geti_udf",
                                "type": "python",
                                "device": "CPU",
                                "visualize": "true",
                                "deployment": "./resources/geti/pallet_defect_detection/deployment",
                                "metadata_converter": "null"
                            }
                        ]
                    }
                }
}'

Change DLStreamer pipeline#

As shown in the examples above, EVAM supports dynamic updates of pipeline parameters through the REST API. To make a value configurable at run time of the EVAM container, provide the corresponding placeholder in [EVAM_WORKDIR]/configs/default/config.json.

To update the default pipeline, change it in the configuration file that EVAM loads. You can mount updated config files from the host system into the EVAM container by updating [EVAM_WORKDIR]/docker/docker-compose.yml. Refer to the snippet below:

    volumes:
      # Volume mount [EVAM_WORKDIR]/configs/default/config.json to the config file that the EVAM container loads
      - "../configs/default/config.json:/home/pipeline-server/config.json"

As an example, we create a video-ingestion and resize pipeline. Update the pipeline key in [EVAM_WORKDIR]/configs/default/config.json as shown below. This creates a DLStreamer pipeline that reads a user-provided video file, decodes it, and resizes it to 1280x720.

"pipeline": "{auto_source} name=source  ! decodebin ! videoscale ! video/x-raw, width=1280,height=720 ! gvametapublish name=destination ! appsink name=appsink",

Note: If needed, you can change the pipeline name by updating the name key in config.json. If you update this field, the endpoint in the curl request needs to change accordingly to <SERVER-IP>:<PORT>/pipelines/user_defined_pipelines/<NEW-NAME>. In this example, we only change pipeline.

Once updated, restart the EVAM containers for the change to take effect. Run these commands from the [EVAM_WORKDIR]/docker/ folder.

docker compose down

docker compose up
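
After the restart, you can confirm that the new pipeline definition was loaded. A quick check, assuming the standard Pipeline Server listing endpoint:

curl localhost:8080/pipelines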

The above steps restart EVAM and load the video-ingestion and resize pipeline. To start this pipeline, run the curl request below. It starts a DLStreamer pipeline that reads the classroom.avi video file at 1920x1080 resolution, resizes it to 1280x720, and streams it over RTSP. You can view the stream in any media player, e.g. VLC or ffplay, as shown after the request below.

The RTSP stream will be accessible at rtsp://<SYSTEM_IP_ADDRESS>:8554/classroom-video-streaming.

curl localhost:8080/pipelines/user_defined_pipelines/pallet_defect_detection -X POST -H 'Content-Type: application/json' -d '{
                "source": {
                    "uri": "file:///home/pipeline-server/resources/classroom.avi",
                    "type": "uri"
                },
                "destination": {
                   "metadata": {
                        "type": "file",
                        "path": "/tmp/results.jsonl",
                        "format": "json-lines"
                    },
					"frame": {
                        "type": "rtsp",
                        "path": "classroom-video-streaming"
                    }
                }
}'
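
Once the pipeline is running, you can open the stream in a player, for example:

ffplay rtsp://<SYSTEM_IP_ADDRESS>:8554/classroom-video-streaming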

Use image file as source#

Pipeline requests with an image file source can also be sent on demand to an already queued pipeline. This is only supported for the source type "image-ingestor". A sample config is provided under the [EVAM_WORKDIR]/configs/sample_image_ingestor directory. Adapt the default config.json to your needs. Follow this tutorial to launch EVAM with the new config file.

The pipeline can be started with the following request. This puts the pipeline in a queued state, waiting for requests.

Note that no source is needed to start the pipeline, only the destination and other parameters.

curl localhost:8080/pipelines/user_defined_pipelines/pallet_defect_detection -X POST -H 'Content-Type: application/json' -d '{
    "destination": {
        "metadata": {
            "type": "file",
            "path": "/tmp/results1.jsonl",
            "format": "json-lines"
        }
    },
    "parameters": {
        "udfloader": {
            "udfs": [
                {
                    "name": "python.geti_udf.geti_udf",
                    "type": "python",
                    "device": "CPU",
                    "visualize": "true",
                    "deployment": "./resources/models/geti/pallet_defect_detection/deployment",
                    "metadata_converter": "null"
                }
            ]
        }
    }
}'

Once the pipeline has started, the response contains an instance id (e.g. 9b041988436c11ef8e690242c0a82003). We can use this instance id to send inference requests for images as shown below. Replace {instance_id} with the id you received in the response to the previous POST request.
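
If you are scripting this flow, you can capture the instance id directly. A sketch, assuming the server returns the id as the raw response body and that the JSON payload above was saved to a hypothetical file queue_request.json:

# Queue the pipeline and keep the returned instance id for later requests
INSTANCE_ID=$(curl -s localhost:8080/pipelines/user_defined_pipelines/pallet_defect_detection \
    -X POST -H 'Content-Type: application/json' -d @queue_request.json)
echo "${INSTANCE_ID}"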

Note that only the source section is needed to send inference requests for image files. Make sure that EVAM has access to the file, preferably by volume mounting it in docker-compose.yml.

curl localhost:8080/pipelines/user_defined_pipelines/pallet_defect_detection/{instance_id} -X POST -H 'Content-Type: application/json' -d '{
    "source": {
        "path": "/home/pipeline-server/resources/images/classroom.png",
        "type": "file"
    }
}'

You can send any number of requests to the queued pipeline until it is explicitly stopped. As set in the very first request, the inference results are available in the destination provided, which is /tmp/results1.jsonl in our example.
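
To stop the queued pipeline explicitly, send a DELETE request on the instance, again assuming the standard Pipeline Server endpoints:

curl localhost:8080/pipelines/user_defined_pipelines/pallet_defect_detection/{instance_id} -X DELETE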

Start MQTT publish of metadata#

The curl command below publishes metadata to an MQTT broker and sends frames over RTSP for streaming.

Replace the <SYSTEM_IP_ADDRESS> field with your system IP address.

Note: When bringing up the EVAM containers using docker compose up, an MQTT broker is also started, listening on port 1883.

The RTSP stream will be accessible at rtsp://<SYSTEM_IP_ADDRESS>:8554/anomalib-detection.

curl localhost:8080/pipelines/user_defined_pipelines/pallet_defect_detection -X POST -H 'Content-Type: application/json' -d '{
                "source": {
                    "uri": "file:///home/pipeline-server/resources/anomalib_pcb_test.avi",
                    "type": "uri"
                },
                "destination": {
                    "metadata": {
                        "type": "mqtt",
                        "host": "<SYSTEM_IP_ADDRESS>:1883",
                        "topic": "anomalib-detection"
                    },
                    "frame": {
                        "type": "rtsp",
                        "path": "anomalib-detection"
                    }
                },
                "parameters": {
                    "udfloader": {
                        "udfs": [
                            {
                                "device": "CPU",
                                "task": "classification",
                                "inferencer": "openvino_nomask",
                                "model_metadata": "/home/pipeline-server/udfs/python/anomalib_udf/stfpm/metadata.json",
                                "name": "python.anomalib_udf.inference",
                                "type": "python",
                                "weights": "/home/pipeline-server/udfs/python/anomalib_udf/stfpm/model.onnx"
                            }
                        ]
                    }
                }
}'

This UDF does not overlay model results on the RTSP stream. Output can be viewed with an MQTT subscriber as shown below.

docker run -it --entrypoint mosquitto_sub eclipse-mosquitto:latest --topic anomalib-detection -p 1883 -h <SYSTEM_IP_ADDRESS>