Tutorials#

These tutorials show how to use the Intel® Edge Insights System.

Prerequisites#

Tutorial 1: Deploy vision analytics pipeline via GUI#

Follow these steps to configure and launch the Intel® Edge Insights System baseline services:

Note: By default, executing ./run.sh removes the password that is stored in plain text in the .env files (both build/.env and WebVision/.env). Users (and the validation team) must note down the password before executing ./run.sh.

cd edge_insights_system/Intel_Edge_Insights_System_2.0_Premium/IEdgeInsights/WebVision
./setup.sh -u
./setup.sh
./run.sh

Refer to GUI-Workflow for more details.

Summary#

In this tutorial, you learned how to deploy the vision analytics pipeline.

Tutorial 2: Deploy Timeseries analytics pipeline via CLI#

Run the following commands:

cd edge_insights_system/Intel_Edge_Insights_System_2.0_Premium/IEdgeInsights/WebVision/
./setup.sh -u
cd ../build
# For the time series use case, run builder.py with time-series.yml
python3 builder.py -f usecase/time-series.yml
./run.sh -v -s

This brings up the Timeseries analytics use case. Refer to the Timeseries analytics tutorials for more details.

Summary#

In this tutorial, you learned how to deploy the Timeseries analytics pipeline.

Learn More#

Tutorial 3: Adding New Services to Intel® Edge Insights System Stack#

This section provides information about adding a service, subscribing to the EdgeVideoAnalyticsMicroservice, and publishing it on a new port. To add a service to the EII stack, create a new directory for it in the IEdgeInsights directory; the Builder registers and runs any service present in its own directory there. The directory should contain the following:

  • A docker-compose.yml file to deploy the service as a docker container. The AppName is present in the environment section of the docker-compose.yml file. When the service is added to the main build/eii_config.json, its config and interfaces are appended under the keys /AppName/config and /AppName/interfaces.

  • A config.json file that contains the required config for the service to run after it is deployed. The config.json consists of the following:

    • A config section, which includes the configuration-related parameters that are required to run the application.

    • An interfaces section, which describes how the service interacts with the other services of the EII stack using the gRPC messaging/communication library.

      • The high-level interactions of the services using gRPC are as follows:

        • The client application/service initializes the gRPC client wrapper/library.

        • The server application/service initializes the gRPC server wrapper/library.

        • The gRPC client and server can initialize in parallel; neither depends on the other having started first.

        • The gRPC client waits for the gRPC server and vice versa.

        • The gRPC client wrapper creates a gRPC channel and initializes a stub to connect to the gRPC server.

        • The gRPC server wrapper starts the gRPC server and listens on the pre-defined endpoint.

      • The following interfaces section is an example for a gRPC server application, which starts a gRPC server and listens on the specified endpoint. One or more gRPC clients can connect to this endpoint and call the server's methods. Ensure that the port is exposed in the docker-compose.yml of the respective service.

            "interfaces":{
              "Servers": [
                  {
                      "Name": "default",
                      "Type": "grpc",
                      "EndPoint": "0.0.0.0:65117"
                  }
              ]
          }
      
      • The following interfaces section is an example for a gRPC client application, which initializes a gRPC client and creates a stub on the specified endpoint to connect to the gRPC server. When working behind a proxy, ensure that the hostname of the gRPC server application is added to the no_proxy environment variable of the client application in the docker-compose.yml.

        "interfaces":{
            "Clients": [
                {
                    "Name": "default",
                    "Type": "grpc",
                    "EndPoint": "ia_python_sample_server:65117",
                    "Topics": [
                        "python_sample"
                    ]
                }
            ]
        }
      
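Putting the pieces above together: the Builder registers each service's config.json content in the main build/eii_config.json under the /AppName/config and /AppName/interfaces keys. The sketch below illustrates that merge and how the gRPC server endpoint could be read back out (the AppName SampleService and the merge function are hypothetical simplifications; the real Builder does considerably more):

```python
# A service's config.json, as described above (config + interfaces sections).
service_config = {
    "config": {"loop_interval": 1},
    "interfaces": {
        "Servers": [
            {"Name": "default", "Type": "grpc", "EndPoint": "0.0.0.0:65117"}
        ]
    },
}

def merge_into_eii_config(eii_config: dict, app_name: str, cfg: dict) -> dict:
    """Append a service's config and interfaces under /AppName/... keys."""
    eii_config[f"/{app_name}/config"] = cfg["config"]
    eii_config[f"/{app_name}/interfaces"] = cfg["interfaces"]
    return eii_config

eii_config = merge_into_eii_config({}, "SampleService", service_config)

# Read back the gRPC server endpoint that a client would connect to.
endpoint = eii_config["/SampleService/interfaces"]["Servers"][0]["EndPoint"]
print(endpoint)  # 0.0.0.0:65117
```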

Tutorial 4: Learn more about the individual bundled services#

Refer to the following links for more details about the individual services.

Tutorial 5: Sample Apps to leverage the libraries#

This section provides more information about the Intel® Edge Insights System sample apps and how to use the core library packages, such as Utils, gRPC, and ConfigManager, on supported Linux flavors such as Ubuntu, or in docker images, with programming languages such as Go and Python.

The following table lists the supported Linux flavors (or docker images) and the programming languages for which sample apps are available:

Linux Flavor    Languages
Ubuntu          Go, Python

The sample apps are classified as client and server apps.

Python gRPC Client#

The Python gRPC client runs in the independent container ia_python_sample_client.

High-level Logical Flow for Sample Client Apps#

The high-level logical flow of a sample client app is as follows:

  1. The sample app uses the ConfigMgr APIs to construct the gRPC config from ETCD. It fetches its own private and public keys and the server's public key.

  2. The client contacts the ETCD service and fetches its private key and the server's public key. It then creates the client stub, starts streaming data to the gRPC server on a pre-configured endpoint, and receives the responses from the server.
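The key-fetching part of the steps above can be sketched with a plain dict standing in for the ETCD store (the key layout and the names SampleClient and SampleServer are hypothetical; the real sample apps fetch these values through the ConfigMgr APIs):

```python
# A plain dict standing in for the ETCD key-value store (hypothetical layout;
# the real sample apps go through the ConfigMgr APIs and the ETCD service).
etcd = {
    "/SampleClient/private_key": "client-private-pem",
    "/SampleClient/public_key": "client-public-pem",
    "/SampleServer/public_key": "server-public-pem",
}

def fetch_client_keys(store: dict, app: str, server: str) -> dict:
    """The client fetches its own key pair and the server's public key."""
    return {
        "private_key": store[f"/{app}/private_key"],
        "public_key": store[f"/{app}/public_key"],
        "server_public_key": store[f"/{server}/public_key"],
    }

keys = fetch_client_keys(etcd, "SampleClient", "SampleServer")
print(keys["server_public_key"])  # server-public-pem
```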

Python gRPC Server#

The Python gRPC server runs in the independent container ia_python_sample_server.

High-level Logical Flow for Sample Server Apps#

The high-level logical flow of a sample server app is as follows:

  1. The sample app uses the ConfigMgr APIs to construct the gRPC config from ETCD. It fetches its private key and the clients' public keys.

  2. The server contacts the ETCD service and fetches its private key and the clients' public keys. It then creates a gRPC server, starts listening on a pre-configured endpoint, and receives requests from the connected clients.
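The overall server and client flows can be sketched with gRPC's generic handlers, which avoid protobuf code generation for a simple bytes-in/bytes-out echo method (a minimal sketch that skips the ETCD key exchange and TLS entirely; the service name Sample and method Echo are made up for illustration and are not part of the actual sample apps):

```python
from concurrent import futures

import grpc

def _echo(request: bytes, context) -> bytes:
    # Echo the raw request bytes back to the caller.
    return request

def start_server(port: int = 0):
    """Start a gRPC server exposing a single Echo method via generic handlers."""
    handler = grpc.unary_unary_rpc_method_handler(_echo)
    service = grpc.method_handlers_generic_handler("Sample", {"Echo": handler})
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=2))
    server.add_generic_rpc_handlers((service,))
    bound_port = server.add_insecure_port(f"0.0.0.0:{port}")
    server.start()
    return server, bound_port

def call_echo(port: int, payload: bytes) -> bytes:
    """Create a channel and stub, call Sample/Echo, and return the response."""
    with grpc.insecure_channel(f"localhost:{port}") as channel:
        stub = channel.unary_unary("/Sample/Echo")
        return stub(payload)

if __name__ == "__main__":
    server, port = start_server()
    print(call_echo(port, b"hello"))  # prints b'hello'
    server.stop(0)
```

In the real sample apps, the endpoint comes from the interfaces section of the service's config.json rather than being bound to an ephemeral port.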

Learning Objectives#

  • By the end of this tutorial, you will know how to run the sample apps.

Prerequisites#

  • Ensure that the output of docker ps is clean (no running containers) and that docker network ls does not show the Intel® Edge Insights System bridge network.

Run the Sample Apps#

To build and run the sample apps use case, execute the following commands:

cd [WORKDIR]/IEdgeInsights/build
python3 builder.py -f ./usecases/sample-apps.yml
./run.sh -v -s