# video-custom-udfs

**Repository Path**: Robin-yuetb_private-master/video-custom-udfs

## Basic Information

- **Project Name**: video-custom-udfs
- **Description**: Hosts Custom UDF services
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-12-14
- **Last Updated**: 2021-12-14

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

**Contents**

- [Introduction](#introduction)
- [UDF Container Directory Layout](#udf-container-directory-layout)
- [Deploy Process](#deploy-process)
- [Sample UDFs Directory](#sample-udfs-directory)
  - [GVASafetyGearIngestion](#gvasafetygearingestion)
  - [NativePclIngestion](#nativepclingestion)

# Introduction

This document describes a new approach to creating UDFs and using them inside the EII framework. Unlike [The UDF Writing Guide](https://github.com/open-edge-insights/video-common/blob/master/udfs/HOWTO_GUIDE_FOR_WRITING_UDF.md), which focuses specifically on the coding aspects (callbacks) of UDFs, this document describes the workflow of a custom UDF.

Currently, UDFs are created inside the [udfs-path](https://github.com/open-edge-insights/video-common/tree/master/udfs) of the EII build environment so that they get compiled into the VI (Video Ingestion) and VA (Video Analytics) containers. In addition to the aforementioned approach, each UDF can be built as an independent container based on the VI (VideoIngestion) or VA (VideoAnalytics) container image. This additional method has multiple benefits, some of which are listed below:

* With an increased number of sample UDFs, VI and VA need not grow large in size because of bloated Algo artifacts.
* Any update to a UDF's Algo or its logic compiles and builds only the intended UDF-specific code, instead of rebuilding every UDF.
* Any change to the base containers caused by unrelated changes in Ubuntu packages triggers a complete container rebuild, which adds to the build time of a UDF.
* Every UDF can be versioned independently, as each is represented by its own container.
* A reduced UDF container size reduces the network overhead when transferring images from the registry to the target system.

As per this approach, a UDF or a chain of UDFs should be compiled and run as a separate EII container. A video streaming pipeline contains two important components, ingestion and analytics. In EII, the user adds UDFs as pre-processing, post-processing, or analytics Algos; hence these UDF containers need to be inherited from the VI and VA containers.

# UDF Container Directory Layout

1. A native (C++) UDF container source base looks as below, though it can look different based on the use case.

```bash
NativeSafetyGearAnalytics
├── config.json
├── docker-compose.yml
├── Dockerfile
└── safety_gear_demo
    ├── CMakeLists.txt
    ├── ref
    │   ├── frozen_inference_graph.bin
    │   ├── frozen_inference_graph_fp16.bin
    │   ├── frozen_inference_graph_fp16.xml
    │   └── frozen_inference_graph.xml
    ├── safety_gear_demo.cpp
    └── safety_gear_demo.h
```

A typical Python container looks as below:

```bash
PyMultiClassificationIngestion
├── config.json
├── docker-compose.yml
├── Dockerfile
└── sample_classification
    ├── __init__.py
    ├── multi_class_classifier.py
    └── ref
        ├── squeezenet1.1_FP16.bin
        ├── squeezenet1.1_FP16.xml
        ├── squeezenet1.1_FP32.bin
        ├── squeezenet1.1_FP32.xml
        └── squeezenet1.1.labels
```

The top-level directories ***"NativeSafetyGearAnalytics"*** and ***"PyMultiClassificationIngestion"*** host the respective container's build ingredients. The Algo/pre-processing logic is placed under directories such as
***"safety_gear_demo"*** and ***"sample_classification"*** to showcase a grouping of logically related entities; otherwise, this is not a mandatory directory layout.

* ## *config.json*

  This file defines the UDF-specific configuration and other generic configs such as queue depth, number of worker threads, etc. These generic configs can be added to override any default setting of the VI and VA containers. To learn more about the schema for defining these configs and their permissible values, refer to the [UDF-README](https://github.com/open-edge-insights/video-common/blob/master/udfs/README.md) file. For ingestor-related configs, refer to the [VideoIngestion-README](https://github.com/open-edge-insights/video-ingestion/blob/master/README.md). An example snippet looks as below:

```json
{
    "encoding": {
        "level": 95,
        "type": "jpeg"
    },
    "max_jobs": 20,
    "max_workers": 4,
    "queue_size": 10,
    "udfs": [
        {
            "name": "safety_gear_demo",
            "type": "native",
            "device": "CPU",
            "model_xml": "./safety_gear_demo/ref/frozen_inference_graph.xml",
            "model_bin": "./safety_gear_demo/ref/frozen_inference_graph.bin"
        }
    ]
}
```

* ## *Dockerfile*

  This file defines the container build process and which build-time and runtime artifacts need to be copied into the container. For a native (C++) UDF, we need to describe the destination path the UDF code is copied to, along with the compilation instructions for it. For a Python UDF, we only need to copy the UDF-defining artifacts to the proper destination location. Some code comments are given describing the important key values. An example ***Dockerfile*** for a C++-based UDF is pasted below:

```dockerfile
ARG EII_VERSION
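# The lines below are a hedged sketch of how such a Dockerfile typically
# continues, not the project's verbatim file: the base image name
# (ia_video_analytics), the DOCKER_REGISTRY build argument, and the copy/build
# paths are assumptions modelled on the EII custom-UDF pattern of deriving
# the UDF container from the VideoAnalytics image.
ARG DOCKER_REGISTRY
FROM ${DOCKER_REGISTRY}ia_video_analytics:${EII_VERSION}

# Copy the native UDF sources into the image's UDF source tree (path assumed)
COPY ./safety_gear_demo /EII/common/video/udfs/native/safety_gear_demo

# Compile the UDF into a shared library so the UDF loader can find it at runtime
RUN cd /EII/common/video/udfs/native && \
    mkdir -p build && cd build && \
    cmake .. && make
```

For a real UDF, the base image, registry prefix, and destination paths should be taken from the sample UDF directories referenced in this repository rather than from this sketch.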