This patch renames InferenceItem to LastLevelTaskItem in the
three backends to avoid confusion between the meanings of these structs.
The following renames are done in this patch:
1. extract_inference_from_task -> extract_lltask_from_task
2. InferenceItem -> LastLevelTaskItem
3. inference_queue -> lltask_queue
4. inference -> lltask
5. inference_count -> lltask_count
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit unifies the async and sync modes from the DNN filters'
perspective. As of this commit, the Native backend only supports the
synchronous execution mode.
The user can now switch between async and sync modes using the
'async' option in backend_configs: 1 selects async and 0 selects
sync execution.
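A usage sketch for selecting the mode through backend_configs (a minimal
example assuming the OpenVINO backend of vf_dnn_processing; model, input
and output names are placeholders):
./ffmpeg -i input.mp4 -vf dnn_processing=dnn_backend=openvino:\
model=model_name.xml:input=input_name:output=output_name:\
backend_configs=async=0 -y output.mp4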
This commit affects the following filters:
1. vf_dnn_classify
2. vf_dnn_detect
3. vf_dnn_processing
4. vf_sr
5. vf_derain
This commit also updates the filters vf_dnn_detect and vf_dnn_classify
to send only the input frame to the DNN backends, passing NULL as the
output frame instead of the input frame.
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Frame allocation and filling the TaskItem with execution parameters
are common to the three backends. This commit moves this logic into
dnn_backend_common.
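A rough sketch of such a shared helper, reusing the existing TaskItem and
DNNExecBaseParams types (the name and exact signature are approximations
of the dnn_backend_common code; the frame-allocation part is omitted):

    // Copy the per-execution parameters from the filter into the TaskItem;
    // the backend afterwards only works with the TaskItem.
    DNNReturnType ff_dnn_fill_task(TaskItem *task, DNNExecBaseParams *exec_params,
                                   void *backend_model, int async, int do_ioproc)
    {
        task->do_ioproc     = do_ioproc;
        task->async         = async;
        task->input_name    = exec_params->input_name;
        task->in_frame      = exec_params->in_frame;
        task->out_frame     = exec_params->out_frame;
        task->model         = backend_model;
        task->nb_output     = exec_params->nb_output;
        task->output_names  = exec_params->output_names;
        return DNN_SUCCESS;
    }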
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
It is not used here at all; instead, add it where it is used without
including it or any of the arch-specific CPU headers.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
In cases where the execution inside the function execute_model_ov fails,
the OVRequestItem must be pushed back to the request_queue before returning
the error. If pushing back fails, release the allocated memory.
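A sketch of the intended error path inside execute_model_ov (the queue
helper follows the OpenVINO backend; the field names are approximate):

    // On a failure path, return the request to the pool before reporting
    // the error; if even the push back fails, free the request so it is
    // not leaked.
    if (ret != DNN_SUCCESS) {
        if (ff_safe_queue_push_back(ov_model->request_queue, request) < 0) {
            av_freep(&request->inferences); // per-request inference array
            av_freep(&request);
        }
        return DNN_ERROR;
    }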
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit uses TFRequestItem and the existing sync execution
mechanism to achieve request-based execution. This will help in adding
async functionality to the TensorFlow backend later.
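A rough sketch of the request item (field names are approximations, not
the exact definitions; the TF_* types come from the TensorFlow C API):

    typedef struct TFInferRequest {
        TF_Output *tf_outputs;       // outputs to fetch from the session
        TF_Tensor **output_tensors;  // filled by the session run
        TF_Output *tf_input;         // input placeholder
        TF_Tensor *input_tensor;     // input data for one execution
    } TFInferRequest;

    typedef struct TFRequestItem {
        TFInferRequest *infer_request; // TensorFlow-side state of one request
        InferenceItem *inference;      // the inference this request serves
        TF_Status *status;             // status of the last session run
    } TFRequestItem;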
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
In cases where the execution inside the function execute_model_ov fails,
push the RequestItem back to the request_queue before returning the error.
If pushing back fails, release the allocated memory.
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Fix memory leak for RequestItem upon error while pushing to the
request_queue in the completion callback.
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Convert output_name to char **output_names in TaskItem and use it as
a pointer to an array of output names in the DNN backends.
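A sketch of the affected TaskItem fields after this change (approximate,
not the exact definition):

    typedef struct TaskItem {
        void *model;                // backend-specific model
        AVFrame *in_frame;
        AVFrame *out_frame;
        const char *input_name;
        const char **output_names;  // was: const char *output_name
        uint32_t nb_output;         // number of entries in output_names
        uint8_t async;
        uint8_t do_ioproc;
    } TaskItem;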
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Extract TaskItem and InferenceItem from the OpenVINO backend and convert
ov_model to a void pointer in TaskItem.
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit corrects the pointer type of the elements taken from the
inference queue in ff_dnn_free_model_ov.
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Different model function types require different parameters; for
example, object detection detects many objects (cat/dog/...) in
the frame, while classification needs to know which object (cat or dog)
it is going to classify.
With the current interface, supporting a new requirement means adding a
new function with more parameters. With this change, we can instead add
a new struct (for example DNNExecClassifyParams) based on
DNNExecBaseParams, and continue to use the current interface
execute_model with only the params changed.
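A sketch of the idea (struct contents are approximate; the classify struct
name and its extra field are illustrative, taken from the example above):

    typedef struct DNNExecBaseParams {
        const char *input_name;
        const char **output_names;
        uint32_t nb_output;
        AVFrame *in_frame;
        AVFrame *out_frame;
    } DNNExecBaseParams;

    // A new use case only needs a new struct embedding the base params,
    // so execute_model keeps taking a DNNExecBaseParams pointer.
    typedef struct DNNExecClassifyParams {
        DNNExecBaseParams base;
        const char *target;  // e.g. which detection label to classify
    } DNNExecClassifyParams;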
There is one task item per function call from the dnn interface, and
one request item per call to OpenVINO. For classify, one task might
need multiple inferences, one for each bounding box, so add
InferenceItem.
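A sketch of the new item (fields approximate): each InferenceItem points
back to its TaskItem and records which bounding box it covers, so one task
can fan out into several OpenVINO inferences.

    typedef struct InferenceItem {
        TaskItem *task;       // the task this inference belongs to
        uint32_t bbox_index;  // which detected bounding box to classify
    } InferenceItem;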
So the backend knows whether the model is used for frame processing,
detection, classification, etc. Each function type has different
behavior in the backend when handling the model's input/output data.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Once we mark the task as done in the function infer_completion_callback,
the task may be released in the function ff_dnn_get_async_result_ov
by another thread just after that, so we need to record the request queue
first instead of using task->ov_model->request_queue later.
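A sketch of the fix (names approximate): take the queue pointer while the
task is still guaranteed to be alive, and only use that saved pointer after
the task has been marked done.

    static void infer_completion_callback(void *args)
    {
        RequestItem *request = args;
        TaskItem *task = request->task;
        SafeQueue *requestq = task->ov_model->request_queue; // record first

        /* ... copy the inference output into task->out_frame ... */

        task->done = 1; // after this, another thread may free the task
        if (ff_safe_queue_push_back(requestq, request) < 0) {
            av_log(NULL, AV_LOG_ERROR, "Failed to push back request_queue.\n");
        }
    }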
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Rename proc_from_frame_to_dnn to ff_proc_from_frame_to_dnn, and
proc_from_dnn_to_frame to ff_proc_from_dnn_to_frame.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
The OpenVINO model file format changes when OpenVINO moves to a new
release; it does not work if the versions of the model file and the
runtime are mismatched.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
OpenVINO APIs require a specified input size to run the model, while some
OpenVINO models do accept different input sizes. To enable this feature,
add an input_resizable option here for easier use.
Set the boolean variable input_resizable to specify whether the input can be resized or not.
input_resizable = 1 means input resize is supported, i.e. different input sizes are accepted.
input_resizable = 0 (default) means input resize is not supported.
Please make sure the inference model does accept different input sizes
before using this option, otherwise the inference engine may report errors.
eg: ./ffmpeg -i video_name.mp4 -vf dnn_processing=dnn_backend=openvino:\
model=model_name.xml:input=input_name:output=output_name:\
options=device=CPU\&input_resizable=1 -y output_video_name.mp4
Signed-off-by: Ting Fu <ting.fu@intel.com>
Move the OpenVINO model/inference request creation and initialization steps
from ff_dnn_load_model_ov to a new function, init_model_ov, in preparation
for later input resize support.
Signed-off-by: Ting Fu <ting.fu@intel.com>
The default value of batch_size is 1.
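A usage sketch in the same style as the other OpenVINO backend options
(model and tensor names are placeholders; assumes batch_size is passed
through the backend options string):
./ffmpeg -i video_name.mp4 -vf dnn_processing=dnn_backend=openvino:\
model=model_name.xml:input=input_name:output=output_name:\
options=device=CPU\&batch_size=2 -y output_video_name.mp4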
Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
The prefix for symbols not exported from the library and not
local to one translation unit is ff_ (or FF for types).
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
'void *' is too flexible; since we can derive the needed info from
AVFilterContext*, we just unify the interface with this data
structure.
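A sketch of the kind of change (illustrative signatures, not the exact
dnn_interface.h declarations):

    // Before: callbacks received an opaque pointer that each filter had
    // to interpret on its own.
    int (*pre_proc)(AVFrame *frame_in, DNNData *model_input, void *user_data);

    // After: pass the filter context, from which the log context, private
    // options, etc. can be derived directly.
    int (*pre_proc)(AVFrame *frame_in, DNNData *model_input,
                    AVFilterContext *filter_ctx);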
Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>