Commit Graph

77 Commits

Author SHA1 Message Date
Zhao Zhili 6de951923b avfilter/dnn: Remove a level of dereference
Code such as 'model->model = ov_model' is confusing. We can
just drop the member variable and use a cast to get the subclass.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:14:31 +08:00
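The commit above describes a struct-embedding pattern. A minimal sketch, assuming illustrative names rather than the actual FFmpeg definitions: the backend-specific struct embeds the generic model as its first member, so a simple cast replaces the removed back-pointer.

typedef struct DNNModel {
    void *filter_ctx;            /* generic fields omitted */
} DNNModel;

typedef struct OVModel {
    DNNModel model;              /* must be the first member */
    void *ov_state;              /* backend-specific state */
} OVModel;

static OVModel *ov_model_from_base(DNNModel *m)
{
    /* replaces dereferencing a stored m->model pointer */
    return (OVModel *)m;
}
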
Zhao Zhili abfefbb33b avfilter/dnn_backend_tf: Simplify memory allocation
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:14:21 +08:00
Zhao Zhili a40df366c4 avfilter/dnn_backend_tf: Fix free context at random place
It will be freed again by ff_dnn_uninit.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:14:17 +08:00
Zhao Zhili d3db7bbc03 avfilter/dnn_backend_tf: Remove one level of indentation
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:14:10 +08:00
Zhao Zhili 4f051c746b avfilter/dnn: Use dnn_backend_info_list to search for dnn module
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-30 18:13:29 +08:00
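A hedged sketch of the lookup this commit describes, with assumed names (DnnBackendInfo, find_dnn_module): a static table maps the backend type to its module and is searched linearly instead of relying on hard-coded dispatch.

#include <stddef.h>

typedef struct DNNModule DNNModule;       /* opaque in this sketch */

typedef struct DnnBackendInfo {
    int type;                             /* e.g. the dnn_backend option value */
    const DNNModule *module;
} DnnBackendInfo;

static const DNNModule *find_dnn_module(const DnnBackendInfo *list, size_t n, int type)
{
    for (size_t i = 0; i < n; i++)
        if (list[i].type == type)
            return list[i].module;
    return NULL;
}
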
Zhao Zhili 8c21f1e3b7 avfilter/dnn: Refactor DNN parameter configuration system
This patch tries to resolve multiple issues related to parameter
configuration:

First, each DNN filter duplicates DNN_COMMON_OPTIONS, which should
be the common options of the backends.

Second, backend options are hidden behind the scenes. The user sets an
AV_OPT_TYPE_STRING backend_configs option which is parsed by each
backend, so the help message does not show which options each backend
supports.

Third, DNN backends duplicate DNN_BACKEND_COMMON_OPTIONS.

Last but not least, passing backend options via AV_OPT_TYPE_STRING
makes it hard, if not impossible, to pass AV_OPT_TYPE_BINARY options to a backend.

This patch puts the backend-common options and each backend's options inside
DnnContext to reduce code duplication, make the options user friendly, and
make them easy to extend for future use cases.

For example,

./ffmpeg -h filter=dnn_processing

dnn_processing AVOptions:
   dnn_backend       <int>        ..FV....... DNN backend (from INT_MIN to INT_MAX) (default tensorflow)
     tensorflow      1            ..FV....... tensorflow backend flag
     openvino        2            ..FV....... openvino backend flag
     torch           3            ..FV....... torch backend flag

dnn_base AVOptions:
   model             <string>     ..F........ path to model file
   input             <string>     ..F........ input name of the model
   output            <string>     ..F........ output name of the model
   backend_configs   <string>     ..F.......P backend configs (deprecated)
   options           <string>     ..F.......P backend configs (deprecated)
   nireq             <int>        ..F........ number of request (from 0 to INT_MAX) (default 0)
   async             <boolean>    ..F........ use DNN async inference (default true)
   device            <string>     ..F........ device to run model

dnn_tensorflow AVOptions:
   sess_config       <string>     ..F........ config for SessionOptions

dnn_openvino AVOptions:
   batch_size        <int>        ..F........ batch size per request (from 1 to 1000) (default 1)
   input_resizable   <boolean>    ..F........ can input be resizable or not (default false)
   layout            <int>        ..F........ input layout of model (from 0 to 2) (default none)
     none            0            ..F........ none
     nchw            1            ..F........ nchw
     nhwc            2            ..F........ nhwc
   scale             <float>      ..F........ Add scale preprocess operation. Divide each element of input by specified value. (from INT_MIN to INT_MAX) (default 0)
   mean              <float>      ..F........ Add mean preprocess operation. Subtract specified value from each element of input. (from INT_MIN to INT_MAX) (default 0)

dnn_th AVOptions:
   optimize          <int>        ..F........ turn on graph executor optimization (from 0 to 1) (default 0)

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-05-18 19:44:50 +08:00
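To illustrate how such options can end up under "dnn_base AVOptions" in the help output above, here is a minimal sketch assuming a simplified DnnContext layout; the field names, OFFSET and FLAGS values are illustrative, not the actual FFmpeg source.

#include <stddef.h>
#include <limits.h>
#include "libavutil/opt.h"

typedef struct DnnContext {
    char *model_filename;
    char *model_inputname;
    int   nireq;
} DnnContext;

#define OFFSET(x) offsetof(DnnContext, x)
#define FLAGS AV_OPT_FLAG_FILTERING_PARAM

static const AVOption dnn_base_options[] = {
    { "model", "path to model file",      OFFSET(model_filename),  AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0,       FLAGS },
    { "input", "input name of the model", OFFSET(model_inputname), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0,       FLAGS },
    { "nireq", "number of request",       OFFSET(nireq),           AV_OPT_TYPE_INT,    { .i64 = 0 },    0, INT_MAX, FLAGS },
    { NULL }
};

Declaring such a table once and exposing it to every DNN filter is what removes the per-filter duplication of DNN_COMMON_OPTIONS described above.
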
Andreas Rheinhardt 790f793844 avutil/common: Don't auto-include mem.h
There are lots of files that don't need it: The number of object
files that actually need it went down from 2011 to 884 here.

Keep it for external users in order to not cause breakages.

Also improve the other headers a bit while at it.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-31 00:08:43 +01:00
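A tiny illustration of the effect of this change, using a hypothetical helper: callers of the allocation functions can no longer rely on libavutil/common.h to pull in mem.h and must include it themselves.

#include <stddef.h>
#include <stdint.h>
#include "libavutil/mem.h"   /* now needed explicitly for av_malloc()/av_freep() */

static uint8_t *alloc_buffer(size_t size)
{
    return av_malloc(size);
}
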
Wenbin Chen 3de38b9da5 libavfilter/dnn_interface: use dims to represent shapes
For detect and classify output, width and height make no sense, so
change width and height to dims to represent the shape of the tensor. Use
layout and dims to get width, height and channel.

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-01-28 11:18:06 +08:00
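An illustrative helper, assuming a 4-D dims array and made-up names rather than the actual FFmpeg API, showing how layout plus dims yield width, height and channel:

enum TensorLayout { LAYOUT_NONE, LAYOUT_NCHW, LAYOUT_NHWC };

static void dims_to_whc(const int dims[4], enum TensorLayout layout,
                        int *w, int *h, int *c)
{
    if (layout == LAYOUT_NHWC) {
        *h = dims[1]; *w = dims[2]; *c = dims[3];
    } else {                       /* treat NCHW and unspecified alike here */
        *c = dims[1]; *h = dims[2]; *w = dims[3];
    }
}
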
Wenbin Chen 58b6c0c327 libavfilter/dnn: Initialize DNNData variables
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2023-09-27 12:58:55 +08:00
Zhao Zhili 3a5d95e3fa avfilter/dnn_backend_tf: silence implicit cast warning
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:24 +08:00
Zhao Zhili b0c0fedcda avfilter/dnn_backend_tf: fix use of uninitialized value
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:24 +08:00
Zhao Zhili d9f41a343e avfilter/dnn_backend_tf: check TF_OperationOutputType return value
This also fixed a warning: implicit conversion from enumeration
type 'TF_DataType' (aka 'enum TF_DataType') to different
enumeration type 'DNNDataType'.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:24 +08:00
Zhao Zhili f3495ef4f8 avfilter/dnn_backend_tf: remove unused define
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:23 +08:00
Zhao Zhili 3f52b7eedc avfilter/dnn: define each backend as a DNNModule
To avoid exporting multiple functions for each backend
implementation.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:23 +08:00
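A rough sketch of the idea, with assumed member names: each backend fills in a single table of function pointers, so only one symbol per backend has to be exported instead of one function per operation.

typedef struct DNNModel DNNModel;           /* opaque in this sketch */

typedef struct DNNModule {
    DNNModel *(*load_model)(const char *model_filename, void *filter_ctx);
    int       (*execute_model)(DNNModel *model);
    void      (*free_model)(DNNModel **model);
} DNNModule;

/* hypothetical per-backend table; the real symbol names may differ */
extern const DNNModule ff_dnn_backend_tf;
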
Ting Fu 78f95f1088 lavfi/dnn: Remove DNN native backend
According to discussion in
https://etherpad.mit.edu/p/FF_dev_meeting_20221202 and the proposal in
http://ffmpeg.org/pipermail/ffmpeg-devel/2022-December/304534.html,
the DNN native backend should be removed as the first step.
All DNN native backend related code is deleted.

Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-04-28 11:07:41 +08:00
Ting Fu bc589c91f7 lavfi/dnn: add error info for TF backend filling task failure
Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-03-26 09:19:42 +08:00
Ting Fu af052f9066 lavfi/dnn: fix mem leak in TF backend error handle
Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-03-26 09:19:42 +08:00
Ting Fu 5c216d081d lavfi/dnn: fix corruption when TF backend infer failed
Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-03-26 09:19:42 +08:00
Shubhanshu Saxena d0a999a0ab libavfilter: Remove DNNReturnType from DNN Module
This patch removes all occurrences of DNNReturnType from the DNN module.
It replaces DNN_SUCCESS by 0 (essentially the same), so the functions that
used DNNReturnType now return 0 in case of success and negative values
otherwise.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2022-03-12 15:10:28 +08:00
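A sketch of the calling convention after this change; dnn_do_work() is a placeholder for any call into the DNN module, not a real function.

int dnn_do_work(void);

static int run(void)
{
    int ret = dnn_do_work();
    if (ret < 0)              /* was: if (ret != DNN_SUCCESS) */
        return ret;           /* propagate the negative error code */
    return 0;
}
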
Shubhanshu Saxena 3fa89bd758 lavfi/dnn_backend_tf: Return Specific Error Codes
Switch to returning specific error codes or DNN_GENERIC_ERROR
when an error is encountered. For TensorFlow C API errors, currently
DNN_GENERIC_ERROR is returned.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2022-03-12 15:10:28 +08:00
Andreas Rheinhardt 1ea3650823 Replace all occurrences of av_mallocz_array() by av_calloc()
They do the same.

Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2021-09-20 01:03:52 +02:00
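Both spellings allocate the same zero-initialized array; only av_calloc() remains after this change. A small example with a hypothetical helper:

#include <stddef.h>
#include "libavutil/mem.h"

static int *alloc_table(size_t nb)
{
    /* formerly: av_mallocz_array(nb, sizeof(int)); */
    return av_calloc(nb, sizeof(int));
}
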
Shubhanshu Saxena 660a205b05 lavfi/dnn: Rename InferenceItem to LastLevelTaskItem
This patch renames the InferenceItem to LastLevelTaskItem in the
three backends to avoid confusion among the meanings of these structs.

The following are the renames done in this patch:

1. extract_inference_from_task -> extract_lltask_from_task
2. InferenceItem -> LastLevelTaskItem
3. inference_queue -> lltask_queue
4. inference -> lltask
5. inference_count -> lltask_count

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-28 16:19:07 +08:00
Shubhanshu Saxena 1544d6fa0a libavfilter: Remove Async Flag from DNN Filter Side
Remove async flag from filter's perspective after the unification
of async and sync modes in the DNN backend.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-28 16:19:07 +08:00
Shubhanshu Saxena 60b4d07cf6 libavfilter: Unify Execution Modes in DNN Filters
This commit unifies the async and sync mode from the DNN filters'
perspective. As of this commit, the Native backend only supports
synchronous execution mode.

Now the user can switch between async and sync mode by using the
'async' option in the backend_configs. The values can be 1 for
async and 0 for sync mode of execution.

This commit affects the following filters:
1. vf_dnn_classify
2. vf_dnn_detect
3. vf_dnn_processing
4. vf_sr
5. vf_derain

This commit also updates the filters vf_dnn_detect and vf_dnn_classify
to send only the input frame and send NULL as output frame instead of
input frame to the DNN backends.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-28 16:19:07 +08:00
Shubhanshu Saxena 2063745a93 lavfi/dnn: DNNAsyncExecModule Execution Failure Handling
This commit adds handling for the case where the asynchronous execution
of a request fails, by checking the exit status of the thread when
joining before starting another execution. On failure, it also does the
cleanup.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-10 22:27:27 +08:00
Shubhanshu Saxena 371e5672f3 lavfi/dnn_backend_tf: Error Handling for tf_create_inference_request
This commit includes the check for the case when the newly created
TFInferRequest is NULL.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-10 22:27:27 +08:00
Shubhanshu Saxena 009b2e5b5e lavfi/dnn: Extract Common Parts from get_output functions
Frame allocation and filling the TaskItem with execution
parameters are common to the three backends. This commit moves
this logic to dnn_backend_common.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-10 22:27:27 +08:00
Shubhanshu Saxena 4d627acefa lavfi/dnn_backend_tf: Add TF_Status to TFRequestItem
Since requests run in parallel, the execution status can become
inconsistent. To resolve this, we avoid using a mutex, as it would allow
only a single TF_Session to run at a time; instead, add a TF_Status to
the TFRequestItem.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-10 22:27:27 +08:00
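A rough sketch of the approach, assuming simplified struct members rather than the actual FFmpeg definitions: each queued request owns its own TF_Status, so parallel executions do not have to serialize on a shared status object.

#include <tensorflow/c/c_api.h>

typedef struct TFInferRequest TFInferRequest;   /* per-request tensors, omitted */

typedef struct TFRequestItem {
    TFInferRequest *infer_request;
    TF_Status *status;            /* one per request, created with TF_NewStatus() */
} TFRequestItem;

static int request_succeeded(const TFRequestItem *item)
{
    return TF_GetCode(item->status) == TF_OK;
}
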
Shubhanshu Saxena a3db9b5405 lavfi/dnn_backend_tf: Error Handling for execute_model_tf
This patch adds error handling for cases where the execute_model_tf
fails, clears the used memory in the TFRequestItem and finally pushes
it back to the request queue.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-10 22:27:27 +08:00
Shubhanshu Saxena 0985e9283c lavfi/dnn: Async Support for TensorFlow Backend
This commit enables async execution in the TensorFlow backend
and adds function to flush extra frames.

The async execution mechanism executes the TFInferRequests on
a separate thread, which is joined before the next execution of
the same TFRequestItem or when freeing the model.

The following is the comparison of this mechanism with the existing
sync mechanism on TensorFlow C API 2.5 CPU variant.

Async Mode: 4m32.846s
Sync Mode: 5m17.582s

The above was performed on super resolution filter using SRCNN model.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-10 22:27:27 +08:00
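An illustrative sketch of the async pattern described above, not the FFmpeg implementation: the inference routine runs on its own thread, which is joined before the same request is reused (or before the model is freed).

#include <pthread.h>

typedef struct AsyncRequest {
    pthread_t thread;
    int started;
    void *(*infer)(void *arg);    /* hypothetical inference entry point */
    void *arg;
} AsyncRequest;

static int start_async(AsyncRequest *req)
{
    if (req->started)
        pthread_join(req->thread, NULL);   /* wait for the previous run to finish */
    req->started = 1;
    return pthread_create(&req->thread, NULL, req->infer, req->arg);
}
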
Shubhanshu Saxena e6ae8fc18e lavfi/dnn_backend_tf: TFInferRequest Execution and Documentation
This commit adds a function for execution of TFInferRequest and documentation
for functions related to TFInferRequest.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-10 22:27:27 +08:00
Andreas Rheinhardt ef54590f83 avfilter/internal: Don't include libavcodec/(avcodec|internal).h
The reasons for including them don't exist any longer: ff_tlog() has
been moved to libavutil/internal.h and FF_QSCALE_TYPE_* has been moved
to qp_table.h.

Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2021-08-05 21:15:25 +02:00
Andreas Rheinhardt 69f120ead7 avcodec/avcodec: Don't include cpu.h
It is not used here at all; instead, add it where it is used without
including it or any of the arch-specific CPU headers.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2021-07-22 12:59:07 +02:00
Shubhanshu Saxena 6f9570a633 lavfi/dnn_backend_tf: Error Handling
This commit adds handling for cases where an error may occur, clearing
the allocated memory resources.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-07-11 20:12:27 +08:00
Shubhanshu Saxena 84e4e60fdc lavfi/dnn_backend_tf: Separate function for Completion Callback
This commit rearranges the existing code to create a separate function
for the completion callback in execute_model_tf.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-07-11 20:12:27 +08:00
Shubhanshu Saxena b849228ae0 lavfi/dnn_backend_tf: Separate function for filling RequestItem
This commit rearranges the existing code to create a separate function
for filling the request with execution data.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-07-11 20:12:27 +08:00
Shubhanshu Saxena 08d8b3b631 lavfi/dnn_backend_tf: Request-based Execution
This commit uses TFRequestItem and the existing sync execution
mechanism to use request-based execution. It will help in adding
async functionality to the TensorFlow backend later.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-07-11 20:12:27 +08:00
Shubhanshu Saxena a4de605110 lavfi/dnn_backend_tf: Add TFInferRequest and TFRequestItem
This commit introduces a typedef TFInferRequest to store
execution parameters for a single call to the TensorFlow C API.
This typedef is used in the TFRequestItem.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-07-11 20:12:27 +08:00
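A hedged sketch of what such a typedef can look like; the member names are assumptions, but the idea is to keep everything one TF_SessionRun() call needs in a single place.

#include <tensorflow/c/c_api.h>

typedef struct TFInferRequest {
    TF_Output  *tf_outputs;       /* output operations */
    TF_Tensor **output_tensors;   /* filled by the session run */
    TF_Output  *tf_input;         /* input operation */
    TF_Tensor  *input_tensor;     /* input data */
} TFInferRequest;
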
Shubhanshu Saxena 68cf14d2b1 lavfi/dnn_backend_tf: TaskItem Based Inference
This commit uses the common TaskItem and InferenceItem typedefs
for execution in TensorFlow backend.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-07-11 20:12:27 +08:00
Guo, Yejun 4c705a2775 lavfi/dnn: refine code to separate processing and detection in backends 2021-05-24 09:09:34 +08:00
Limin Wang 2899fb61d2 avfilter/dnn/dnn_backend_tf: fix cross library usage
Duplicate the ff_hex_to_data() function from avformat and rename it to
hex_to_data() as a static function.

Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
2021-05-11 18:46:14 +08:00
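A minimal sketch of a static hex decoder like the one described; the function name follows the commit message, the body is illustrative. It returns the number of decoded bytes, and data may be NULL to only compute the length.

#include <stdint.h>
#include <ctype.h>

static int hex_char(int c)
{
    if (c >= '0' && c <= '9') return c - '0';
    c = tolower(c);
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return -1;
}

static int hex_to_data(uint8_t *data, const char *p)
{
    int len = 0;
    while (p[0] && p[1]) {
        int hi = hex_char((unsigned char)p[0]);
        int lo = hex_char((unsigned char)p[1]);
        if (hi < 0 || lo < 0)
            break;
        if (data)
            data[len] = hi << 4 | lo;
        len++;
        p += 2;
    }
    return len;
}
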
Ting Fu e42125edab lavfi/dnn_backend_tensorflow: support detect model
Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-05-11 10:28:35 +08:00
Ting Fu 1b1064054c lavfi/dnn_backend_tensorflow: add multiple outputs support
Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-05-11 10:28:35 +08:00
Ting Fu f02928eb5a dnn: add DCO_RGB color order to enum DNNColorOrder
Add the DCO_RGB color order to DNNColorOrder, since TensorFlow models
need this color order as input.

Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-05-11 10:28:35 +08:00
Guo, Yejun a3b74651a0 lavfi/dnn: refine dnn interface to add DNNExecBaseParams
Different model function types require different parameters. For
example, object detection detects many objects (cat/dog/...) in
the frame, while classification needs to know which object (cat or dog)
it is going to classify.

The current interface would need a new function with more parameters
to support each new requirement. With this change, we can just add a new
struct (for example DNNExecClassifyParams) based on DNNExecBaseParams,
and so we can continue to use the current execute_model interface with
only the params changed.
2021-05-06 10:50:44 +08:00
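A sketch of the extension pattern this commit describes, with assumed member names: because the base struct is the first member, a pointer to the derived params can be passed wherever the base params are expected, so execute_model keeps its signature.

#include <stdint.h>
#include "libavutil/frame.h"

typedef struct DNNExecBaseParams {
    const char  *input_name;
    const char **output_names;
    uint32_t     nb_output;
    AVFrame     *in_frame;
    AVFrame     *out_frame;
} DNNExecBaseParams;

typedef struct DNNExecClassifyParams {
    DNNExecBaseParams base;       /* must be the first member */
    const char *target;           /* classification-specific parameter */
} DNNExecClassifyParams;
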
Limin Wang f183d6555e avfilter/dnn/dnn_backend_tf: simplify the code with ff_hex_to_data
Please use tools/python/tf_sess_config.py to get the sess_config after this change.
Note the byte order of the session config is in normal order.
Bump the MICRO version for the config change.

Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
2021-04-29 20:02:29 +08:00
Guo, Yejun 59021d79a2 lavfi/dnn: refine code for frame pre/proc processing 2021-04-08 09:23:02 +08:00
Ting Fu 637bdefdeb lavfi/dnn_backend_tensorflow.c: fix mem leak in execute_model_tf
Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-03-25 13:10:32 +08:00
Ting Fu b08b5f07b6 lavfi/dnn_backend_tensorflow.c: fix mem leak in load_native_model
Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-03-25 13:10:32 +08:00
Ting Fu 3499ec84cc lavfi/dnn_backend_tensorflow.c: fix mem leak in load_tf_model
Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-03-25 13:10:32 +08:00