Commit Graph

237 Commits

Wenbin Chen
478d97f303 libavfilter/dnn_io_proc: Take step into consideration when cropping a frame
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-04-04 14:26:57 +08:00
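
For illustration, a minimal sketch of copying a cropped region while honouring
the frame's line step (linesize); the helper and the single-plane packed-pixel
assumption are hypothetical, not the actual dnn_io_proc code:

    #include <stdint.h>
    #include <string.h>

    /* Copy a bbox region from a packed single-plane frame whose rows are
     * 'src_linesize' bytes apart (which may exceed width * bytes_per_pixel). */
    static void crop_packed_plane(uint8_t *dst, int dst_linesize,
                                  const uint8_t *src, int src_linesize,
                                  int x, int y, int w, int h, int bpp)
    {
        for (int row = 0; row < h; row++) {
            const uint8_t *src_row = src + (size_t)(y + row) * src_linesize + (size_t)x * bpp;
            memcpy(dst + (size_t)row * dst_linesize, src_row, (size_t)w * bpp);
        }
    }
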
Wenbin Chen
8869f5ce86 libavfilter/dnn_backend_openvino: Check bbox's height
Check the bbox's height against the frame's height rather than the frame's width.

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-04-04 14:26:52 +08:00
Andreas Rheinhardt
790f793844 avutil/common: Don't auto-include mem.h
There are lots of files that don't need it: the number of object
files that actually need it went down from 2011 to 884 here.

Keep it for external users in order not to cause breakage.

Also improve the other headers a bit while at it.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2024-03-31 00:08:43 +01:00
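
Since mem.h is no longer pulled in via avutil/common.h, internal code that uses
the allocation helpers now needs the include itself; a minimal sketch, with a
hypothetical helper:

    #include <stdint.h>

    #include "libavutil/error.h"
    #include "libavutil/mem.h"   /* explicit: no longer auto-included */

    static int alloc_buffer(uint8_t **buf, size_t size)
    {
        *buf = av_malloc(size);
        return *buf ? 0 : AVERROR(ENOMEM);
    }
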
Wenbin Chen
f4e0664fd1 libavfilter/dnn: add LibTorch as one of the DNN backends
PyTorch is an open source machine learning framework that accelerates
the path from research prototyping to production deployment. Official
website: https://pytorch.org/. We refer to the C++ library of PyTorch
as LibTorch below.

To build FFmpeg with LibTorch, take the following steps as a
reference:
1. download the LibTorch C++ library from
 https://pytorch.org/get-started/locally/,
select C++/Java as the language and the other options as needed.
Please download the cxx11 ABI version
 (libtorch-cxx11-abi-shared-with-deps-*.zip).
2. unzip the file to a directory of your choice, with the command
unzip libtorch-shared-with-deps-latest.zip -d your_dir
3. export libtorch_root/libtorch/include and
libtorch_root/libtorch/include/torch/csrc/api/include to $PATH
export libtorch_root/libtorch/lib/ to $LD_LIBRARY_PATH
4. configure FFmpeg with ../configure --enable-libtorch \
 --extra-cflags=-I/libtorch_root/libtorch/include \
 --extra-cflags=-I/libtorch_root/libtorch/include/torch/csrc/api/include \
 --extra-ldflags=-L/libtorch_root/libtorch/lib/
5. make

To run FFmpeg DNN inference with LibTorch backend:
./ffmpeg -i input.jpg -vf \
dnn_processing=dnn_backend=torch:model=LibTorch_model.pt -y output.jpg

The LibTorch_model.pt can be generated with Python using the
torch.jit.script() API; see
https://pytorch.org/tutorials/advanced/cpp_export.html, the official
PyTorch guide on how to convert and load a TorchScript model.
Please note that torch.jit.trace() is not recommended, since it does
not support variable input sizes.

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-03-19 14:48:58 +08:00
Anton Khirnov
1e7d2007c3 all: use designated initializers for AVOption.unit
This makes it robust against adding fields before it, which will be
useful in the following commits.

The majority of the patch was generated by the following Coccinelle
script:

@@
typedef AVOption;
identifier arr_name;
initializer list il;
initializer list[8] il1;
expression tail;
@@
AVOption arr_name[] = { il, { il1,
- tail
+ .unit = tail
}, ...  };

with some manual changes, as the script:
* has trouble with options defined inside macros
* sometimes does not handle options under an #else branch
* sometimes swallows whitespace
2024-02-14 14:53:41 +01:00
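
For illustration, the shape of the change on a hypothetical option table
(ExampleContext, OFFSET and FLAGS below are stand-ins, not from a real filter):

    #include <stddef.h>
    #include "libavutil/opt.h"

    typedef struct ExampleContext { int mode; } ExampleContext;   /* hypothetical */
    #define OFFSET(x) offsetof(ExampleContext, x)
    #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_FILTERING_PARAM)

    static const AVOption example_options[] = {
        /* before: the unit string was a bare trailing initializer: ..., FLAGS, "mode" */
        /* after: the same value is assigned with a designated initializer: */
        { "mode", "processing mode", OFFSET(mode), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS, .unit = "mode" },
        { NULL }
    };
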
Wenbin Chen
3de38b9da5 libavfilter/dnn_interface: use dims to represent shapes
For detect and classify outputs, width and height make no sense, so
change width and height to dims to represent the shape of the tensor.
Use layout and dims to get width, height and channel.

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-01-28 11:18:06 +08:00
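
For illustration, a sketch of deriving width, height and channel from a dims
array plus a layout flag; the struct and names below are simplified stand-ins,
not the exact DNNData definition:

    enum TensorLayout { LAYOUT_NCHW, LAYOUT_NHWC };   /* simplified stand-in */

    typedef struct TensorShape {
        int dims[4];                 /* e.g. {N, C, H, W} or {N, H, W, C} */
        enum TensorLayout layout;
    } TensorShape;

    static void shape_get_hwc(const TensorShape *s, int *h, int *w, int *c)
    {
        if (s->layout == LAYOUT_NCHW) {
            *c = s->dims[1]; *h = s->dims[2]; *w = s->dims[3];
        } else {                     /* NHWC */
            *h = s->dims[1]; *w = s->dims[2]; *c = s->dims[3];
        }
    }
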
Wenbin Chen
c695de56b5 libavfilter/dnn_backend_openvino: Add automatic input/output detection
Now when using the OpenVINO backend, the user doesn't need to set
input/output names on the command line. Model ports will be detected
automatically.

For example:
ffmpeg -i input.png -vf \
dnn_detect=dnn_backend=openvino:model=model.xml:input=image:\
output=detection_out -y output.png

can be simplified to:
ffmpeg -i input.png -vf dnn_detect=dnn_backend=openvino:model=model.xml\
 -y output.png

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2024-01-28 11:17:59 +08:00
Wenbin Chen
86435582a6 libavfilter/dnn_backend_openvino: Add dynamic output support
Add dynamic output support. Some models don't have a fixed output
size; it changes according to the result. Now the OpenVINO backend can
run these kinds of models.

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2023-12-30 12:12:51 +08:00
Wenbin Chen
da02836b9d libavfilter/vf_dnn_detect: Add input pad
Add an input pad to get the model input resolution. Detection models
always have a fixed input size, and the output coordinates are based
on the input resolution, so we need the input size to map the
coordinates to our real output frames.

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2023-12-16 21:50:37 +08:00
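
For illustration, a minimal sketch of the mapping this enables, assuming the
model reports box corners in pixels of its own input resolution (the helper
name is hypothetical):

    /* Scale a detection box from model-input coordinates to frame coordinates. */
    static void map_box_to_frame(float *x0, float *y0, float *x1, float *y1,
                                 int model_in_w, int model_in_h,
                                 int frame_w, int frame_h)
    {
        const float sx = (float)frame_w / model_in_w;
        const float sy = (float)frame_h / model_in_h;
        *x0 *= sx; *x1 *= sx;
        *y0 *= sy; *y1 *= sy;
    }
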
Wenbin Chen
22652b576c libavfilter/dnn_backend_openvino: Add multiple output support
Add multiple output support to the OpenVINO backend. You can use '&'
to separate different outputs when setting output names on the command
line.

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2023-12-16 21:50:16 +08:00
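
For illustration, a sketch of splitting an '&'-separated output-name string
with av_strtok() from libavutil/avstring.h; the surrounding helper is
hypothetical:

    #include "libavutil/avstring.h"
    #include "libavutil/mem.h"

    /* Split "out1&out2&out3" into at most max_names entries; returns the count.
     * The input is duplicated because av_strtok() modifies the string. */
    static int split_output_names(const char *spec, char **names, int max_names)
    {
        char *buf = av_strdup(spec), *saveptr = NULL;
        int n = 0;
        if (!buf)
            return 0;
        for (char *tok = av_strtok(buf, "&", &saveptr);
             tok && n < max_names;
             tok = av_strtok(NULL, "&", &saveptr))
            names[n++] = av_strdup(tok);
        av_freep(&buf);
        return n;
    }
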
Wenbin Chen
47b2328076 libavfilter/vf_dnn_detect: Add yolo support
Add YOLO support. A YOLO model doesn't output final results; it
outputs candidate boxes, so we need a post-processing step that
removes overlapping boxes to get the final results. Also, the box
coordinates are relative to the cell and the anchors, so we need this
information to calculate the boxes as well.

For model details, please refer to: https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v2-tf

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
2023-11-26 20:38:36 +08:00
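
For illustration, a minimal IoU-based overlap-removal (NMS) sketch of the kind
of post-processing described above; the Box struct and the threshold handling
are hypothetical, not the filter's actual code:

    #include "libavutil/macros.h"

    typedef struct Box { float x0, y0, x1, y1, confidence; int suppressed; } Box;

    static float box_iou(const Box *a, const Box *b)
    {
        float ix = FFMIN(a->x1, b->x1) - FFMAX(a->x0, b->x0);
        float iy = FFMIN(a->y1, b->y1) - FFMAX(a->y0, b->y0);
        float inter  = (ix > 0 && iy > 0) ? ix * iy : 0;
        float area_a = (a->x1 - a->x0) * (a->y1 - a->y0);
        float area_b = (b->x1 - b->x0) * (b->y1 - b->y0);
        return inter / (area_a + area_b - inter);
    }

    /* boxes are assumed sorted by descending confidence */
    static void suppress_overlaps(Box *boxes, int nb_boxes, float iou_thresh)
    {
        for (int i = 0; i < nb_boxes; i++) {
            if (boxes[i].suppressed)
                continue;
            for (int j = i + 1; j < nb_boxes; j++)
                if (!boxes[j].suppressed && box_iou(&boxes[i], &boxes[j]) > iou_thresh)
                    boxes[j].suppressed = 1;
        }
    }
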
Wenbin Chen
fa81de4af0 libavfilter/dnn/openvino: Reduce redundant memory allocation
We can get the data pointer directly from the tensor, so the extra
memory allocation can be removed.

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2023-11-11 09:32:31 +08:00
Wenbin Chen
58b6c0c327 libavfilter/dnn: Initialize DNNData variables
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2023-09-27 12:58:55 +08:00
Wenbin Chen
c8c925dc29 libavfilter/dnn: Add scale and mean preprocess to openvino backend
DNN models have different data preprocessing requirements. Scale and
mean parameters are added to preprocess the input data.

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2023-09-27 12:58:55 +08:00
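
For illustration, one way scale and mean are commonly applied to input pixels;
whether the backend divides or multiplies by scale is an assumption here, not
taken from its code:

    #include <stdint.h>

    /* One common convention: out = (in - mean) / scale. */
    static void normalize_plane(float *dst, const uint8_t *src, int n,
                                float mean, float scale)
    {
        /* a scale of 0 is treated as "leave values unscaled" */
        float s = (scale == 0.f) ? 1.f : scale;
        for (int i = 0; i < n; i++)
            dst[i] = ((float)src[i] - mean) / s;
    }
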
Wenbin Chen
74ce1d2d11 libavfilter/dnn: add layout option to openvino backend
DNN models have different input layouts (NCHW or NHWC), so a
"layout" option is added.
Use OpenVINO's API to do the layout conversion for input data, and use
swscale to do the layout conversion for output data, as OpenVINO
doesn't have a similar C API for output.

Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2023-09-27 12:58:55 +08:00
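
For reference, a small sketch of how element offsets differ between the two
layouts (independent of the OpenVINO and swscale conversion paths used by the
backend):

    #include <stddef.h>

    /* Offset of element (n, c, h, w) in a contiguous tensor, per layout. */
    static size_t offset_nchw(int n, int c, int h, int w, int C, int H, int W)
    {
        return ((size_t)n * C + c) * H * W + (size_t)h * W + w;
    }

    static size_t offset_nhwc(int n, int c, int h, int w, int C, int H, int W)
    {
        return (((size_t)n * H + h) * W + w) * C + c;
    }
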
Zhao Zhili
4f4dc0a1a2 avfilter/dnn_backend_openvino: fix wild pointer on error path
When ov_model_const_input_by_name/ov_model_const_output_by_name
fails, input_port/output_port can be a wild pointer.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-09-15 13:02:15 +08:00
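
For illustration, a sketch of the safer pattern: initialize the port pointers,
check the status before use, and free only what was actually obtained. The ov_*
calls are the OpenVINO 2.0 C API functions named above; the tensor names and
surrounding code are hypothetical:

    #include <openvino/c/openvino.h>

    static int get_ports(ov_model_t *model,
                         ov_output_const_port_t **in, ov_output_const_port_t **out)
    {
        ov_output_const_port_t *input_port = NULL, *output_port = NULL;

        if (ov_model_const_input_by_name(model, "input", &input_port) != 0)
            goto fail;               /* input_port stays NULL, never dangling */
        if (ov_model_const_output_by_name(model, "output", &output_port) != 0)
            goto fail;

        *in  = input_port;
        *out = output_port;
        return 0;
    fail:
        if (input_port)
            ov_output_const_port_free(input_port);
        if (output_port)
            ov_output_const_port_free(output_port);
        return -1;
    }
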
Zhao Zhili
791b88fcb4 avfilter/dnn_backend_openvino: fix input_port/output_port leaks 2023-09-15 13:02:15 +08:00
Zhao Zhili
37123100d2 avfilter/dnn_backend_openvino: fix leak of ov_shape_t 2023-09-15 13:02:15 +08:00
Zhao Zhili
d2c5c3b7ef avfilter/dnn_backend_openvino: fix leak of ov_core_t on error path 2023-09-15 13:02:15 +08:00
Zhao Zhili
e0880ef8cb avfilter/dnn_backend_openvino: fix use of uninitialized values
Error handling was broken since neither `ret` nor `task` was
initialized on the error path.
2023-09-15 13:02:15 +08:00
Zhao Zhili
7cb6329296 avfilter/dnn_backend_openvino: reduce indentation in free_model_ov
No functional changes except ensuring model isn't null.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-09-15 13:02:15 +08:00
Zhao Zhili
5369548f2e avfilter/dnn_backend_openvino: fix multiple memleaks
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-09-15 13:02:15 +08:00
Wenbin Chen
e79bd1f1b1 lavfi/dnn: Add OpenVINO API 2.0 support
OpenVINO API 2.0 was released in March 2022 and introduced new
features.
This commit implements the current OpenVINO features with the new 2.0
APIs; other API 2.0 features will be added later.
Please add the installation path, which includes openvino.pc, to
PKG_CONFIG_PATH manually so that configure can find the new OpenVINO
libraries.

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
2023-08-26 14:12:10 +08:00
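
For illustration, a stripped-down lifecycle sketch with the 2.0 C API (core
creation, model read, cleanup); the call names follow the OpenVINO 2.0 C API,
while the file path and error handling are placeholders:

    #include <openvino/c/openvino.h>

    static int load_ov_model(const char *xml_path)
    {
        ov_core_t  *core  = NULL;
        ov_model_t *model = NULL;
        int ret = -1;

        if (ov_core_create(&core) != 0)                         /* OK == 0 */
            goto end;
        if (ov_core_read_model(core, xml_path, NULL, &model) != 0)
            goto end;
        /* ... compile the model and create an infer request here ... */
        ret = 0;
    end:
        if (model)
            ov_model_free(model);
        if (core)
            ov_core_free(core);
        return ret;
    }
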
Zhao Zhili
32a749c7a6 avfilter/dnn_backend_openvino: fix log message
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:44 +08:00
Zhao Zhili
3a5d95e3fa avfilter/dnn_backend_tf: silence implicit cast warning
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:24 +08:00
Zhao Zhili
b0c0fedcda avfilter/dnn_backend_tf: fix use of uninitialized value
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:24 +08:00
Zhao Zhili
d9f41a343e avfilter/dnn_backend_tf: check TF_OperationOutputType return value
This also fixed a warning: implicit conversion from enumeration
type 'TF_DataType' (aka 'enum TF_DataType') to different
enumeration type 'DNNDataType'.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:24 +08:00
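
For illustration, a sketch of checking the value returned by
TF_OperationOutputType() instead of casting it blindly; TF_OperationOutputType(),
TF_Output and TF_FLOAT/TF_UINT8 come from the TensorFlow C API, while the
DNN-side enum is a simplified stand-in:

    #include <tensorflow/c/c_api.h>

    typedef enum { SAMPLE_DNN_FLOAT, SAMPLE_DNN_UINT8, SAMPLE_DNN_UNSUPPORTED } SampleDNNDataType;

    static SampleDNNDataType map_tf_output_type(TF_Output output)
    {
        switch (TF_OperationOutputType(output)) {
        case TF_FLOAT: return SAMPLE_DNN_FLOAT;
        case TF_UINT8: return SAMPLE_DNN_UINT8;
        default:       return SAMPLE_DNN_UNSUPPORTED;   /* reject instead of implicitly casting */
        }
    }
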
Zhao Zhili
f3495ef4f8 avfilter/dnn_backend_tf: remove unused define
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:23 +08:00
Zhao Zhili
016f2f61c3 avfilter/dnn: add log context to ff_get_dnn_module
Print the backend type on failure.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:23 +08:00
Zhao Zhili
505c43bb65 avfilter/dnn: refactor ff_get_dnn_module to remove allocation
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:23 +08:00
Zhao Zhili
3f52b7eedc avfilter/dnn: define each backend as a DNNModule
This avoids exporting multiple functions for each backend
implementation.

Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
2023-06-08 10:50:23 +08:00
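
For illustration, a sketch of the idea: each backend fills in one table of
function pointers instead of exporting a set of symbols. The struct and
signatures below are simplified stand-ins, not the exact DNNModule definition:

    typedef struct SampleModel SampleModel;              /* opaque per-backend model */

    typedef struct SampleDNNModule {
        SampleModel *(*load_model)(const char *path, const char *options);
        int          (*execute_model)(SampleModel *model, void *in, void *out);
        int          (*flush)(SampleModel *model);
        void         (*free_model)(SampleModel **model);
    } SampleDNNModule;

    /* A backend then exposes a single table, e.g.:
     *     const SampleDNNModule sample_dnn_backend_foo = {
     *         .load_model    = foo_load_model,
     *         .execute_model = foo_execute_model,
     *         .flush         = foo_flush,
     *         .free_model    = foo_free_model,
     *     };
     */
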
Ting Fu
78f95f1088 lavfi/dnn: Remove DNN native backend
According to the discussion in
https://etherpad.mit.edu/p/FF_dev_meeting_20221202 and the proposal in
http://ffmpeg.org/pipermail/ffmpeg-devel/2022-December/304534.html,
the DNN native backend should be removed as a first step.
All the DNN native backend related code is deleted.

Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-04-28 11:07:41 +08:00
Ting Fu
7ed6f28a7c lavfi/dnn: modify dnn interface for removing native backend
The native backend will be removed in the following commits, so change
the DNN interface and modify its error messages first.

Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-04-28 11:07:40 +08:00
Ting Fu
bc589c91f7 lavfi/dnn: add error info for TF backend filling task failure
Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-03-26 09:19:42 +08:00
Ting Fu
af052f9066 lavfi/dnn: fix mem leak in TF backend error handle
Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-03-26 09:19:42 +08:00
Ting Fu
5c216d081d lavfi/dnn: fix corruption when TF backend infer failed
Signed-off-by: Ting Fu <ting.fu@intel.com>
2023-03-26 09:19:42 +08:00
Saliev, Rafik F
8ad988ac37 libavfilter/dnn: fix openvino async mode
Bugfix: in 'async' mode, the OpenVINO DNN backend sets
'task->inference_done' to 'complete' before the data is copied from
the OpenVINO output buffer to the task's output frame.
This ordering causes the task to be destroyed in
ff_dnn_get_result_common() before the model output has been processed.

Signed-off-by: Rafik Saliev <rafik.f.saliev@intel.com>
2022-12-17 09:55:14 +08:00
Ting Fu
23953b9eb7 lavfi/dnn: dump OpenVINO model input/output names to OVModel struct.
Dump all input/output names to the OVModel struct, in case other
functions use them for reporting errors or locating issues.

Signed-off-by: Ting Fu <ting.fu@intel.com>
2022-07-24 08:38:50 +08:00
Shubhanshu Saxena
d0a999a0ab libavfilter: Remove DNNReturnType from DNN Module
This patch removes all occurrences of DNNReturnType from the DNN module.
It replaces DNN_SUCCESS with 0 (essentially the same), so the
functions that used DNNReturnType now return 0 in case of success and
negative values otherwise.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2022-03-12 15:10:28 +08:00
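
For illustration, a before/after sketch of the new calling convention;
fill_model_input() and the surrounding code are hypothetical:

    #include <errno.h>
    #include "libavutil/error.h"

    /* Hypothetical helper using the new convention: 0 on success, negative on error. */
    static int fill_model_input(void *model, void *task)
    {
        if (!model || !task)
            return AVERROR(EINVAL);
        return 0;
    }

    static int run_inference(void *model, void *task)
    {
        /* before: if (fill_model_input(...) != DNN_SUCCESS) return DNN_ERROR; */
        int ret = fill_model_input(model, task);
        if (ret < 0)
            return ret;              /* propagate the specific error code */
        return 0;
    }
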
Shubhanshu Saxena
1df77bab08 lavfi/dnn_backend_common: Return specific error codes
Switch to returning specific error codes or DNN_GENERIC_ERROR
when an error is encountered in the common DNN backend functions.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2022-03-12 15:10:28 +08:00
Shubhanshu Saxena
515ff6b4f8 lavfi/dnn_backend_native: Return Specific Error Codes
Switch to returning specific error codes or DNN_GENERIC_ERROR
when an error is encountered.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2022-03-12 15:10:28 +08:00
Shubhanshu Saxena
3fa89bd758 lavfi/dnn_backend_tf: Return Specific Error Codes
Switch to returning specific error codes or DNN_GENERIC_ERROR
when an error is encountered. For TensorFlow C API errors, currently
DNN_GENERIC_ERROR is returned.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2022-03-12 15:10:28 +08:00
Shubhanshu Saxena
91af38f2b3 lavfi/dnn_backend_openvino: Return Specific Error Codes
Switch to returning specific error codes or DNN_GENERIC_ERROR
when an error is encountered. For OpenVINO API errors, currently
DNN_GENERIC_ERROR is returned.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2022-03-12 15:10:28 +08:00
Shubhanshu Saxena
d0587daec2 lavfi/dnn_io_proc: Return Specific Error Codes
This commit returns specific error codes from the functions in the
dnn_io_proc instead of DNN_ERROR.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2022-03-12 15:10:28 +08:00
Shubhanshu Saxena
b602f11a06 lavfi/dnn: Error Specificity in Native Backend Layers
This commit returns specific error codes from the execution
functions in the Native Backend layers instead of DNN_ERROR.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2022-03-12 15:10:28 +08:00
Andreas Rheinhardt
1ea3650823 Replace all occurrences of av_mallocz_array() by av_calloc()
They do the same thing.

Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2021-09-20 01:03:52 +02:00
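
For illustration, the shape of the replacement; both helpers allocate zeroed
memory for nmemb elements of the given size with overflow checking (the
surrounding function is hypothetical):

    #include "libavutil/mem.h"

    static float *alloc_box_coords(size_t nb_boxes)
    {
        /* before: return av_mallocz_array(nb_boxes, 4 * sizeof(float)); */
        return av_calloc(nb_boxes, 4 * sizeof(float));
    }
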
Shubhanshu Saxena
660a205b05 lavfi/dnn: Rename InferenceItem to LastLevelTaskItem
This patch renames InferenceItem to LastLevelTaskItem in the
three backends to avoid confusion about the meanings of these structs.

The following are the renames done in this patch:

1. extract_inference_from_task -> extract_lltask_from_task
2. InferenceItem -> LastLevelTaskItem
3. inference_queue -> lltask_queue
4. inference -> lltask
5. inference_count -> lltask_count

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-28 16:19:07 +08:00
Shubhanshu Saxena
1544d6fa0a libavfilter: Remove Async Flag from DNN Filter Side
Remove async flag from filter's perspective after the unification
of async and sync modes in the DNN backend.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-28 16:19:07 +08:00
Shubhanshu Saxena
60b4d07cf6 libavfilter: Unify Execution Modes in DNN Filters
This commit unifies the async and sync mode from the DNN filters'
perspective. As of this commit, the Native backend only supports
synchronous execution mode.

Now the user can switch between async and sync mode by using the
'async' option in the backend_configs. The value can be 1 for async
or 0 for sync execution.

This commit affects the following filters:
1. vf_dnn_classify
2. vf_dnn_detect
3. vf_dnn_processing
4. vf_sr
5. vf_derain

This commit also updates the filters vf_dnn_detect and vf_dnn_classify
to send only the input frame and send NULL as output frame instead of
input frame to the DNN backends.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-28 16:19:07 +08:00
Shubhanshu Saxena
d39580ac11 lavfi/dnn: Task-based Inference in Native Backend
This commit rearranges the code in Native Backend to use the TaskItem
for inference.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-08-28 16:19:07 +08:00