Commit Graph

187 Commits

Author SHA1 Message Date
Shubhanshu Saxena 5b8e828dee lavfi/dnn/safe_queue.h: Add Documentation to SafeQueue
Documentation for SafeQueue

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
2021-04-27 10:01:20 +08:00
shubhanshu02 d98884be41 lavfi/dnn_backend_openvino.c: Spelling Correction in OpenVino Backend
Correct the spelling of the word `descibe` to `describe`
in init_model_ov

Signed-off-by: shubhanshu02 <shubhanshu.e01@gmail.com>
2021-04-25 09:02:54 +08:00
Guo, Yejun 13bf797ced lavfi/dnn: add post process for detection 2021-04-08 09:23:02 +08:00
Guo, Yejun 59021d79a2 lavfi/dnn: refine code for frame pre/proc processing 2021-04-08 09:23:02 +08:00
Guo, Yejun d2ccbc966b lavfi/dnn_backend_openvino.c: only allow DFT_PROCESS_FRAME to get output dim 2021-04-08 09:23:02 +08:00
Ting Fu 637bdefdeb lavfi/dnn_backend_tensorflow.c: fix mem leak in execute_model_tf
Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-03-25 13:10:32 +08:00
Ting Fu b08b5f07b6 lavfi/dnn_backend_tensorflow.c: fix mem leak in load_native_model
Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-03-25 13:10:32 +08:00
Ting Fu 3499ec84cc lavfi/dnn_backend_tensorflow.c: fix mem leak in load_tf_model
Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-03-25 13:10:32 +08:00
Wenlong Ding b460595dd7 lavfi/dnn/dnn_backend_native_layer_mathunary: add exp support
Signed-off-by: Wenlong Ding <wenlong.ding@intel.com>
2021-03-24 13:53:50 +08:00
Guo, Yejun da12d600ea lavfi/dnn_backend_openvino.c: fix mem leak for TaskItem upon error 2021-03-18 09:30:09 +08:00
Guo, Yejun df59ae8bb2 lavfi/dnn_backend_openvino.c: fix mem leak for RequestItem upon error 2021-03-18 09:30:09 +08:00
Guo, Yejun 41f4af16fc lavfi/dnn_backend_openvino.c: fix typo upon error 2021-03-18 09:30:09 +08:00
Guo, Yejun bd3ca0859e lavfi/dnn_backend_openvino.c: fix mem leak for input_blob and output_blob upon error 2021-03-18 09:30:09 +08:00
Guo, Yejun 3ce2ee7f54 lavfi/dnn_backend_openvino.c: fix mem leak for AVFrame upon error 2021-03-18 09:30:09 +08:00
Andreas Rheinhardt 2f056def65 dnn/dnn_backend_native_layer_mathbinary: Fix leak upon error
Fixes Coverity issue #1473568.

Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 13:21:48 +01:00
Andreas Rheinhardt 723ebf029a dnn/dnn_backend_native_layer_conv2d: Don't pretend convolution can fail
It can't; these are just remnants of commit
3c7cad69f2 which let the worker threads
do the reallocation.

Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 13:20:00 +01:00
Andreas Rheinhardt 9da96d8b69 dnn/dnn_backend_native_layer_conv2d: Check thread creation for errors
Fixes Coverity issue #1473533.

Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 13:13:00 +01:00
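A minimal sketch of the kind of check this commit describes (the surrounding names are hypothetical, not the actual FFmpeg code):

int ret = pthread_create(&thread, NULL, worker_func, &thread_param);
if (ret) {
    /* pthread_create returns 0 on success, an error number otherwise */
    av_log(ctx, AV_LOG_ERROR, "Failed to create thread: %s\n",
           av_err2str(AVERROR(ret)));
    return DNN_ERROR;
}
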
Andreas Rheinhardt c2fef8f16c dnn/dnn_backend_native_layer_conv2d: Check allocation
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 13:10:46 +01:00
Andreas Rheinhardt d14ae74064 dnn/dnn_backend_native_layer_conv2d: Avoid separate, unchecked allocations
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 13:09:48 +01:00
Andreas Rheinhardt 508d7005a0 dnn/dnn_backend_native_layer_conv2d: Fix memleak on error
If an error happens when preparing the output data buffer, an already
allocated array would leak. Fix this by postponing its allocation.

Fixes Coverity issue #1473531.

Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 13:06:32 +01:00
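The idea, as a hedged sketch with hypothetical names: do the fallible preparation first and only then allocate, so there is nothing to free on the error path.

if (prepare_output_buffer(ctx) < 0)
    return DNN_ERROR;              /* nothing allocated yet, nothing leaks */
params = av_malloc_array(nb_threads, sizeof(*params));  /* postponed */
if (!params)
    return DNN_ERROR;
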
Andreas Rheinhardt 16e36d5c0c dnn/dnn_backend_native_layer_conv2d: Avoid allocation when single-threaded
Also fixes a memleak in single-threaded mode when an error happens
while preparing the output data buffer, and removes an unchecked
allocation.

Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 12:47:11 +01:00
Andreas Rheinhardt 639b3bbbc7 dnn/dnn_backend_native_layer_conv2d: Join two arrays, avoid allocation
Fixes Coverity issue #1473507.

Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 12:45:45 +01:00
Andreas Rheinhardt d915c1dd7a dnn/dnn_backend_native_layer_conv2d: Fix memleak on realloc failure
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 12:44:09 +01:00
Andreas Rheinhardt f51f13902b dnn/dnn_backend_native: Fix typo in log message
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 12:43:08 +01:00
Andreas Rheinhardt 1e47ef600d dnn/dnn_backend_native: Don't use asserts for checks
Asserts should not be used in place of ordinary input checks,
yet the native DNN backend did it: get_input_native() asserted that
the first dimension was one, despite this value coming directly from
the input file without having been sanitized.

Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 12:39:01 +01:00
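A minimal sketch of the safer pattern (variable names hypothetical):

/* validate untrusted values from the model file instead of asserting */
if (dims[0] != 1) {
    av_log(ctx, AV_LOG_ERROR, "Expected the first dimension to be 1, got %d\n",
           dims[0]);
    return DNN_ERROR;
}
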
Andreas Rheinhardt 0e078c6cfa dnn/dnn_backend_native: Fix leak in case parsing options fails
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 12:34:17 +01:00
Andreas Rheinhardt 2e2ed39dac dnn/dnn_backend_native: Avoid allocation for checking file magic
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 12:32:25 +01:00
Ting Fu b0d75a8de9 dnn_backend_openvino.c: allow out_frame as NULL for analytic case 2021-02-18 09:59:37 +08:00
Guo, Yejun 2da3a5c10f dnn: add color conversion for analytic case
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-02-18 09:59:37 +08:00
Guo, Yejun 76fc6879e2 dnn: add function type for model
So the backend knows whether the model is used for frame processing,
detection, classification, etc. Each function type has different behavior
in the backend when handling the input/output data of the model.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-02-18 09:59:37 +08:00
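A sketch of what such a function type might look like; DFT_PROCESS_FRAME appears in commit d2ccbc966b above, the other members here are guesses for illustration:

typedef enum {
    DFT_NONE,
    DFT_PROCESS_FRAME,     /* filters a frame, e.g. super resolution */
    DFT_ANALYTICS_DETECT,  /* detection, fills frame side data instead */
} DNNFunctionType;
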
Guo, Yejun 995c33a046 dnn_backend_openvino.c: fix multi-thread issue for async execution
Once we mark the task as done in function infer_completion_callback,
the task may be released in function ff_dnn_get_async_result_ov in
another thread just afterwards, so we need to record the request queue
first instead of using task->ov_model->request_queue later.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-02-18 09:59:37 +08:00
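Schematically (names taken from the commit message, the surrounding code is hypothetical):

static void infer_completion_callback(void *args)
{
    TaskItem *task = args;
    SafeQueue *request_queue = task->ov_model->request_queue; /* record first */
    task->done = 1;  /* after this, ff_dnn_get_async_result_ov in another
                        thread may free the task */
    /* ... from here on, only the saved request_queue pointer is used,
       never task->ov_model->request_queue */
}
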
Guo, Yejun 51c105a62d dnn_backend_openvino.c: fix mismatch between ffmpeg(NHWC) and openvino(NCHW)
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-02-18 09:59:37 +08:00
Guo, Yejun eccc7971c2 dnn_backend_openvino.c: remove extra semicolon 2021-01-28 09:45:13 +08:00
Guo, Yejun 06c01f1763 dnn: remove type cast which is not necessary 2021-01-28 09:45:13 +08:00
Mark Thompson bb96824510 dnn: Add ff_ prefix to unnamespaced globals
Reviewed-By: Guo, Yejun <yejun.guo@intel.com>
2021-01-22 15:03:09 +08:00
Mark Thompson 2c424d9630 dnn_backend_native.c: Add missing static to local variable 2021-01-22 12:18:03 +08:00
Mark Thompson c6a3ca2db4 dnn_backend_native_layer_mathbinary.c: Delete unused global variable 2021-01-22 10:18:36 +08:00
Guo, Yejun a11a3f358d dnn_backend_native_layer_conv2d.c: refine code with av_malloc_array and av_freep
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-01-22 08:28:13 +08:00
Guo, Yejun a76fa0caa0 dnn_backend_native_layer_conv2d.c: correct struct name with CamelCase
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-01-22 08:28:13 +08:00
Guo, Yejun d4f40c1b60 dnn/queue: remove prefix FF for Queue and SafeQueue
We don't need the FF prefix for internal data structs.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-01-22 08:28:13 +08:00
Guo, Yejun c5e30d588d libavfilter/dnn: add prefix ff_ for internal functions
from proc_from_frame_to_dnn to ff_proc_from_frame_to_dnn, and
from proc_from_dnn_to_frame to ff_proc_from_dnn_to_frame.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-01-22 08:28:13 +08:00
Guo, Yejun 2d6af4a501 libavfilter/dnn: use avpriv_report_missing_feature for unsupported features
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-01-22 08:28:13 +08:00
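For reference, the helper is used like this (the message text is hypothetical):

avpriv_report_missing_feature(ctx, "DNN native layer type %d", layer_type);
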
Guo, Yejun 0d5fd4999a dnn_backend_openvino.c: add version mismatch reminder
The OpenVINO model file format changes when OpenVINO moves to a new
release; it does not work if the versions of the model file and the
runtime are mismatched.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-01-22 08:28:13 +08:00
Ting Fu 71b82e4ffd dnn/openvino: support model input resize
OpenVINO APIs require a specified input size to run the model, while some
OpenVINO models do accept different input sizes. To enable this feature,
the input_resizable option is added here for easier use.
Set the bool variable input_resizable to specify whether the input is resizable.
input_resizable = 1 means input resize is supported, i.e. different input sizes are accepted.
input_resizable = 0 (default) means input resize is not supported.
Please make sure the inference model does accept different input sizes
before using this option, otherwise the inference engine may report error(s).
eg: ./ffmpeg -i video_name.mp4 -vf dnn_processing=dnn_backend=openvino:\
      model=model_name.xml:input=input_name:output=output_name:\
      options=device=CPU\&input_resizable=1 -y output_video_name.mp4

Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-01-18 13:09:22 +08:00
Ting Fu 048d5cc620 dnn/openvino: refine code for better model initialization
Move openvino model/inference request creation and initialization steps
from ff_dnn_load_model_ov to new function init_model_ov, for later input
resize support.

Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-01-18 13:09:22 +08:00
Ting Fu 946fcd4508 dnn/openvino: remove unnecessary code
Signed-off-by: Ting Fu <ting.fu@intel.com>
2021-01-18 13:09:21 +08:00
Guo, Yejun 64ea15f050 libavfilter/dnn: add batch mode for async execution
The default value of batch_size is 1.

Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2021-01-15 08:59:54 +08:00
Andreas Rheinhardt 2c6f532e0a Mark some pointers as const
Reviewed-by: Lynne <dev@lynne.ee>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-01-01 15:25:48 +01:00
Guo, Yejun 6b0cfa8399 dnn/queue: add error check and cleanup
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-31 08:31:17 +08:00
Guo, Yejun 97f520b700 dnn: fix issue when pthread is not supported
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-31 08:31:17 +08:00
Guo, Yejun 8e78d5d394 dnn: fix redefining typedefs and also refine naming with correct prefix
The prefix for symbols not exported from the library and not
local to one translation unit is ff_ (or FF for types).

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-31 08:31:17 +08:00
Guo, Yejun 5024286465 dnn_interface: change from 'void *userdata' to 'AVFilterContext *filter_ctx'
'void *' is too flexible; since we can derive the needed info from
AVFilterContext*, we just unify the interface with this data
structure.

Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-29 09:31:06 +08:00
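Schematically, the change to the interface (a sketch, not the verbatim header):

typedef struct DNNModel {
    void *model;                  /* backend-private data */
    /* before: void *userdata;  after: */
    AVFilterContext *filter_ctx;  /* the filter that owns this model */
} DNNModel;
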
Guo, Yejun e67b5d0a24 dnn: add async execution support for openvino backend
Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-29 09:31:06 +08:00
Guo, Yejun 39f5cb4bd1 dnn_interface: add interface to support async execution
Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-29 09:31:06 +08:00
Guo, Yejun 38089925fa dnn_backend_openvino.c: refine code for error handle
Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-29 09:31:06 +08:00
Guo, Yejun 2b177033bb dnn_backend_openvino.c: separate function execute_model_ov
The functions fill_model_input_ov and infer_completion_callback are
extracted so they can be reused for async execution.

Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-29 09:31:06 +08:00
Xie, Lin 6506ab8b03 dnn/queue: add queue and safe_queue support
Signed-off-by: Xie, Lin <lin.xie@intel.com>
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-29 09:31:06 +08:00
Ting Fu 5dbabb020e dnn: add NV12 pixel format support
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-12-22 10:53:35 +08:00
Jun Zhao 0320dab265 lavfi/dnn: check the return value from sws_getContext
sws_getContext may return NULL, and it would then be dereferenced,
so add the check.

Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
2020-12-12 13:34:30 +08:00
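The pattern in question, as a minimal sketch with placeholder sizes and formats:

struct SwsContext *sws_ctx = sws_getContext(in_w, in_h, AV_PIX_FMT_GRAY8,
                                            out_w, out_h, AV_PIX_FMT_GRAYF32,
                                            0, NULL, NULL, NULL);
if (!sws_ctx) {
    av_log(ctx, AV_LOG_ERROR, "Failed to create scale context\n");
    return DNN_ERROR;
}
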
Jun Zhao ae2075265b lavfi/dnn: use the format name in debug message
Use the format name in the debug message.

Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
2020-12-12 13:34:24 +08:00
Guo, Yejun c4a3dbe726 dnn_backend_tf.c: add option sess_config for tf backend
The TensorFlow C library accepts a config for session options to
set different parameters for the inference. This patch exports
this interface.

The config is a serialized tensorflow.ConfigProto proto, so we need
two steps to use it:
1. generate the serialized proto with python (see the script example below);
the output looks like: 0xab...cd,
where 0xcd is the least significant byte and 0xab is the most significant byte.

2. pass the python script output into ffmpeg with
dnn_processing=options=sess_config=0xab...cd

The following script is an example that specifies one GPU. If the system contains
3 GPU cards, visible_device_list could be '0', '1', '2', '0,1', etc.
'0' does not necessarily mean physical GPU card 0; we need to try and see.
We can also add more options here to generate more serialized protos.

script example to generate serialized proto which specifies one GPU:
import tensorflow as tf
gpu_options = tf.GPUOptions(visible_device_list='0')
config = tf.ConfigProto(gpu_options=gpu_options)
s = config.SerializeToString()
b = ''.join("%02x" % int(ord(b)) for b in s[::-1])
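# note: the line above targets Python 2; on Python 3, where iterating
# bytes yields ints, s[::-1].hex() produces the same hex string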
print('0x%s' % b)
2020-10-19 20:54:29 +08:00
Chris Miceli 6bdfea8d4b libavfilter/dnn/dnn_backend{openvino, tf}: check memory alloc non-NULL
These calls previously did not check that the return value was non-NULL,
meaning the code was susceptible to a SIGSEGV. This patch checks those values.
2020-10-14 11:08:09 +08:00
Chris Miceli ad95e5e45d libavfilter/dnn_backend_native: check mem allocation
Check that frame allocations return non-NULL.
2020-10-14 10:19:05 +08:00
Mingyu Yin ad2546e3b3 dnn/native: add native support for dense
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
2020-09-29 14:19:55 +08:00
Guo, Yejun e71d73b096 dnn: add a new interface DNNModel.get_output
For some cases (for example, super resolution), the DNN model changes
the frame size, which impacts the filter behavior, so the filter needs
to know the output frame size at the very beginning.

Currently, the filter reuses DNNModule.execute_model to query the
output frame size; this is not clear from an interface perspective,
so add a new explicit interface DNNModel.get_output for such queries.
2020-09-21 21:26:56 +08:00
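A plausible shape for such an interface (this signature is a guess for illustration, not the verbatim header):

typedef struct DNNModel {
    /* ... */
    /* query the output size for a given input size, before any
     * frame is actually processed */
    DNNReturnType (*get_output)(void *model, const char *input_name,
                                int input_width, int input_height,
                                const char *output_name,
                                int *output_width, int *output_height);
} DNNModel;
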
Guo, Yejun fce3e3e137 dnn: put DNNModel.set_input and DNNModule.execute_model together
Suppose we have a detect filter and a classify filter in the future: the
detect filter generates some bounding boxes (BBoxes) as AVFrame side data,
and the classify filter executes a DNN model for each BBox. For each
BBox, we need to crop the AVFrame, copy the data to the DNN model input and
run the model execution. So we would have to save the in_frame at
DNNModel.set_input and use it at DNNModule.execute_model; such saving is
not feasible once we support async execute_model.

This patch makes the in_frame a parameter of execute_model, so all the
information is put together within the same function for each
inference. It also makes it easy to support async BBox inference.
2020-09-21 21:26:56 +08:00
Guo, Yejun 2003e32f62 dnn: change dnn interface to replace DNNData* with AVFrame*
Currently, every filter needs to provide code to transfer data from
AVFrame* to the model input (DNNData*), and also from the model output
(DNNData*) to AVFrame*. Actually, such transfers can be implemented
within the DNN module, so each filter can focus on its own business logic.

The DNN module also exports the function pointers pre_proc and post_proc
in struct DNNModel, in case a filter has special logic
to transfer data between AVFrame* and DNNData*. The default implementation
within the DNN module is used if the filter does not set pre/post_proc.
2020-09-21 21:26:56 +08:00
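Schematically, the hooks might look like this (the names follow the commit message; the exact signatures are assumptions):

typedef struct DNNModel {
    /* ... */
    /* optional hooks; if left NULL, the DNN module's default
     * AVFrame* <-> DNNData* transfer is used */
    int (*pre_proc)(AVFrame *frame_in, DNNData *model_input,
                    AVFilterContext *filter_ctx);
    int (*post_proc)(AVFrame *frame_out, DNNData *model_output,
                     AVFilterContext *filter_ctx);
} DNNModel;
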
Guo, Yejun 6918e240d7 dnn: add userdata for load model parameter
The userdata will be used for the interaction between AVFrame and DNNData.
2020-09-21 21:26:56 +08:00
Xu Jun a39fcbdffb dnn_backend_native_layer_conv2d.c: fix bug of loop boundary in single thread mode.
Before this patch, the dnn FATE test could fail in some Windows environments
while succeeding on my Linux machine. The bug was caused by a wrong loop boundary.
After the patch, the FATE test succeeds with my Windows mingw 64-bit build.

Signed-off-by: Xu Jun <xujunzz@sjtu.edu.cn>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-09-20 12:30:47 +08:00
Xu Jun 7d3cd9f956 dnn_backend_native_layer_conv2d.c: refine code.
Move the thread area allocation out of the thread function into
the main thread.

Signed-off-by: Xu Jun <xujunzz@sjtu.edu.cn>
2020-09-17 08:45:23 +08:00
Xu Jun 8e67ae2cb4 dnn_backend_native_layer_conv2d.c: fix memory allocation bug in multithread function.
Before this patch, memory was allocated in each thread function,
which could cause more than one memory allocation and
lead to a crash.

After the patch, memory is allocated once in the main thread,
and an index is passed into the thread functions. Bug fixed.

Signed-off-by: Xu Jun <xujunzz@sjtu.edu.cn>
2020-09-17 08:45:23 +08:00
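The pattern described above, as a minimal sketch (types and names hypothetical, error handling mostly omitted):

/* allocate all per-thread state once, in the main thread */
ThreadParam *params = av_malloc_array(nb_threads, sizeof(*params));
if (!params)
    return DNN_ERROR;
for (int i = 0; i < nb_threads; i++) {
    params[i].index  = i;     /* each worker receives only its index...  */
    params[i].common = &ctx;  /* ...plus a pointer to the shared state   */
    pthread_create(&params[i].thread, NULL, worker, &params[i]);
}
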
Ting Fu dc16aeb390 dnn/openvino: add input/output name info
Show all input/output names when the given input or output name is not correct.

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-09-12 16:15:30 +08:00
Ting Fu 87cb24a1ca dnn/openvino: support run inference via GPU
To enable OpenVINO GPU support:
1. install the required OpenCL drivers, see: https://github.com/intel/compute-runtime/releases/tag/19.41.14441
2. build the OpenVINO C lib with GPU enabled: use the cmake config option -DENABLE_CLDNN=ON
3. then make, and add the OpenVINO C lib to the environment variables
For detailed steps please refer to: https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md

To run inference with GPU, please add: options=device=GPU

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-09-12 16:15:30 +08:00
Andreas Rheinhardt 9beaf536fe dnn/dnn_backend_native_layer_conv2d: Fix allocation size
Found via ASAN with the dnn-layer-conv2d FATE-test.

Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2020-09-09 14:58:26 +02:00
Xu Jun 3c7cad69f2 dnn_backend_native_layer_conv2d.c: Add multithread function
Use pthread to multithread dnn_execute_layer_conv2d.
Can be tested with command "./ffmpeg_g -i input.png -vf \
format=yuvj420p,dnn_processing=dnn_backend=native:model= \
espcn.model:input=x:output=y:options=conv2d_threads=23 \
 -y sr_native.jpg -benchmark"

before patch: utime=11.238s stime=0.005s rtime=11.248s
after patch:  utime=20.817s stime=0.047s rtime=1.051s
on my 3900X 12c24t @4.2GHz

About the increase in utime: CPU Hyper-Threading makes the number of
logical cores twice the number of physical cores, while the CPU's
computing throughput improves by less than a factor of two, and utime
sums the runtime of all logical cores. As a result, using a thread
count near the number of logical cores roughly doubles utime while
reducing rtime by less than half on Hyper-Threading CPUs.

Signed-off-by: Xu Jun <xujunzz@sjtu.edu.cn>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-09-09 14:24:36 +08:00
Xu Jun 235e01f5a0 dnn_backend_native.c: parse options in native backend
Signed-off-by: Xu Jun <xujunzz@sjtu.edu.cn>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-09-09 14:24:36 +08:00
Ting Fu 4a11a6f4cc dnn/tensorflow: add log error message
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-08-31 13:12:10 +08:00
Ting Fu 74358ff4a4 dnn/openvino: add log error message
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-08-31 13:12:10 +08:00
Ting Fu c8ba0daf8d dnn/native: add log error message
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-08-25 13:03:46 +08:00
Ting Fu 230cf9d185 dnn/native: unify error return to DNN_ERROR
Unify all error returns as DNN_ERROR, in order to stop model execution
when an error is returned from layer_func.pf_exec in ff_dnn_execute_model_native.

Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-08-25 13:03:46 +08:00
Guo, Yejun 0f7a99e37a dnn: move output name from DNNModel.set_input_output to DNNModule.execute_model
Currently, the output is set both at DNNModel.set_input_output and
DNNModule.execute_model. It makes sense for the output name to be
provided at model inference time, so that all the output info is set
at a single place.

Accordingly, DNNModel.set_input_output is renamed to DNNModel.set_input.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-25 09:02:59 +08:00
Mingyu Yin 3477feb643 dnn_backend_native_layer_mathbinary: add floormod support
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
2020-08-24 09:09:11 +08:00
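Floormod differs from the C remainder in that the result takes the sign of the divisor; a one-line reference implementation (not the backend's actual code):

#include <math.h>

static float floormod(float x, float y)
{
    return x - y * floorf(x / y);  /* e.g. floormod(-1.0f, 3.0f) == 2.0f */
}
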
Mingyu Yin 37ef1acedb dnn_backend_native_layer_mathbinary: change to function pointer
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
2020-08-24 09:09:11 +08:00
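The refactor presumably replaces a per-element switch on the operator with a function pointer chosen once; a hedged sketch:

typedef float (*math_binary_func)(float src0, float src1);

static float sub(float a, float b) { return a - b; }

/* pick the operation once, outside the inner loop */
math_binary_func f = sub;
for (int i = 0; i < nb_elements; i++)
    dst[i] = f(src0[i], src1[i]);
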
Andreas Rheinhardt 128e6df1cd dnn_backend_native_layer_avgpool: Fix invalid assignment, use av_assert
dnn_execute_layer_avg_pool() contains the following line:

assert(avgpool_params->padding_method = VALID);

This statement contains an assignment where obviously a comparison was
intended. Furthermore, *avgpool_params is const, so that the attempted
assignment leads to a compilation failure if asserts are enabled
(i.e. if DEBUG is defined which leads libavutil/internal.h to not define
NDEBUG). Moreover, the enumeration constant VALID actually has the value 0,
so that the assert would be triggered if a compiler compiles this with
asserts enabled. Finally, the statement uses assert() directly instead
of av_assert*().

All these errors have been fixed.

Thanks to ubitux for providing a FATE-box [1] where DEBUG is defined.

[1]: http://fate.ffmpeg.org/history.cgi?slot=x86_64-archlinux-gcc-ddebug

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-21 22:12:39 +08:00
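After all three fixes, the statement presumably reads:

av_assert0(avgpool_params->padding_method == VALID);
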
Ting Fu a6e830ae7f dnn/native: rename struct ConvolutionalNetwork to NativeModel
Signed-off-by: Ting Fu <ting.fu@intel.com>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-21 10:39:00 +08:00
Guo, Yejun 3c05c8a15f dnn_backend_tf.c: fix build issue for tensorflow backend
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-14 08:59:39 +08:00
Guo, Yejun 0a51abe8ab dnn: add backend options when load the model
Different backends might need different options for better performance,
so add the parameter to the dnn interface as a preparation.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-12 15:43:40 +08:00
Mingyu Yin 4ed6bca4ae dnn_backend_native_layer_mathunary: add round support
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-12 10:30:46 +08:00
Ting Fu 91efc41a69 dnn/native: add native support for avg_pool
Pooling strides in the channel dimension are not supported yet.

Signed-off-by: Ting Fu <ting.fu@intel.com>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-10 16:37:39 +08:00
Mingyu Yin fab00b0ae0 dnn_backend_native_layer_mathunary: add floor support
It can be tested with the model generated with below python script:

import tensorflow as tf
import os
import numpy as np
import imageio
from tensorflow.python.framework import graph_util
name = 'floor'

pb_file_path = os.getcwd()
if not os.path.exists(pb_file_path+'/{}_savemodel/'.format(name)):
    os.mkdir(pb_file_path+'/{}_savemodel/'.format(name))

with tf.Session(graph=tf.Graph()) as sess:
    in_img = imageio.imread('detection.jpg')
    in_img = in_img.astype(np.float32)
    in_data = in_img[np.newaxis, :]
    input_x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    y_ = tf.math.floor(input_x*255)/255
    y = tf.identity(y_, name='dnn_out')
    sess.run(tf.global_variables_initializer())
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])

    with tf.gfile.FastGFile(pb_file_path+'/{}_savemodel/model.pb'.format(name), mode='wb') as f:
        f.write(constant_graph.SerializeToString())

    print("model.pb generated, please in ffmpeg path use\n \n \
    python tools/python/convert.py {}_savemodel/model.pb --outdir={}_savemodel/ \n \nto generate model.model\n".format(name,name))

    output = sess.run(y, feed_dict={ input_x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))

    print("To verify, please ffmpeg path use\n \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow -f framemd5 {}_savemodel/tensorflow_out.md5\n  \
    or\n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow {}_savemodel/out_tensorflow.jpg\n \nto generate output result of tensorflow model\n".format(name, name, name, name))

    print("To verify, please ffmpeg path use\n \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native -f framemd5 {}_savemodel/native_out.md5\n  \
    or \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native {}_savemodel/out_native.jpg\n \nto generate output result of native model\n".format(name, name, name, name))

Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
2020-08-07 10:34:22 +08:00
Mingyu Yin 9fbdd5454b dnn_backend_native_layer_mathunary: add ceil support
It can be tested with the model generated with below python script:

import tensorflow as tf
import os
import numpy as np
import imageio
from tensorflow.python.framework import graph_util
name = 'ceil'

pb_file_path = os.getcwd()
if not os.path.exists(pb_file_path+'/{}_savemodel/'.format(name)):
    os.mkdir(pb_file_path+'/{}_savemodel/'.format(name))

with tf.Session(graph=tf.Graph()) as sess:
    in_img = imageio.imread('detection.jpg')
    in_img = in_img.astype(np.float32)
    in_data = in_img[np.newaxis, :]
    input_x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    y = tf.math.ceil( input_x, name='dnn_out')
    sess.run(tf.global_variables_initializer())
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])

    with tf.gfile.FastGFile(pb_file_path+'/{}_savemodel/model.pb'.format(name), mode='wb') as f:
        f.write(constant_graph.SerializeToString())

    print("model.pb generated, please in ffmpeg path use\n \n \
    python tools/python/convert.py ceil_savemodel/model.pb --outdir=ceil_savemodel/ \n \n \
    to generate model.model\n")

    output = sess.run(y, feed_dict={ input_x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))

    print("To verify, please ffmpeg path use\n \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model=ceil_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow -f framemd5 ceil_savemodel/tensorflow_out.md5\n \n \
    to generate output result of tensorflow model\n")

    print("To verify, please ffmpeg path use\n \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model=ceil_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native -f framemd5 ceil_savemodel/native_out.md5\n \n \
    to generate output result of native model\n")

Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
2020-08-04 19:56:54 +08:00
Reimar Döffinger 584f396132 dnn_backend_native: Add overflow check for length calculation.
We should not silently allocate an incorrect sized buffer.
Fixes trac issue #8718.

Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
2020-07-06 20:22:30 +08:00
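A minimal sketch of such a check using libavutil's av_size_mult (the surrounding names are hypothetical):

size_t len;
if (av_size_mult(nb_elements, sizeof(float), &len) < 0) {
    av_log(ctx, AV_LOG_ERROR, "Buffer size calculation would overflow\n");
    return DNN_ERROR;  /* refuse to silently allocate a wrong-sized buffer */
}
buf = av_malloc(len);
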
Ting Fu c0cdeea0ee dnn_backend_native_layer_mathunary: add atanh support
It can be tested with the model generated with below python script:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')

# please uncomment the part you want to test

x_sinh_1 = tf.sinh(x)
x_out = tf.divide(x_sinh_1, 1.176) # sinh(1.0)

x_cosh_1 = tf.cosh(x)
x_out = tf.divide(x_cosh_1, 1.55) # cosh(1.0)

x_tanh_1 = tf.tanh(x)
x_out = tf.divide(x_tanh_1, 0.77) # tanh(1.0)

x_asinh_1 = tf.asinh(x)
x_out = tf.divide(x_asinh_1, 0.89) # asinh(1.0/1.1)

x_acosh_1 = tf.add(x, 1.1)
x_acosh_2 = tf.acosh(x_acosh_1) # accept (1, inf)
x_out = tf.divide(x_acosh_2, 1.4) # acosh(2.1)

x_atanh_1 = tf.divide(x, 1.1)
x_atanh_2 = tf.atanh(x_atanh_1) # accept (-1, 1)
x_out = tf.divide(x_atanh_2, 1.55) # atanh(1.0/1.1)

y = tf.identity(x_out, name='dnn_out') #please only preserve the x_out you want to test

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
Ting Fu cd2e3a864d dnn_backend_native_layer_mathunary: add acosh support
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
Ting Fu 9d14b38d9d dnn_backend_native_layer_mathunary: add asinh support
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
Ting Fu ea71e731f4 dnn_backend_native_layer_mathunary: add tanh support
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
Ting Fu 62fc7e3035 dnn_backend_native_layer_mathunary: add cosh support
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
Ting Fu 91b4037101 dnn_backend_native_layer_mathunary: add sinh support
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-07-06 12:45:14 +08:00
Guo, Yejun ff37ebaf30 dnn: add openvino as one of dnn backend
OpenVINO is a Deep Learning Deployment Toolkit at
https://github.com/openvinotoolkit/openvino; it supports CPU, GPU
and heterogeneous plugins to accelerate deep learning inference.

Please refer to https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md
to build openvino (c library is built at the same time). Please add
option -DENABLE_MKL_DNN=ON for cmake to enable CPU path. The header
files and libraries are installed to /usr/local/deployment_tools/inference_engine/
with default options on my system.

To build FFmpeg with openvino, taking my system as an example, run:
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/deployment_tools/inference_engine/lib/intel64/:/usr/local/deployment_tools/inference_engine/external/tbb/lib/
$ ../ffmpeg/configure --enable-libopenvino --extra-cflags=-I/usr/local/deployment_tools/inference_engine/include/ --extra-ldflags=-L/usr/local/deployment_tools/inference_engine/lib/intel64
$ make

Here are the features provided by OpenVINO inference engine:
- support more DNN model formats
It supports TensorFlow, Caffe, ONNX, MXNet and Kaldi by converting them
into the OpenVINO format with a python script, and torch models
can first be converted into ONNX and then to the OpenVINO format.

see the script at https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer/mo.py
which also does some optimization at model level.

- optimize at inference stage
It optimizes for X86 CPUs with SSE, AVX etc.

It also optimizes based on OpenCL for Intel GPUs.
(only Intel GPUs are supported, because the Intel OpenCL extension is used for optimization)

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
2020-07-02 09:36:34 +08:00
Ting Fu 13f5613e68 dnn_backend_native_layer_mathunary: add atan support
It can be tested with the model generated with below python script:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.atan(x)
x2 = tf.divide(x1, 3.1416/4) # pi/4
y = tf.identity(x2, name='dnn_out')

sess=tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
2020-06-25 08:41:50 +08:00