Commit Graph

16 Commits

Author SHA1 Message Date
Andreas Rheinhardt 2934a4b9a5 Remove unnecessary avassert.h inclusions
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
2021-07-22 15:02:30 +02:00
Andreas Rheinhardt 2f056def65 dnn/dnn_backend_native_layer_mathbinary: Fix leak upon error
Fixes Coverity issue #1473568.

Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-11 13:21:48 +01:00
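
The leak fix above follows the usual C cleanup pattern: anything already allocated in the function must be released on every later error path before returning. A minimal standalone sketch of that pattern, using plain malloc/free rather than FFmpeg's av_malloc/av_freep and with invented names (not the actual fix):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: allocate two work buffers; if the second
 * allocation fails, the first must be freed before returning, so the
 * error path does not leak. */
static int process(size_t n)
{
    float *a = malloc(n * sizeof(*a));
    if (!a)
        return -1;

    float *b = malloc(n * sizeof(*b));
    if (!b) {
        free(a);   /* release what was already allocated */
        return -1; /* error path no longer leaks */
    }

    /* ... do the real work with a and b ... */

    free(b);
    free(a);
    return 0;
}

int main(void)
{
    printf("process returned %d\n", process(1024));
    return 0;
}

In larger FFmpeg functions the same idea is often expressed with a goto fail label that funnels every error path through one cleanup block.
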
Guo, Yejun 06c01f1763 dnn: remove type cast which is not necessary 2021-01-28 09:45:13 +08:00
Mark Thompson bb96824510 dnn: Add ff_ prefix to unnamespaced globals
Reviewed-By: Guo, Yejun <yejun.guo@intel.com>
2021-01-22 15:03:09 +08:00
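
The ff_ prefix follows FFmpeg's convention that internal symbols with external linkage are namespaced so they cannot collide with globals exported by other libraries in the same process. A tiny hypothetical example (the table name is invented):

#include <stdio.h>

/* Hypothetical table with external linkage: the ff_ prefix namespaces
 * the symbol so it cannot clash with a same-named global in another
 * library linked into the same binary. */
const char *const ff_dnn_example_op_names[] = { "add", "sub", "mul" };

int main(void)
{
    printf("%s\n", ff_dnn_example_op_names[1]); /* prints "sub" */
    return 0;
}
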
Mark Thompson c6a3ca2db4 dnn_backend_native_layer_mathbinary.c: Delete unused global variable 2021-01-22 10:18:36 +08:00
Ting Fu c8ba0daf8d dnn/native: add log error message
Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-08-25 13:03:46 +08:00
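
Commit c8ba0daf8d adds diagnostics where the native backend previously failed silently. FFmpeg code reports such errors through av_log(); a hedged, minimal sketch (the context pointer and message text are illustrative, not taken from the commit):

#include <libavutil/log.h>

int main(void)
{
    /* Illustrative only: report the failure instead of staying silent.
     * Inside the native DNN layers the first argument would be a
     * context pointer rather than NULL. */
    av_log(NULL, AV_LOG_ERROR, "unsupported binary operator\n");
    return 0;
}

It can be built against libavutil, e.g. gcc demo.c $(pkg-config --cflags --libs libavutil).
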
Ting Fu 230cf9d185 dnn/native: unify error return to DNN_ERROR
Unify all error returns as DNN_ERROR so that model execution stops in
ff_dnn_execute_model_native when a layer's layer_func.pf_exec returns an error.

Signed-off-by: Ting Fu <ting.fu@intel.com>
2020-08-25 13:03:46 +08:00
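
With a single error sentinel, the native backend's execution loop can stop at the first failing layer instead of interpreting each layer's return value differently. A simplified standalone sketch of that control flow (the enum mirrors DNNReturnType, but the layer table and names are invented):

#include <stdio.h>

typedef enum { DNN_SUCCESS, DNN_ERROR } DNNReturnType;

typedef DNNReturnType (*LayerExec)(void);

static DNNReturnType layer_ok(void)   { return DNN_SUCCESS; }
static DNNReturnType layer_fail(void) { return DNN_ERROR; }

int main(void)
{
    LayerExec layers[] = { layer_ok, layer_fail, layer_ok };

    for (size_t i = 0; i < sizeof(layers) / sizeof(layers[0]); i++) {
        if (layers[i]() != DNN_SUCCESS) {
            /* One agreed-upon error code lets the caller stop at once. */
            fprintf(stderr, "layer %zu failed, aborting execution\n", i);
            return 1;
        }
    }
    puts("all layers executed");
    return 0;
}
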
Mingyu Yin 3477feb643 dnn_backend_native_layer_mathbinary: add floormod support
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
2020-08-24 09:09:11 +08:00
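
Floormod is defined as floormod(a, b) = a - b * floor(a / b); unlike C's fmod(), the result takes the sign of the divisor, which matches TensorFlow's floormod. A standalone sketch of the math (compile with -lm; not the backend's actual code):

#include <math.h>
#include <stdio.h>

/* floormod(a, b) = a - b * floor(a / b); the result takes the sign of b,
 * unlike fmodf(), whose result takes the sign of a. */
static float floormod(float a, float b)
{
    return a - b * floorf(a / b);
}

int main(void)
{
    printf("floormod(5.0, 3.0)  = %f\n", floormod(5.0f, 3.0f));   /* 2.0  */
    printf("floormod(-5.0, 3.0) = %f\n", floormod(-5.0f, 3.0f));  /* 1.0  */
    printf("fmodf(-5.0, 3.0)    = %f\n", fmodf(-5.0f, 3.0f));     /* -2.0 */
    return 0;
}
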
Mingyu Yin 37ef1acedb dnn_backend_native_layer_mathbinary: change to function pointer
Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
2020-08-24 09:09:11 +08:00
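
Selecting the per-element operator through a function pointer means the element loop is written once and the operator is chosen up front from the layer's parameters, instead of duplicating the loop in every branch of a switch. A minimal sketch of the idea with invented names:

#include <stdio.h>

typedef float (*BinaryOp)(float, float);

static float op_sub(float a, float b) { return a - b; }
static float op_add(float a, float b) { return a + b; }
static float op_mul(float a, float b) { return a * b; }

/* The loop is written once; the operator is a parameter. */
static void apply_elementwise(BinaryOp op, const float *x, const float *y,
                              float *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = op(x[i], y[i]);
}

int main(void)
{
    const float x[] = {1, 2, 3}, y[] = {4, 5, 6};
    float out[3];

    apply_elementwise(op_add, x, y, out, 3);
    printf("%g %g %g\n", out[0], out[1], out[2]);  /* 5 7 9 */
    (void)op_sub; (void)op_mul;
    return 0;
}
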
Reimar Döffinger 584f396132 dnn_backend_native: Add overflow check for length calculation.
We should not silently allocate an incorrectly sized buffer.
Fixes trac issue #8718.

Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Reviewed-by: Guo, Yejun <yejun.guo@intel.com>
2020-07-06 20:22:30 +08:00
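
The length calculation multiplies an element count by the element size; if that product wraps around, a too-small buffer is allocated and later writes run past its end. One common way to guard the multiplication before allocating, shown standalone rather than with FFmpeg's own av_* helpers:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Allocate count * size bytes only if the product cannot wrap around. */
static void *checked_alloc(size_t count, size_t size)
{
    if (size && count > SIZE_MAX / size)
        return NULL;               /* would overflow: refuse to allocate */
    return malloc(count * size);
}

int main(void)
{
    void *ok  = checked_alloc(1024, sizeof(float));
    void *bad = checked_alloc(SIZE_MAX / 2, sizeof(float));

    printf("ok=%p bad=%p\n", ok, bad);   /* bad is NULL */
    free(ok);
    return 0;
}
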
Guo Yejun 0b3bd001ac dnn_backend_native: check operand index
It fixes the issue reported in https://trac.ffmpeg.org/ticket/8716.
2020-06-17 13:42:52 +08:00
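
The operand index comes from the model file, so a corrupted or crafted model could reference an operand outside the network's operand array and cause out-of-bounds access. A simplified sketch of the bounds check (names invented for illustration):

#include <stdint.h>
#include <stdio.h>

/* Reject an operand index read from the model file unless it refers to
 * an existing operand (illustrative helper, not the actual FFmpeg code). */
static int operand_index_valid(int32_t index, int32_t nb_operands)
{
    return index >= 0 && index < nb_operands;
}

int main(void)
{
    int32_t nb_operands = 8;

    printf("%d %d %d\n",
           operand_index_valid(0,  nb_operands),   /* 1: valid   */
           operand_index_valid(7,  nb_operands),   /* 1: valid   */
           operand_index_valid(42, nb_operands));  /* 0: invalid */
    return 0;
}
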
Guo, Yejun 71e28c5422 dnn/native: add native support for minimum
It can be tested with a model file generated by the Python script below:
import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.minimum(0.7, x)
x2 = tf.maximum(x1, 0.4)
y = tf.identity(x2, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-05-08 15:22:27 +08:00
Guo, Yejun 8ce9d88f93 dnn/native: add native support for divide
It can be tested with a model file generated by the Python script below:
import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 2 / x
z2 = 1 / z1
z3 = z2 / 0.25 + 0.3
z4 = z3 - x * 1.5 - 0.3
y = tf.identity(z4, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-04-22 13:15:00 +08:00
Guo, Yejun ef79408e97 dnn/native: add native support for 'mul'
It can be tested with a model file generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 0.5 + 0.3 * x
z2 = z1 * 4
z3 = z2 - x - 2.0
y = tf.identity(z3, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-04-22 13:14:47 +08:00
Guo, Yejun 6aa7e07e7c dnn/native: add native support for 'add'
It can be tested with a model file generated by the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 0.039 + x
z2 = x + 0.042
z3 = z1 + z2
z4 = z3 - 0.381
z5 = z4 - x
y = tf.math.maximum(z5, 0.0, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-04-22 13:14:30 +08:00
Guo, Yejun ffa1561608 dnn_backend_native_layer_mathbinary: add sub support
More math binary operations will be added here.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
2020-04-07 11:04:34 +08:00
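
This first math-binary commit set the pattern reused by the later add, mul, divide, minimum and floormod commits: an elementwise binary operation in which one input may be a scalar constant baked into the model and the other a full operand. A simplified standalone sketch of that execution shape (names and layout are illustrative, not the actual native-backend structures):

#include <stdio.h>

/* One input may be a scalar constant from the model ("0.5 - x" or
 * "x - 0.5"), the other a full operand; both cases reduce to one
 * elementwise loop. */
static void math_binary_sub(const float *x, float scalar, int scalar_first,
                            float *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = scalar_first ? scalar - x[i] : x[i] - scalar;
}

int main(void)
{
    const float x[] = {0.1f, 0.2f, 0.3f};
    float out[3];

    math_binary_sub(x, 0.5f, 1, out, 3);          /* computes 0.5 - x */
    printf("%g %g %g\n", out[0], out[1], out[2]); /* 0.4 0.3 0.2 */
    return 0;
}
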