Fig. 4: The left part shows the original two-layer convolutions (Dual-Conv) in DispNetC [8], while the right part shows the Dual-ResBlock module applied in our FADNet++.

Source publication
Preprint
Deep neural networks (DNNs) have achieved great success in the area of computer vision. The disparity estimation problem is increasingly addressed by DNNs, which achieve much better prediction accuracy than traditional hand-crafted feature-based methods. However, existing DNNs hardly serve both efficient computation and rich expression capability, w...

Contexts in source publication

Context 1
... deconvolution layers. For feature extraction and down-sampling, DispNetC and DispNetS first apply a convolution layer with a stride of 1 and then a convolution layer with a stride of 2, so that each stage consistently shrinks the feature map size by half. We call this two-layer convolution with size reduction Dual-Conv, as shown in Fig. 4(a). DispNetC equipped with Dual-Conv modules and a correlation layer achieves an end-point error (EPE) of 1.68 on the SceneFlow dataset [8], as reported in ...
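
A minimal PyTorch sketch of the Dual-Conv idea described in this excerpt: a stride-1 convolution followed by a stride-2 convolution, so each module halves the spatial size of the feature map. The class name, kernel size, and activation choice here are illustrative assumptions, not the exact FADNet++ implementation.

```python
import torch
import torch.nn as nn

class DualConv(nn.Module):
    """Two convolutions per stage: stride 1, then stride 2 (halves H and W)."""
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # First convolution keeps the spatial resolution (stride 1).
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size,
                               stride=1, padding=pad)
        # Second convolution halves the spatial resolution (stride 2).
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size,
                               stride=2, padding=pad)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.conv1(x))
        return self.relu(self.conv2(x))

# Example: a 128x256 feature map is reduced to 64x128 by one Dual-Conv stage.
x = torch.randn(1, 32, 128, 256)
y = DualConv(32, 64)(x)   # y.shape == (1, 64, 64, 128)
```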
Context 2
... for image classification tasks is widely used to learn robust features and train very deep networks. The residual block effectively addresses the gradient vanishing problem when training very deep networks. Thus, we replace each convolution layer in the Dual-Conv module with a residual block to construct a new module called Dual-ResBlock, as shown in Fig. 4(b). With Dual-ResBlock, we can make the network deeper without the associated training difficulty, since the residual block allows very deep models to be trained. Therefore, we further increase the number of feature extraction and down-sampling stages from five to seven. Finally, DispNetC and DispNetS evolve into two new networks with better learning ...
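
A minimal sketch of the Dual-ResBlock substitution, assuming a standard two-convolution residual block (in the style of He et al.) with a 1x1 projection on the shortcut whenever the stride or channel count changes. It chains a stride-1 and a stride-2 residual block, mirroring the Dual-Conv layout while keeping the identity shortcut that eases training of deeper networks. Names and hyper-parameters are illustrative, not the exact FADNet++ code.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions plus a shortcut connection."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3,
                               stride=stride, padding=1)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3,
                               stride=1, padding=1)
        self.relu = nn.ReLU(inplace=True)
        # Project the shortcut when the residual branch changes shape.
        self.shortcut = (nn.Identity()
                         if stride == 1 and in_channels == out_channels
                         else nn.Conv2d(in_channels, out_channels, 1,
                                        stride=stride))

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + self.shortcut(x))

class DualResBlock(nn.Module):
    """Stride-1 residual block followed by a stride-2 residual block."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.block1 = ResBlock(in_channels, out_channels, stride=1)
        self.block2 = ResBlock(out_channels, out_channels, stride=2)

    def forward(self, x):
        return self.block2(self.block1(x))

# Example: same halving behaviour as Dual-Conv, but with residual shortcuts.
y = DualResBlock(32, 64)(torch.randn(1, 32, 128, 256))  # (1, 64, 64, 128)
```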

Similar publications

Preprint
For practical deep neural network design on mobile devices, it is essential to consider the constraints imposed by computational resources and inference latency in various applications. Among deep network acceleration approaches, pruning is widely adopted to balance computational resource consumption against accuracy...