Figure 1 - uploaded by Ujjwal Baid
5: Divisions of vertebral column 

Citations

... These panoramically mapped UL-8 FT FMs, along with the BL-11 FMs, are passed together to the Decoder network for generating the O'' panoramic images. In the F-Mat() function, feature registration [43] of the {p_i, q_j} FV pairs is performed based on an H-homography matrix calculated using the N-iterative RANSAC algorithm [44]. ...
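The N-iterative RANSAC step the excerpt refers to can be sketched as follows. This is a minimal illustration, not the cited paper's implementation: for brevity it fits a simple 2-D translation between matched keypoint pairs rather than a full 3×3 homography (which would require four-point minimal samples and a DLT solve), but the hypothesize / score / keep-best loop is the same idea.

```python
import random

def ransac_translation(pairs, n_iters=100, tol=2.0, seed=0):
    """Estimate a 2-D translation between matched keypoint pairs with RANSAC.

    pairs: list of ((px, py), (qx, qy)) matched feature locations.
    Returns ((dx, dy), inlier_count) for the model with the most inliers.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        # Minimal sample for a translation model is a single pair.
        (px, py), (qx, qy) = rng.choice(pairs)
        dx, dy = qx - px, qy - py
        # Score the hypothesis: count pairs that agree within `tol` pixels.
        inliers = sum(
            1 for (ax, ay), (bx, by) in pairs
            if abs((ax + dx) - bx) <= tol and abs((ay + dy) - by) <= tol
        )
        if inliers > best_inliers:
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers
```

A real homography-based registration would replace the one-pair sample with four non-collinear pairs and the translation check with a reprojection-error test against the estimated 3×3 matrix.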
... Generally, the max-pooling operation is applied only after performing fine-tuned feature extraction at the respective layers. For the UL-5 layer in the UL-encoder, 2×2 max-pooling with stride 1 and padding 2 is applied to refine its previously extracted FMs, so that semantically dense feature units are generated in the UL-8 FT FVs for robust feature matching and F-Mat*() feature-registration operations [24], [25], [31], [39], [40], [43], [44]. The l,r-PanoED network's final output (i.e., the panoramically stitched raw image, Fig. 4(e)) O'' is sent to the post-processing module to yield the final, fully processed panoramic image O (Fig. 4(f)) with minimal reconstruction loss, occlusion, and pattern distortion, even on non-homogeneous stereo input. ...
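The 2×2 max-pooling operation described above can be illustrated with a minimal sketch. This assumes a plain 2-D feature map given as a list of lists and uses "valid" borders for brevity; the excerpt's padding-2 variant would simply surround the map with a border of padding values before pooling.

```python
def max_pool_2x2(fm, stride=1):
    """2x2 max-pooling over a 2-D feature map (list of lists), valid borders.

    Each output cell is the maximum of a 2x2 window; stride 1 means the
    windows overlap, shrinking an HxW map to (H-1)x(W-1).
    """
    h, w = len(fm), len(fm[0])
    out = []
    for i in range(0, h - 1, stride):
        row = []
        for j in range(0, w - 1, stride):
            row.append(max(fm[i][j], fm[i][j + 1],
                           fm[i + 1][j], fm[i + 1][j + 1]))
        out.append(row)
    return out
```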
... Based on the above-discussed F-Mat*() algorithm, the UL-8 FT feature maps, along with the matched pairs of UL-8 p,q FT feature vectors, are passed together to the F-Mat*() function; internally, this function estimates an optimal H-homography matrix [44] for performing UL-8 FT feature registration. The final output of this F-Mat() ...
Article
Full-text available
Image stitching (or mosaicing) is an active research topic with numerous use cases in the computer-vision, AR/VR, and computer-graphics domains, but maintaining homogeneity among the input image sequences during the stitching/mosaicing process remains a primary limitation. To tackle this limitation, this article introduces a robust and reliable image-stitching methodology (l,r-Stitch Unit), which takes multiple non-homogeneous image sequences as input and generates a reliable, panoramically stitched wide view as the final output. The l,r-Stitch Unit consists of pre-processing and post-processing sub-modules and an l,r-PanoED network, where each sub-module is a robust ensemble of several deep-learning, computer-vision, and image-handling techniques. This article also introduces a novel convolutional encoder-decoder deep neural network (l,r-PanoED network) with a unique split-encoding-network methodology to stitch non-coherent left/right stereo input image pairs. The encoder network of the proposed l,r-PanoED extracts semantically rich deep feature maps from the input to stitch/map them into a wide panoramic domain; the feature-extraction and feature-mapping operations are performed simultaneously in the l,r-PanoED's encoder network based on the split-encoding-network methodology. The decoder network of the l,r-PanoED adaptively reconstructs the output panoramic view from the encoder network's bottleneck feature maps. The proposed l,r-Stitch Unit has been rigorously benchmarked against alternative image-stitching methodologies on our custom-built traffic dataset and several other public datasets.
Multiple evaluation metrics (SSIM, PSNR, MSE, $L_{\alpha,\beta,\gamma}$, FM-rate, average latency time) and wild conditions (rotational/color/intensity variance, noise, etc.) were considered during the benchmarking analysis; based on the results, the proposed method outperformed the other image-stitching methodologies and proved effective even on wild, non-homogeneous inputs.
... The above-stated I values, calculated across neighborhoods, are also used to calculate the structure-tensor matrix M (Eq. 3). The homography matrix and its parameters are generated using the RANSAC algorithm [30]. Feature key-points between the images are matched using arc similarity / Euclidean distance, $d(a,b) = \left(\sum_i [a_i - b_i]^2\right)^{0.5}$. Using distance similarity alone, the matched key-points are not reliably robust because the dimensionality of the key-points is very large, so neighborhood-matching strategies such as the nearest-neighbour distance ratio (N.N.D.R.) and K-D trees [31] are considered. ...
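The Euclidean-distance matching with a nearest-neighbour distance ratio test that the excerpt describes can be sketched as below. This is an illustrative brute-force version, not the cited pipeline: the 0.8 ratio threshold is an assumed, commonly used value, and a K-D tree would replace the linear scan for large descriptor sets.

```python
def euclid(a, b):
    """Euclidean distance d(a, b) = (sum_i [a_i - b_i]^2)^0.5."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def match_nndr(desc_a, desc_b, ratio=0.8):
    """Match descriptors by the nearest-neighbour distance ratio (N.N.D.R.).

    A key-point in desc_a is matched to its nearest neighbour in desc_b only
    if that neighbour is clearly closer than the second-nearest one, which
    filters out ambiguous matches in high-dimensional descriptor spaces.
    """
    matches = []
    for i, a in enumerate(desc_a):
        dists = sorted((euclid(a, b), j) for j, b in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```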
... Now the above-generated FTC image is divided into two equal halves: the left FTC image (Fig. 15(d)) and the right FTC image (Fig. 15(e)). Generally, in I_1, red-spanned pixels [i ∈ (255, 0, 0) - (220, 30, 30)] represent nearer pixels, green-spanned pixels [j ∈ (0, 255, 0) - (80, 230, 80)] represent farther pixels, and yellow-spanned pixels [k ∈ (255, 255, 0) - (235, 160, 160)] represent middle-range pixels. We perform histogram analysis (Fig. 15(g), (h)) on the left and right FTC images to calculate the respective R, G, B probability distribution functions, i.e., the red PDF ($\sum_i$ PDF(i)), green PDF ($\sum_j$ PDF(j)), and yellow PDF ($\sum_k$ PDF(k)). ...
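The histogram-to-PDF step the excerpt describes can be sketched as a per-channel normalised histogram. This is a minimal, assumed formulation (one flat list of 8-bit channel values, a hypothetical bin count of 8), not the cited paper's exact procedure:

```python
def channel_pdf(pixels, bins=8):
    """Normalised histogram (empirical PDF) of one colour channel.

    pixels: iterable of channel values in 0..255.
    Returns a list of `bins` probabilities that sums to 1.
    """
    counts = [0] * bins
    width = 256 // bins
    for v in pixels:
        counts[min(v // width, bins - 1)] += 1
    total = sum(counts)
    return [c / total for c in counts]
```

Running this on the red, green, and blue planes of each FTC half would give the per-channel distributions that the excerpt's histogram analysis compares.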
Article
Full-text available
The usage of transportation systems is inevitable; any assistance module that can catalyze the flow involved in transportation systems while improving the reliability of the processes involved is a boon for day-to-day human life. This paper introduces a novel, cost-effective, and highly responsive post-active driving assistance system, the "Adaptive-Mask-Modelling Driving Assistance System," with an intuitive wide field-of-view modeling architecture. The proposed system is a vision-based approach that processes a panoramic front view (stitched from temporally synchronous left/right stereo camera feeds) and a simple monocular rear view to generate robust and reliable proximity triggers along with co-relative navigation suggestions. The proposed system generates robust object and adaptive field-of-view masks using FRCNN+Resnet-101_FPN and DSED neural networks, which are later processed and mutually analyzed at the respective stages to trigger proximity alerts and frame reliable navigation suggestions. The proposed DSED network is an encoder-decoder convolutional neural network that estimates the lane-offset parameters responsible for adaptive modeling of the field-of-view range (157°-210°) during live inference. The proposed stages, deep neural networks, and implemented algorithms and modules are state-of-the-art and achieved outstanding performance with minimal loss ($L_{\{p,t\}}$, $L_{\delta}$, $L_{Total}$) values during benchmarking analysis on our custom-built, KITTI, MS-COCO, Pascal-VOC, and Make-3D datasets. The proposed assistance system is tested on our custom-built and multiple public datasets to generalize its reliability and robustness under multiple wild conditions, input traffic scenarios, and locations.