S Bavankumar's scientific contributions

What is this page?


This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.

It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.


Publications (3)


Fig. 1. Baseline SVM classifier
K Nearest Neighbors (KNN)
Fig. 2. Baseline KNN
Fig. 6. Input image
Fig. 7. Edge-detected image
Fig. 8. ORB feature extraction

Histogram of Visual Vocabulary Generation

As discussed earlier, each kind of sign image can be characterized by a distinct group of key descriptors that share common qualities. Feature models from this collection are used to build a training bag of visual words, which is then used to represent the training images. K-means clustering groups the descriptors into K clusters, and each patch in an image is assigned to its nearest cluster. Once the descriptors have been mapped to their clusters, a histogram of codewords is generated for each picture. Bagging together a number of these feature histograms yields a vocabulary of codewords, as shown in Fig. 8. This method assumes 150 codewords per picture.
Fig. 9. Tested output
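The codeword-histogram step described above can be sketched in a few lines of numpy. This is a toy bag-of-visual-words pipeline, not the authors' implementation: the mini K-means, the small `k`, and the descriptor dimensions are illustrative stand-ins (the paper uses K = 150 codewords over ORB-style descriptors).

```python
import numpy as np

def build_codebook(descriptors, k=8, iters=10, seed=0):
    """Toy K-means over local feature descriptors (the paper uses K = 150)."""
    rng = np.random.default_rng(seed)
    # initialise centers from randomly chosen descriptors
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].copy()
    for _ in range(iters):
        # assign each descriptor to its nearest center
        dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):            # keep the old center if a cluster empties
                centers[j] = members.mean(axis=0)
    return centers

def bovw_histogram(image_descriptors, centers):
    """Map one image's descriptors to their nearest codewords and count them."""
    dists = np.linalg.norm(image_descriptors[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()            # normalise so images of different sizes compare
```

The normalised histogram is what a downstream classifier such as the baseline SVM or KNN would consume as a fixed-length feature vector.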
A Smart System for Sign Language Recognition using Machine Learning Models
  • Conference Paper
  • Full-text available

December 2022 · 98 Reads · R. Santhosh Kumar · [...] · S. Bavankumar

Hybrid Integration of Transforms with Neural Network based Fusion Techniques for Clinical and Healthcare Applications

August 2021 · 3 Reads

The prime objective of the Hybrid Multimodal Medical Image Fusion (HMMIF) method is to preserve the important features and details of the source images while creating a single, visually robust fused image, providing a promising diagnostic tool with numerous clinical and healthcare applications. Hybrid algorithms based on the Non-Subsampled Shearlet Transform (NSST) with a Pulse Coupled Neural Network (PCNN) are proposed for MMIF in this paper. In the proposed method, the input images are first decomposed into low- and high-frequency components by applying the NSST. An averaging fusion rule is applied to the low-frequency components, while a maximum fusion rule with PCNN is applied to the high-frequency components. The coefficients produced for each frequency band are then inverse transformed to yield the fused image. The proposed algorithms produce fused images without distortion or false artefacts, and the proposed technique is compared with pre-existing conventional techniques. The images obtained by fusing the content of both sources with this algorithm perform best with respect to visualization and diagnosis of the condition.
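The two-band fusion rules described in the abstract can be illustrated with a simplified sketch. This is not the authors' method: the NSST decomposition is replaced by a plain box-filter low-pass split, and the PCNN firing-map selection is replaced by a simple max-absolute-coefficient rule, purely to show how "average the low band, take the maximum in the high band" fits together.

```python
import numpy as np

def box_blur(img, r=2):
    """Cheap box filter standing in for the NSST low-frequency band."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(img_a, img_b):
    """Two-band fusion: average the low-frequency parts; keep the
    larger-magnitude coefficient in the high-frequency parts
    (a stand-in for the paper's PCNN-guided maximum rule)."""
    low_a, low_b = box_blur(img_a), box_blur(img_b)
    high_a, high_b = img_a - low_a, img_b - low_b
    low = 0.5 * (low_a + low_b)                                       # averaging rule
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b) # max rule
    return low + high                                                 # recombine bands
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the decomposition and recombination are consistent.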


Datasets
Accuracy of the proposed Face Mask Detection system
Comparison of proposed methods for Facial Mask Detection
A Real Time Prediction and Classification of Face Mask Detection using CNN Model

July 2021 · 4 Reads

The current COVID-19 (Coronavirus Disease) pandemic requires almost everyone to wear a mask in order to effectively prevent the spread of the virus. This renders conventional facial recognition technology ineffective in many cases, such as community access control, face access control, facial attendance, and facial security checks at train stations. It is therefore urgent to improve the recognition performance of existing face recognition technology on masked faces, and detecting people wearing a face mask is essential for that. In this work, a reliable method based on discarding the masked region is proposed to address the problem of masked face recognition. The first step is to discard the masked region of the face and extract the forehead and eye region. Next, a pre-trained deep Convolutional Neural Network (CNN) is applied to extract the best features from the obtained regions. Finally, the experimental results achieve an accuracy of more than 98% on the validation dataset.
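The "discard the masked region" preprocessing step can be sketched as a simple crop of the upper part of a detected face before it is handed to the CNN. The `keep` fraction and the nearest-neighbour resize below are illustrative assumptions, not values or methods from the paper, which would use a proper face detector and image library.

```python
import numpy as np

def unmasked_region(face, keep=0.55):
    """Discard the lower (masked) part of a detected face crop and keep
    the forehead/eyes band. `keep` is an illustrative fraction of rows
    retained, not a value from the paper."""
    h = face.shape[0]
    return face[: int(h * keep)]

def preprocess(face, keep=0.55, size=(64, 64)):
    """Crop the unmasked band, resize it to the CNN input size, and
    scale pixel values to [0, 1]."""
    crop = unmasked_region(face, keep)
    # nearest-neighbour resize (stand-in for a proper image-library resize)
    ys = np.linspace(0, crop.shape[0] - 1, size[0]).astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, size[1]).astype(int)
    return crop[np.ix_(ys, xs)] / 255.0
```

The resulting fixed-size array is the kind of input a pre-trained CNN feature extractor would take; the network itself is outside the scope of this sketch.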