ABV-Indian Institute of Information Technology and Management Gwalior
Recent publications
In our contemporary world, where technology is omnipresent and essential to daily life, the reliability of software systems is indispensable. Consequently, efforts to optimize software release time and decision-making processes have become imperative. Software reliability growth models (SRGMs) have emerged as valuable tools in gauging software reliability, with researchers studying factors such as change points and testing effort. However, uncertainties persist throughout testing processes, which are inherently influenced by human factors. Fuzzy set theory offers a way to address the inherent uncertainties and complexities of software systems: its ability to model imprecise, uncertain, and vague information makes it particularly well-suited to capturing the nuances of software reliability. In this research, we propose a novel approach that amalgamates change point detection, logistic testing-effort function modeling, and triangular fuzzy numbers (TFNs) to tackle uncertainty and vagueness in software reliability modeling. Additionally, we explore release time optimization considering TFNs, aiming to enhance decision-making in software development and release planning.
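As a rough illustration of the building blocks this abstract names, the following minimal Python sketch combines a triangular fuzzy number membership function with a logistic testing-effort-driven mean value function. The functional forms and every parameter name (a, r, N, phi, A) are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def tfn_membership(x, a, m, b):
    """Membership of x in a triangular fuzzy number (a, m, b)."""
    if a < x <= m:
        return (x - a) / (m - a)
    if m < x < b:
        return (b - x) / (b - m)
    return 1.0 if x == m else 0.0

def logistic_effort(t, N, phi, A):
    """Cumulative logistic testing effort W(t) = N / (1 + A*exp(-phi*t))."""
    return N / (1.0 + A * np.exp(-phi * t))

def mean_value(t, a, r, N, phi, A):
    """Expected faults detected by time t, driven by consumed testing effort."""
    W = logistic_effort(t, N, phi, A) - logistic_effort(0.0, N, phi, A)
    return a * (1.0 - np.exp(-r * W))

t = np.linspace(0, 20, 5)
print(mean_value(t, a=100, r=0.05, N=50, phi=0.4, A=10))
```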
This study explores a prey-predator model with a Holling type II functional response, focusing on how predation-induced fear affects prey dynamics. Assuming a decline in prey population growth rate attributed to predator-induced fear, the model incorporates a fear response delay representing prey detection time, along with gestation delay. The system’s positivity, boundedness, and permanence are proved under certain parametric conditions. The local stability is discussed at trivial, semi-trivial, and positive equilibria. The system exhibits Hopf bifurcation with respect to both delays. Hopf bifurcation analysis is done for different combinations of delays. Furthermore, the properties of periodic solutions in the delayed system are also determined. An extensive numerical simulation has been performed to validate analytical findings. The occurrence of Hopf bifurcation is shown for different combinations of delays by plotting eigenvalues.
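For intuition, a minimal sketch of a fear-modified Holling type II prey-predator system can be integrated numerically with SciPy. This version omits both delays and uses arbitrary illustrative parameters, so it is a simplification rather than the paper's delayed model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative fear-effect prey-predator model (both delays omitted):
# dx/dt = r*x/(1 + k*z) - d*x - a*x**2 - b*x*z/(1 + c*x)   (prey)
# dz/dt = e*b*x*z/(1 + c*x) - m*z                          (predator)
def model(t, u, r=2.0, k=0.5, d=0.1, a=0.05, b=0.8, c=0.4, e=0.6, m=0.3):
    x, z = u
    holling = b * x * z / (1.0 + c * x)       # Holling type II predation
    dx = r * x / (1.0 + k * z) - d * x - a * x**2 - holling
    dz = e * holling - m * z
    return [dx, dz]

sol = solve_ivp(model, (0, 200), [1.0, 0.5])
print(sol.y[:, -1])  # long-run densities under these illustrative parameters
```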
The high burden of lung diseases on healthcare necessitates effective detection methods. Current computer-aided diagnosis (CAD) systems are limited by their focus on specific diseases and computationally demanding deep learning models. To overcome these challenges, we introduce CNN-O-ELMNet, a lightweight classification model designed to efficiently detect various lung diseases, surpassing the limitations of disease-specific CAD systems and the complexity of deep learning models. This model combines a convolutional neural network for deep feature extraction with an optimized extreme learning machine, utilizing the imperialistic competitive algorithm for enhanced predictions. We then evaluated the effectiveness of CNN-O-ELMNet using benchmark datasets for lung diseases: distinguishing pneumothorax vs. non-pneumothorax, tuberculosis vs. normal, and lung cancer vs. healthy cases. Our findings demonstrate that CNN-O-ELMNet significantly outperformed (p < 0.05) state-of-the-art methods in binary classification for tuberculosis and cancer, achieving accuracies of 97.85% and 97.70%, respectively, while maintaining low computational complexity with only 2481 trainable parameters. We also extended the model to categorize lung disease severity based on Brixia scores. With 96.20% accuracy in multi-class assessment of mild, moderate, and severe cases, the model is also suitable for deployment in lightweight healthcare devices.
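A minimal sketch of the extreme learning machine component may help: hidden weights are drawn at random and only the output weights are solved, in closed form via a pseudo-inverse. The class below is a generic ELM on toy data; it omits the CNN feature extractor and the imperialistic competitive algorithm the paper uses for optimization.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer,
    output weights solved in closed form via the pseudo-inverse."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y     # closed-form least squares
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy usage: binary labels from random stand-in "deep features"
X, y = np.random.rand(100, 16), np.random.randint(0, 2, 100)
print(ELM().fit(X, y).predict(X[:5]).round())
```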
STATCOM is a source of controllable reactive current used to balance reactive power at power system nodes and manage reactive flow in transmission lines. The most prevalent and efficient Flexible AC Transmission Systems (FACTS) component is the Static Synchronous Compensator (STATCOM), which is frequently modeled as a controlled voltage source for reactive current regulation and increased power system stability. Due to load growth and inadequate expansion of generating and transmission capacity, the operating limits of modernized integrated power systems are tightening, and electricity networks occasionally experience small oscillations. To improve the stability of the power grid, a robust control strategy for STATCOM controller design has been developed. The suggested scheme is formulated as an optimization problem, and the performance and stability of the controller are assessed against robustness criteria. The controller used is a PID controller, and the proposed model is applied to each of these control systems to tune the controllers' coefficients. The Pareto optimal solution is found using a modified version of the non-dominated sorting genetic algorithm (NSGA-II).
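For intuition, the sketch below shows the kind of scalar cost a multi-objective tuner such as NSGA-II would score when searching over PID gains. The first-order plant, the error metric, and all gain values are illustrative assumptions, not the paper's STATCOM model.

```python
def pid_cost(kp, ki, kd, dt=0.01, T=5.0):
    """Integral-of-absolute-error of a unit-step response for a
    toy first-order plant dy/dt = -y + u under PID control."""
    y, integ, prev_err, iae = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        err = 1.0 - y                    # setpoint = 1
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += (-y + u) * dt               # plant step (Euler integration)
        prev_err = err
        iae += abs(err) * dt
    return iae

print(pid_cost(2.0, 1.0, 0.1))  # one candidate gain set an optimizer would score
```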
User-Generated Content (UGC) is increasingly prevalent on digital platforms. The content generated on social media, review forums, and question-answer platforms reaches a large audience and influences their political, social, and other cognitive judgments. Traditional credibility assessment mechanisms involve assessing the credibility of the source and the text. However, with the growing variety of ways user content can be generated and shared (audio, video, images), multimodal representation of User-Generated Content has become increasingly popular. This paper reviews the credibility assessment of UGC in various domains, particularly the identification of fake news, suspicious profiles, and fake reviews and testimonials, focusing on both the textual content and the source of the content creator. Next, the concept of multimodal credibility assessment is presented, which includes audio, video, and images in addition to text. The paper then presents a systematic review and comprehensive analysis of work done on the credibility assessment of UGC considering multimodal features. Additionally, the paper provides extensive details on the publicly available multimodal datasets for the credibility assessment of UGC. Finally, the research gaps, challenges, and future directions in assessing the credibility of multimodal user-generated content are presented.
The manufacturing industry is embracing Deep Learning (DL) and Edge Computing (EC) solutions to escalate productivity and computing efficiency. Proficient quality prediction (quantitative quality) of manufacturing processes and products is the foremost priority of industries, admirably accomplished by DL solutions. Industrial EC platforms host these DL solutions to maintain near-real-time performance and preserve privacy. However, the efficacy of such solutions relies on an abundance of labeled data, which is often scarce in manufacturing industries. Semi-Supervised Learning (SSL) and Data Augmentation (DA) paradigms are paramount in mitigating these issues. This work proposes an Ensemble of Self-Training SSL over MixUp-based DA (EnSeMUp) mechanism for Deep Neural Networks (DNNs) to acquire quality prediction under limited labeled data. The ensembled DNNs improve the training performance and the confidence of pseudo-labeling the unlabeled data but demand higher storage and computation costs. To adhere to the resource constraints of the EC platform, magnitude-based pruning of the ensembled DNNs is conducted. The proposed EnSeMUp mechanism is tested on three real-world manufacturing datasets; compared to training on the limited labeled data alone, it achieves a 30.35%–69.45% reduction in Mean Squared Error (MSE) loss, incurs only 2.44%–58.34% higher MSE loss than training on fully labeled data, and saves 70% of storage and computation.
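A minimal sketch of the MixUp-based augmentation step, assuming tabular features and a regression (quality) target: each augmented sample is a convex combination of a random pair of labeled samples. Function and parameter names are illustrative, not the paper's implementation.

```python
import numpy as np

def mixup(X, y, alpha=0.2, rng=np.random.default_rng(0)):
    """MixUp: convex combinations of random sample pairs and their labels."""
    lam = rng.beta(alpha, alpha, size=(len(X), 1))   # mixing coefficients
    idx = rng.permutation(len(X))                    # random partner for each row
    X_mix = lam * X + (1 - lam) * X[idx]
    y_mix = lam[:, 0] * y + (1 - lam[:, 0]) * y[idx]
    return X_mix, y_mix

X = np.random.rand(8, 4)      # toy features
y = np.random.rand(8)         # toy regression targets (quality scores)
X_aug, y_aug = mixup(X, y)
print(X_aug.shape, y_aug.shape)
```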
Quick access to information on social media networks, together with its exponential rise, has made it difficult to distinguish fake information from real information. Fast dissemination by way of sharing has amplified its falsification exponentially. The spread of fake information also undermines the credibility of social media networks. It has therefore become a research challenge to automatically check information for misstatement through its source, content, or publisher. This paper demonstrates an artificial-intelligence approach to identifying false statements made by public figures. Two algorithms are applied as a software system and tested on a series of data. The highest result obtained for binary (true or false) labeling is 99 percent. Python 3.6 is used for the simulation.
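The abstract does not name its two algorithms, so the following is only a generic stand-in: a TF-IDF plus logistic-regression pipeline in scikit-learn for binary true/false labeling of statements, run on toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy statements and labels purely for illustration (1 = true, 0 = false)
statements = ["taxes were cut by half last year",
              "the moon landing was staged in a studio",
              "unemployment fell to a record low",
              "vaccines contain mind-control chips"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(statements, labels)
print(clf.predict(["unemployment fell sharply last year"]))
```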
Artificial intelligence is the study of how algorithms can do things that humans currently do better; it is machine intelligence that the IT industry aims to create. Researchers around the world have developed great interest in the trending topic of fake news detection, and several social-science studies have examined the effect of false information and how people respond to it. Fake news is any material that is not accurate and is created to encourage its viewers to believe in something fake. Data mining classification is a methodology based on machine learning algorithms that uses mathematics, probability, probability distributions, or artificial intelligence. Deep learning, a new and active topic in machine learning, can be described as a cascade of layers performing non-linear processing to achieve multiple levels of data representation. Data preprocessing faces many particular problems that have given rise to a set of algorithms and heuristic strategies for preprocessing activities such as data merging and cleaning, the recognition of users and sessions, etc. In this review paper, we survey artificial intelligence, data preprocessing, fake news detection, and deep learning, and describe the various classification techniques.
In this article, a dual-port Multiple Input Multiple Output (MIMO) cylindrical Dielectric Resonator (DR)-based frequency-tunable antenna with a machine learning (ML) approach for a 5G New Radio (NR) application is presented. To the best of the authors' knowledge, this is the first time a frequency-tunable MIMO hybrid DR with ML is reported. A dual-port MIMO DRA is placed in an orthogonal configuration with a connected ground to obtain high isolation \(\left(\left|S_{12}\right| < -19\ \mathrm{dB}\right)\) over the entire frequency range. The proposed dual-port antenna provides a total spectrum (TS) and tuning range (TR) of 98.99% and 80.93%, respectively. The different MIMO parameters, Envelope Correlation Coefficient (ECC), Total Active Reflection Coefficient (TARC), and Diversity Gain (DG), are investigated and found within acceptable limits. The optimization of the proposed dual-port tunable antenna is done through various ML algorithms, including Artificial Neural Network (ANN), K-Nearest Neighbor (KNN), Decision Tree (DT), Random Forest (RF), and Extreme Gradient Boosting (XGB). The KNN algorithm provides more than 98% accuracy for predicting the S-parameters in all configurations. Hence, the proposed antenna is suitable for 5G NR applications.
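As a hedged sketch of the ML step, a k-nearest-neighbor regressor can serve as a surrogate that predicts S-parameters from design inputs. The synthetic data and feature names below are assumptions for illustration, not the paper's simulated antenna responses.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

# Toy surrogate: predict |S11| (dB) from assumed design inputs
X = np.random.rand(500, 3)          # e.g. radius, height, varactor bias (illustrative)
y = -25 + 10 * np.sin(X @ np.array([3.0, 1.5, 2.0]))   # synthetic response
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
print(f"R^2 on held-out samples: {knn.score(X_te, y_te):.3f}")
```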
Developing wireless communication systems depends on reconfigurable microwave filters (MF) and dielectric resonator antennas (DRA): the functionality of the individual filters and antennas available for a wireless communication system is limited, and reconfigurable DRAs and MFs are utilized to overcome these limitations. Higher throughput in a multi-antenna system can be achieved with a reconfigurable DRA and filter. Various switching strategies and classifications of reconfigurable DRAs and MFs are presented. Among all the switching techniques, electrical reconfiguration is the most suitable for developing a reconfigurable DRA and filter. In this approach, reconfigurability may be accomplished by utilizing a PIN diode, a varactor diode, or a radio-frequency microelectromechanical switch (RF-MEMS). These switches are reliable, highly efficient, and easily integrated with microwave circuits. Reconfigurable DRA with machine learning is demonstrated, and the application of reconfigurable DRAs and filters in cognitive radio (CR) is also demonstrated. Finally, the review focuses on the performance parameters, design, and challenges of reconfigurable antennas and MFs.
This paper investigates the information-theoretic security of multi-particle diffusive molecular timing (DMT) channels in noise-limited and interference-limited scenarios. Utilizing a channel capacity upper bound expression, we obtain the secrecy outage probability (SOP), average secrecy outage rate (ASOR), and average secrecy outage duration (ASOD) expressions when the number of molecules arriving at Eve from Alice and the Interferer follows a Gaussian distribution. Subsequently, the effect of various channel parameters, such as the Lévy noise parameter, molecular degradation, and molecular correlation, on the secrecy metrics is studied for both scenarios. In the noise-limited system, increasing molecular degradation compromises secrecy, with an increase in SOP and ASOD and a simultaneous decrease in ASOR. We also observe a significant improvement in the system's secrecy, especially in terms of ASOR, on increasing the Lévy noise parameter. Concurrently, we analyze the channel parameters' effect on the secrecy of the interference-limited scenario. From the ASOR and ASOD perspectives, we find that increasing molecular degradation and correlation increases the information-theoretic security of the system. However, increasing the Lévy noise parameter negatively impacts the system's secrecy. Finally, extending our analysis to a three-dimensional environment reveals that additional dimensions lead to secrecy deterioration in both the noise-limited and interference-limited cases.
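For intuition about the secrecy metrics, a Monte Carlo sketch of the secrecy outage probability under a generic Gaussian approximation is shown below. The capacity distributions and the rate threshold Rs are illustrative stand-ins, not the paper's DMT capacity bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
Rs = 0.5                                   # assumed target secrecy rate (bits/use)

# Gaussian-approximated "capacities" of the main and eavesdropper links
C_bob = rng.normal(loc=2.0, scale=0.6, size=n)
C_eve = rng.normal(loc=1.2, scale=0.6, size=n)

C_sec = np.maximum(C_bob - C_eve, 0.0)     # secrecy capacity per channel draw
sop = np.mean(C_sec < Rs)                  # P(secrecy capacity < target rate)
print(f"Monte Carlo SOP = {sop:.4f}")
```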
In this study, we present an approach to enhance software reliability, acknowledging the evolving understanding of error dynamics within software development. While traditional models predominantly attribute errors to coding mistakes, recent insights emphasize the role of human factors such as learning processes and fatigue. Our method integrates these insights by incorporating the fatigue factor of software testers and optimizing fault removal efficiency within the debugging process. This integration leads to the formulation of more realistic software reliability growth models, characterized by S-shaped learning curves and an exponential fatigue function. We conduct a thorough analysis of the models’ quality, predictive abilities, and accuracy, evaluating them against three established fit criteria. By encompassing learning, fatigue, and fault removal efficiency within our models, we provide a comprehensive framework for understanding the dynamics of software reliability.
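One hedged way to write such a model, purely for illustration, is a mean-value-function ODE whose detection rate carries an S-shaped learning term damped by an exponential fatigue term and scaled by a fault-removal-efficiency parameter. The symbols below (\(a\), \(b\), \(\beta\), \(\alpha\), \(p\)) are illustrative, not the paper's exact formulation:

\[
b(t) = \frac{b}{1 + \beta e^{-bt}}, \qquad
F(t) = e^{-\alpha t}, \qquad
\frac{dm(t)}{dt} = p\, b(t)\, F(t)\, \bigl(a - m(t)\bigr),
\]

where \(m(t)\) is the expected number of faults removed by time \(t\), \(a\) is the total fault content, \(b(t)\) is the learning-driven detection rate, \(F(t)\) models tester fatigue, and \(p\) is the fault removal efficiency.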
This article investigates a tri-trophic food chain model with an epidemic at the bottom level (prey) and gestation delays for the next two levels (the intermediate predator and the top predator). The steady states' positivity, boundedness, existence, and stability have been analyzed. The study of Hopf bifurcation has been carried out for different steady states. It is shown that when the gestation delay for the intermediate predator \(\tau _1\) is absent, the interior equilibrium is locally stable for top-predator gestation delays \(\tau _2\in (0,\tau _{20}^{+})\) and undergoes Hopf bifurcation for all \(\tau _2 >\tau _{20}^{+}\). Further, in the presence of both delays \(\tau _1\) and \(\tau _2\), the system becomes unstable when \(\tau _1\) and \(\tau _2\) cross their critical values \(\tau _{10}^{++}\) and \(\tau _{20}^{++}\), respectively, and undergoes Hopf bifurcation for \(\tau _1>\tau _{10}^{++}\), \(\tau _2>\tau _{20}^{++}\). Moreover, the proposed system is stable for \(\tau _1>\tau _{10}^{++}\), \(\tau _2<\tau _{20}^{++}\) and unstable for \(\tau _1<\tau _{10}^{++}\), \(\tau _2>\tau _{20}^{++}\). A sensitivity analysis of the system at the coexistence steady state is presented to identify the effect of system parameters on the variables. Finally, we present a numerical simulation to verify our analytical findings.
Electrocardiogram (ECG) monitoring models are commonly employed for diagnosing heart diseases. Since ECG signals are normally acquired over a long duration at high resolution, they must be compressed for transmission and storage. A novel compression technique is therefore essential for transmitting the signals to the telemedicine center, where the data are monitored and analyzed. In addition, the protection of ECG signals poses a challenging issue, which encryption techniques can resolve. The existing Encryption-Then-Compression (ETC) models for multimedia data fail to properly maintain the trade-off between compression performance and signal quality. In this view, this study presents a new ETC with a diagnosis model for ECG data, called the ETC-ECG model. The proposed model involves four major processes, namely, pre-processing, encryption, compression, and classification. Once the ECG data of the patient are gathered, Discrete Wavelet Transform (DWT) with a thresholding mechanism is used for noise removal. In addition, a chaotic-map-based encryption technique is applied to encrypt the data. Moreover, the Burrows-Wheeler Transform (BWT) approach is employed for the compression of the encrypted data. Finally, a Deep Neural Network (DNN) is applied to the decrypted data to diagnose heart disease. A detailed experimental analysis is carried out to ensure the effective performance of the presented model in terms of data security, compression, and classification for ECG data.
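A minimal sketch of the first two stages, assuming PyWavelets for DWT soft-threshold denoising and a logistic-map keystream for chaotic XOR encryption. The threshold rule, map parameters, and 8-bit quantization are illustrative choices; the BWT compression and DNN stages are omitted.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_denoise(sig, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients, then reconstruct."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(len(sig)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

def logistic_keystream(n, x0=0.7, r=3.99):
    """Byte keystream from the chaotic logistic map x -> r*x*(1-x)."""
    ks, x = np.empty(n, dtype=np.uint8), x0
    for i in range(n):
        x = r * x * (1 - x)
        ks[i] = int(x * 255)
    return ks

sig = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * np.random.randn(1024)
clean = dwt_denoise(sig)
samples = ((clean - clean.min()) / np.ptp(clean) * 255).astype(np.uint8)
cipher = samples ^ logistic_keystream(len(samples))   # XOR encryption
print(cipher[:8])
```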
The strategic deployment of nodes in Underwater Acoustic Sensor Networks (UASNs) is pivotal for enabling critical network functions such as localization and efficient monitoring. Achieving optimal deployment necessitates balancing two vital parameters: area coverage and network lifetime. Improving one parameter in isolation may adversely affect the other, emphasizing the need for joint optimization of area coverage and energy utilization to enhance overall UASN efficiency. This paper introduces a novel node deployment scheme leveraging Voronoi fuzzy c-means (FCM) and Salp Swarm Optimization (SSO) to address challenges associated with random node deployment, such as uneven distribution, coverage overlaps, and coverage holes. First, FCM computes initial centroids for optimal node placement. Subsequently, Voronoi polygons around these centroids address spatial overlaps and indistinct boundaries, and the Voronoi construction facilitates the determination of an optimum sensing radius. SSO then refines node locations within each polygon by converging the nodes towards the centroids. Furthermore, a sleep-wake mechanism for sensor nodes extends the network lifetime. Performance evaluation reveals that our proposed scheme outperforms existing methods in diverse network scenarios, achieving a remarkable 99.31% area coverage, extended network lifetime, and reduced energy consumption, all with a high convergence rate.
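As a sketch of the first step, a minimal fuzzy c-means implementation in NumPy computes initial centroids for node placement. The cluster count, fuzzifier m, and the 100 m x 100 m region are illustrative assumptions, and the Voronoi and SSO refinement stages are omitted.

```python
import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, iters=100, rng=np.random.default_rng(0)):
    """Minimal FCM: returns cluster centroids usable as initial node sites."""
    U = rng.dirichlet(np.ones(c), size=len(X))          # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                  # inverse-distance weights
        U /= U.sum(axis=1, keepdims=True)               # rows sum to 1
    return centroids

pts = np.random.rand(200, 2) * 100       # points of interest in a 100 m x 100 m region
print(fuzzy_c_means(pts))                # candidate deployment centroids
```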
In the current era, the Internet of Things (IoT) is an emerging technology that finds application in smart agriculture, smart cities, healthcare, and other domains. The backbone of these IoT-based applications is Wireless Sensor Networks (WSNs). The sensors in WSNs face various challenges, such as hardware failure and limited battery power. Hardware failures cause network partitioning problems, while limited battery power causes energy-hole issues. In this paper, an Artificial Intelligence (AI)-based rendezvous-point selection and rotation mechanism is proposed that provides green and fault-tolerant data collection in sensor networks. Also, an intelligent mobile-sink-based data collection scheduling scheme is proposed that resolves the energy-hole and network-partitioning issues. The simulation and testbed results demonstrate the efficiency of the proposed scheme.
Digital data has grown dramatically with the development of big data, the Internet of Things, and machine learning. In many fields, high-performance computers are utilised to handle and process such massive amounts of data. In the financial capital market, which includes stocks, bonds, commodities, foreign currency, and cryptocurrencies, supercomputers essentially trade securities with sophisticated algorithms and excellent processing power. Large financial institutions invest heavily in data analysts and programmers to create highly accurate trading algorithms that propel the market as a whole. Finding profitable trades or stocks to invest hard-earned money in over a short- to long-term time frame can be challenging for a novice trader or investor with little to no expertise in the financial markets, and relying on professional guidance for investment recommendations can still lead to significant losses. This paper focuses on creating a universal trend-trading indicator capable of analysing and forecasting the overall future trend of any stock, bond, commodity, FX pair, or cryptocurrency with the highest possible profit. A colossal dataset of historically traded stock prices and investment reports from large financial institutions worldwide is compiled. Various machine learning and decision-making models are used for technical and fundamental analysis across numerous securities. The output of the trend-trading indicator is displayed on charting platforms, providing entry-exit levels at which even novice investors can decide where to invest their money. Multi-timeframe analysis is deployed to predict short-term, medium-term, and long-term overall trends, thus increasing the output accuracy. The indicator is helpful for all types of retail traders and investors worldwide who struggle to benefit from financial markets. Our proposed approach generated annual profits of 86.28%. The entire system, including trading orders, is automated, allowing anyone to create additional passive income from the stock market every month.
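The paper's indicator is specific to its own models, so the following is only a generic stand-in for a trend signal: a moving-average crossover over synthetic prices using pandas. The window lengths and the simulated price path are assumptions for illustration.

```python
import numpy as np
import pandas as pd

# Synthetic daily closes; real use would load historical prices instead.
rng = np.random.default_rng(0)
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 500))))

fast = close.rolling(20).mean()    # short-term trend
slow = close.rolling(100).mean()   # long-term trend
signal = np.where(fast > slow, "uptrend", "downtrend")

print(pd.Series(signal).tail())    # latest trend calls for the chart overlay
```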
1,062 members
K. V. Arya
  • Computer Science and Engineering
Anurag Srivastava
  • Computational Nanoscience and Technology Laboratory
Manoj Kumar Dash
  • Behavioural Economics Experiments and Analytics Laboratory
Ritu Tiwari
  • Information and Communications Technology (ICT)
Mahua Bhattacharya
  • M.Tech Program in Information Communication Technology (ICT)
Information
Address
Morena Link Road, 474015, Gwalior, MP, India
Head of institution
Prof. Rajendra Sahu
Phone
+91-751-2449801
Fax
+91-751-2449813