
Proceedings of the First International Conference on Computational Intelligence and Informatics: ICCII 2016

Abstract

The book covers a variety of topics, including data mining and data warehousing, high performance computing, parallel and distributed computing, computational intelligence, soft computing, big data, cloud computing, grid computing, cognitive computing, image processing, computer networks, wireless networks, social networks, wireless sensor networks, information and network security, web security, internet of things, bioinformatics and geoinformatics. The book is a collection of the best papers presented at the First International Conference on Computational Intelligence and Informatics (ICCII 2016), held during 28-30 May 2016 at JNTUH CEH, Hyderabad, India. It was hosted by the Department of Computer Science and Engineering, JNTUH College of Engineering, in association with Division V (Education & Research), CSI, India.

Chapters (69)

Centre for Advanced Systems (CAS) is a newly sanctioned DRDO establishment engaged in the production of aerospace vehicles of national importance. Aerospace vehicles are of the fire-and-forget type and need rigorous testing and validation on the ground before being subjected to critical flight trials. Aerospace vehicles are very complex in nature and consist of various systems, subsystems, and assemblies that need to be tested both in standalone mode and in an integrated manner. The numerous tests and phase checks of these subsystems generate a lot of data that need to be analyzed before formal clearance for flight test or launch. This paper aims to automate this cumbersome process of data analysis and documentation using the GUI feature of Matlab®. Previously, the offline data were plotted using m-files, and the figures thus obtained were inserted manually into documentation software; all these steps were manual and prone to human error. Now, this entire process has been automated using the GUI feature of Matlab®. The data are plotted through a user-friendly GUI window with a single mouse click. The plots can also be exported to MS PowerPoint® and other documentation software with a single click, in the format required for presentation.
In digital image watermarking, many techniques are used to obtain an optimal image representation; decomposition need not be restricted to the standard frequency-domain transforms (DCT, DWT, and DFT). Therefore, another class of representations was explored, using algebraic methods and algorithms for watermarking, such as watermarking based on the singular value decomposition (SVD) algorithm. SVD-based algorithms have been shown to be highly robust against an extensive range of attacks. In addition, the Genetic Algorithm (GA) is used with SVD to optimize the watermarking. Many techniques and algorithms have already been proposed on the use of SVD and GA in digital watermarking. In this paper, we present a general survey of those techniques, along with an analysis of them based on two measures: transparency and robustness.
Advances in mobile and communication technologies have led to the rise of mobile social networks (MSNs). MSNs have changed the way people communicate and exchange private and sensitive information among friend groups via mobile phones. Due to the involvement of private and sensitive information, MSNs demand efficient and privacy-preserving matchmaking protocols to prevent unintended data (attribute) leakage. Many existing matchmaking protocols are based on users' private and specific data. Malicious participants may choose their attribute set arbitrarily so as to discover more information about the attributes of an honest participant; hence, there is a great chance of information leakage to a dishonest participant. In this context, Sarpong et al. proposed a first-of-its-kind authenticated hybrid matchmaking protocol that helps match-pair initiators find an appropriate pair satisfying a predefined threshold number of common attributes. Sarpong et al. claimed that their protocol restricts attribute leakage to unintended participants and proved it secure. Unfortunately, after thorough analysis, we demonstrate that their scheme suffers from data (attribute) leakage, in which the initiator and the participant can compute or obtain all of each other's attributes. We also show that Sarpong et al.'s scheme incurs huge computation and communication costs. As part of our contribution, we propose an efficient and secure matchmaking protocol that is lightweight and restricts attribute leakage to the participants.
An artificial neural network model to enhance the performance index of the mass transfer function in tube flow, by means of an entry-region coil-disc assembly promoter inserted coaxially, is presented in this paper. The popular backpropagation algorithm was utilized to train and test the network and to normalize the network data in order to predict the performance of the mass transfer function. The experimental data of the study were separated into two sets, one for training and one for validation: 248 sets of the experimental data were used for training and 106 sets for validation of the artificial neural network, using MATLAB 7.7.0 toolboxes, to predict the performance index of mass transfer in the tube with faster convergence and accuracy. The weights were initialized within the range [-1, 1]. In all attempts, the network parameters were a learning rate of 0.10 and a momentum term of 0.30. The finest model was selected based on the MSE, STD, and R2; a network with the 5_8_1 configuration is recommended for mass transfer training. This research reveals that adding more layers and more nodes in the hidden layer of an artificial neural network may not increase the performance of the mass transfer function.
Wireless sensor networks are mainly deployed in hostile settings, such as military battlefields, environment monitoring, nuclear power plants, target tracking, seismic monitoring, and fire and flood detection, where continuous monitoring and real-time response are of paramount need. A wireless sensor network has a large number of sensor nodes that are easily connected to one another. These sensor nodes are used to sense and assess heterogeneous factors, such as environmental pressure, temperature, humidity, and soil composition, and it is therefore very difficult to secure them from various attacks. Being constrained in resources such as battery power, memory capacity, and computational power, these networks are vulnerable to various types of internal and external attacks. One such attack is the wormhole attack, where attackers create a tunnel between two points in the network. In this paper, the proposed method detects and also prevents the wormhole attack in wireless sensor networks. The proposed strategy uses the location information of nodes in the network and applies the Euclidean distance measure to recognize and restrain the wormhole attack, making the connections between sensor nodes more secure and efficient.
Widespread and easily available tools for video synthesis and manipulation have become common in the digital era. It is therefore necessary, imperative, and difficult as well to ensure the authenticity of video information. The authenticity and trustworthiness of video are of paramount importance in a variety of areas, such as courts of law, surveillance systems, journalism, advertisement, the movie industry, and the medical world. Any malicious alteration or modification could affect the decisions taken based on these videos. Video authentication and tampering detection techniques, addressing intentional changes that are visible or invisible in a video, are discussed in this paper.
This paper deals with the visibility and position accuracies of four Indian Regional Navigation Satellite System (IRNSS) satellites. The position accuracies have been determined accurately, after the launch of the fourth satellite, with respect to standard time, as observed by a receiver placed at the School of Engineering and Technology, Jain University. IRNSS is independent of the Global Positioning System (GPS). It is used for mapping, object tracking, navigation, surveying, and other services for users within a coverage area of 1500 km.
Multicore processors have become an exemplar for high-performance computing. Designing a real-time scheduler that enhances the throughput of the entire system when scheduling load on shared caches is the focus of this paper. Real-time scheduling schemes are appropriate for shared caches when the methods are aware of the above-mentioned issues. The existing methods work using simple memory sets, all the priorities are static, and a non-scalable data structure makes task priorities less flexible. Our work addresses these issues in the soft real-time setting, where task scheduling requires functionally accurate results but the timing constraints are softened. To demonstrate this, we use SESC (SuperESCalar Simulator) and CACTI (Cache Access Cycle Time Indicator) to investigate the efficiency of our approach on various multicore platforms.
With the tremendous growth of population and the consequent increase in road traffic, the demand for an optimized traffic data collection and management framework calls for extensive research. The collection of traffic data using multiple sensors and other capture devices has been addressed in multiple studies deploying mechanisms based on geodetically static sensor agents. However, this parallel line of research has significantly ignored the control of data replication during processing. This work proposes a novel framework for capturing and storing traffic data. During multi-node traffic data analysis, controlling replication in order to reduce cost has also been a challenge. Recent research outcomes demonstrate the use of agent-based sensor networks to accumulate road traffic data; however, a multipurpose framework for accumulating and managing traffic data is still in demand. This research also considers the most effective cloud-based storage for traffic data, informed by the most popular cloud-based storage service providers. The accumulation of the data is followed by a predictive system for road traffic data analysis; hence, in this work, we also explore standard machine learning techniques to identify the most suitable technique with performance in mind. This work also proposes a performance evaluation matrix for comparing traffic frameworks.
A graph is a way of representing data as a web of relationships. Graphs consist of nodes and edges, where nodes are concepts and edges are the relationships between them. Graph analytics is the science of solving problems expressed on graphs, and it is useful for finding hidden patterns, relationships, similarities, and anomalies in graphs. These tasks arise in many application areas, such as protein analysis, fraud detection, health care, computer security, and financial data analysis. Minimum description length (MDL) comes from information theory and can be used for universal coding or universal modeling. The SUBstructure Discovery Using Examples (SUBDUE) algorithm uses MDL to discover substructures. In this paper, the use of MDL for graph analytics is shown by applying MDL encoding to various graph datasets; in particular, graph matching is solved using MDL. Further, a comparative analysis is done to show how the MDL value changes with varying graph properties. The subgen tool is used to generate the graph datasets. Statistical tests are applied to establish in which cases the MDL value changes significantly.
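As a rough, hedged illustration of the MDL idea above, the toy Python sketch below scores a labelled graph by an approximate description length (bits for the vertex labels plus bits to identify the edge set); it is a deliberate simplification, not SUBDUE's actual encoding, and all names in it are illustrative.

    import math

    def mdl_graph_bits(num_nodes, edges, num_labels):
        # Toy MDL-style description length of a labelled undirected graph:
        # bits for the vertex labels plus bits for the adjacency structure.
        # An illustrative simplification, not SUBDUE's exact encoding.
        vertex_bits = num_nodes * math.log2(max(num_labels, 2))
        possible = num_nodes * (num_nodes - 1) // 2    # candidate edge slots
        e = len(edges)
        edge_count_bits = math.log2(possible + 1)      # how many edges exist
        edge_choice_bits = math.log2(math.comb(possible, e)) if e else 0.0
        return vertex_bits + edge_count_bits + edge_choice_bits

    # Example: a 5-node ring with 2 vertex labels
    print(mdl_graph_bits(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], 2))

Denser or more regular structure yields fewer bits, which is the property MDL-based graph matching exploits.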
In any organization, information is transmitted over the Internet, and maintaining the confidentiality of the data during transmission plays a vital role. Many existing approaches, such as firewalls, antivirus software, and encryption and decryption techniques, are available to provide security, but these approaches still suffer due to the sophisticated nature of attackers. We therefore turn to swarm intelligence approaches to build an intrusion detection system. In this paper, we use a swarm intelligence approach, namely the artificial bee colony (ABC), to implement a classifier that generates classification rules to detect intruders. The KDD dataset was used for this purpose. Before classification-rule generation, subsets were generated based on the correlation between attributes and the target class label, to reduce complexity. The results show that ABC identified the different types of attacks more effectively than existing approaches.
This paper presents various security issues related to the hypervisor in the cloud. It also covers issues posed by a malicious virtual machine running over a hypervisor, such as exploiting more resources than allocated to the VM, stealing sensitive data by bypassing VM isolation through side-channel attacks, and mounting attacks to compromise the hypervisor. In this paper, we also present the security measures and requirements to be taken, and the architectures the hypervisor needs, to handle these various security concerns.
Security in wireless sensor networks is a noteworthy challenge for researchers nowadays. Reinforcing the authentication framework before connection establishment is the right way to improve the architecture and to provide communication secured from eavesdroppers. This paper proposes a hybrid security mechanism that uses a chaotic neural network while setting up the connection between two nodes: a randomly generated pairwise key is encrypted using the chaotic neural network. The proposed algorithm greatly enhances the security of the key shared between the nodes. The CPU time of the proposed algorithm is assessed against different key generation algorithms.
Nuclear medicine images suffer from blur due to the scattering of emitted radiation. An image processing technique is proposed in this paper to reduce blur in nuclear images. This is achieved in two main stages. First, a maximum likelihood estimate of the distortion operator, or point spread function, is computed from the image itself. Then, regularized least-squares filtering is performed, constrained by the noise power computed from the image. Pre-filtering is also done to avoid unwanted high-frequency drops. The algorithm is tested on real cardiac single-photon emission computed tomography images. Quantitative and qualitative evaluations show the potential of the proposed algorithm in reducing blur while maintaining a high peak signal-to-noise ratio.
Owing to the rapid growth of multimedia technology, multimedia information is easily accessed by any user, and the construction and distribution of such information are also very easy; nowadays it can be uploaded even by unprofessional users. The low quality and the large number of duplicated video files available make video retrieval more and more complex. The general way of representing a video segment is the shot, which consists of a series of frames. Among these, the key-frame-based shot method specifically assists in searching video content: when a client submits an image query, the image is matched against the indexed key frames using a similarity distance. As a result, the selection of key frames is most significant, and several methods are used to automate the process. This paper proposes a new technique for key frame selection. The proposed method performs significantly well, and the experiments prove this.
It is envisaged that Quality-of-Service (QoS) support for multimedia-rich applications is the key to the success of next-generation wireless mesh networks (WMNs). However, QoS support in WMNs is challenging because of the limited network capacity and intensive resource requirements. In the recent past, multi-radio multi-channel (MRMC) networking has emerged as a promising approach to boost network capacity. By assigning non-overlapping channels to radios, MRMC-WMNs can reduce interference and, hence, increase network capacity. The performance of MRMC-WMNs depends heavily on routing and channel assignment; the two are tightly coupled and should be jointly optimized. Over the years, different research works have addressed the issues in joint channel assignment and routing (JCAR). This paper critically reviews the existing JCAR approaches with respect to different parameters, such as the routing metrics used, the interference model, and the methodology used. The authors also present the pros and cons of these approaches, and future research directions in JCAR are discussed.
Improved imputation plays a major role in research on data pre-processing for data analysis. Missing value treatment is implemented with many traditional approaches, such as attribute mean/mode and cluster-based mean/mode substitution; in these approaches, the major focus is the missing-valued attribute. This paper presents a framework for correlated cluster-based imputation to improve the quality of data for data mining applications. We apply correlation analysis to the data set with respect to the missing data attributes. Based on the highly correlated attributes, the data set is divided into clusters using suitable clustering techniques, and the missing content is imputed with the respective cluster mean value. This correlated cluster-based imputation improves the quality of the data. The imputed data are analyzed with K-Nearest Neighbor (KNN) and J48 decision tree multi-class classifiers. The imputation attains 100 % accuracy with correlated-cluster-mean-imputed data, compared with attribute-mean-imputed data.
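A minimal sketch of the correlated-cluster idea, assuming a pandas DataFrame and KMeans as the "suitable clustering technique"; the column names, k, and top_n below are hypothetical choices, not the paper's settings.

    import pandas as pd
    from sklearn.cluster import KMeans

    def correlated_cluster_impute(df, target_col, k=2, top_n=1):
        # Rank attributes by absolute correlation with the missing-valued column.
        corr = df.corr(numeric_only=True)[target_col].drop(target_col).abs()
        basis = corr.sort_values(ascending=False).head(top_n).index.tolist()
        # Cluster the records on the highly correlated attributes only.
        filled_basis = df[basis].fillna(df[basis].mean())
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(filled_basis)
        out = df.copy()
        out["_cluster"] = labels
        # Impute each missing value with its own cluster's mean.
        out[target_col] = out.groupby("_cluster")[target_col].transform(
            lambda s: s.fillna(s.mean()))
        return out.drop(columns="_cluster")

    df = pd.DataFrame({"age": [25, 30, None, 41],
                       "salary": [30, 34, 36, 52]})
    print(correlated_cluster_impute(df, "age"))

The replaced value comes from the mean of the cluster the record falls in, rather than the global attribute mean.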
Hand gesture recognition is temporal pattern analysis with a mathematical interpretation. It provides a means of non-verbal communication among people and a more natural and powerful means of human-computer interaction (HCI) for virtual reality applications. The development of stochastic models for HCI has led to 1-D hidden Markov models (1D HMMs) and training algorithms that seek a high recognition rate and low computational complexity. Due to their dimensionality and computational efficiency, pseudo-2-D HMMs (P2DHMMs) are often favored as a flexible way of representing events with temporal and dynamic variations; both 1-D HMMs and 2-D HMMs are applicable to hand gestures, which are of increasing interest in hand gesture recognition (HGR) research. The main issue with the 1-D HMM is that the recursions in the forward and backward procedures repeatedly multiply probability values together; this product quickly tends to zero and exceeds any machine's storage capabilities. This work presents an application of pseudo-2-D HMMs to classify hand gestures from measured values of an accelerating image. A comparison of experimental results between the 1-D HMM and the pseudo-2-D HMM with respect to recognition rate and accuracy shows a prominent result for the proposed approach.
Today, a vast amount of news in various forms is hosted on the web, including news articles, digital newspapers, news clips, podcasts, and other sources. Traditionally, news articles and writings have been used to carry out sentiment analysis on topics. However, news channels and their transcripts represent vast data that have not been examined for business aspects. In this light, we have charted out a methodology to gather transcripts and process them for sentiment tasks, by building a system to crawl Webpages for documents, index them, and aggregate them for topic analysis. The vector space model has been used for document indexing with a predetermined set of topics, and sentiment analysis is carried out through the SentiWordNet data set, a lexical resource used for opinion mining. The areas of insight are mainly the polarity index (degree of polarity or subjectivity) of the news presented, as well as its coverage. This research shows insights that can be used by businesses to assess the content and quality of their content ...
In this paper, we present an efficient system for managing inventory in various applications dealing with solid or liquid assets. By implementing IoT-based inventory management, we eliminate unnecessary manpower and automate the stages between measurement and order placement, thereby improving the efficiency of inventory management. The idea utilizes an ultrasonic transducer and a processing device capable of connecting to the Internet, such as a Raspberry Pi, to measure the inventory, send a mail to the supplier and/or the company personnel for order placement, and display the present stock availability on a Web page hosted by our system.
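A hedged sketch of the measure-and-notify loop described above; the sensor read is stubbed out (the HC-SR04-style GPIO timing is omitted), and the tank depth, threshold, addresses, and SMTP host are all hypothetical placeholders.

    import smtplib
    from email.message import EmailMessage

    TANK_DEPTH_CM = 100     # hypothetical container depth
    REORDER_PCT = 20        # reorder when stock falls below 20 %

    def read_distance_cm():
        # Stand-in for the ultrasonic echo-time measurement taken on the
        # Raspberry Pi's GPIO pins; replace with the real sensor read.
        return 85.0         # simulated reading: tank is mostly empty

    def check_stock_and_notify():
        # Distance from sensor to surface -> remaining stock percentage.
        level_pct = 100.0 * (TANK_DEPTH_CM - read_distance_cm()) / TANK_DEPTH_CM
        if level_pct < REORDER_PCT:
            msg = EmailMessage()
            msg["Subject"] = "Reorder: stock at %.0f %%" % level_pct
            msg["From"], msg["To"] = "plant@example.com", "supplier@example.com"
            msg.set_content("Stock is below the reorder threshold.")
            with smtplib.SMTP("smtp.example.com") as s:   # hypothetical server
                s.send_message(msg)
        return level_pct

The same reading can also be published on the stock-availability Web page the system hosts.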
E-commerce applications have grown popular with emerging information needs and are becoming everyone's choice for seeking information and expressing opinions through reviews. Recommender systems play a key role in serving the user with the best Web services by suggesting probably liked items or pages, keeping the user out of the information-overload problem. Past research on recommenders mostly focused on improving the quality of suggestions based on the user's navigational patterns in history, but not much emphasis has been placed on the concept drift of the user in the current session. In this paper, a new recommender model is proposed that not only identifies the access sequence of the user according to the domain knowledge, but also identifies the user's concept drift and reflects it in the recommendations. The proposed approach is evaluated against existing algorithms and does not sacrifice the quality of the recommendations.
Many attacks are possible on wireless sensor networks (WSNs); the replay attack is a major one among them and, moreover, very easy to execute. A replay attack is carried out by continuously keeping track of the messages exchanged between entities and replaying them later to either bring down the target entity or degrade the performance of the target network. Many mechanisms have been designed to mitigate the replay attack in WSNs, but most of them are either complex or insecure. In this paper, we propose a mechanism to mitigate the replay attack in WSNs. In the proposed work, each node maintains an assorted value of every received packet in a table, and the replay attack is detected and mitigated using the assorted values of the already received packets. The proposed mechanism was simulated and its performance evaluated; it was found to mitigate the replay attack, secure the network, and take less time for processing.
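The abstract does not define the "assorted value" construction precisely; the sketch below assumes it behaves like a digest of the packet contents (a SHA-256 digest stands in here) and shows the table-lookup detection step.

    import hashlib

    class ReplayFilter:
        # Each node keeps a table of digests of packets it has already
        # accepted; seeing the same digest again marks a replayed packet.
        def __init__(self):
            self.seen = set()

        def accept(self, packet_bytes):
            digest = hashlib.sha256(packet_bytes).digest()
            if digest in self.seen:
                return False        # replay detected: drop the packet
            self.seen.add(digest)
            return True

    f = ReplayFilter()
    assert f.accept(b"temp=23.5,seq=1")
    assert not f.accept(b"temp=23.5,seq=1")   # the replayed copy is rejected

In a real deployment the table would be bounded (e.g., a sliding window), since sensor nodes have little memory.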
The RDF structure makes semantic Web representation robust for further indexing and querying. With the magnified growth in semantic Web data, RDF query processing has become complex due to numerous joins. Hence, to achieve scalability and robustness in search space and time, the query joins must be optimized. The majority of existing benchmarking models have been evaluated on a single source. These methods utterly fail to optimize queries with nested-loop joins, bind joins, and AGJoin, because search cost is the only optimization factor considered by all of the existing models. Hence, to optimize distributed chain queries, in this paper we propose a novel evolutionary approach, based on progressive genetic evolutions, to identify optimized chain queries even for distributed triple stores. The experimental results show the significance of the proposed model over the existing evolutionary approaches. The metrics and evolution process introduced in this paper also motivate future research to identify new dimensions of chain query optimization for search in distributed triple stores.
In this era of big data, a huge volume of data is produced, and the storage and analysis of such data are not possible with traditional techniques. In this paper, a method to implement the MapReduce Apriori algorithm using a vertical layout of the database, along with the power set and the set-theoretic concept of intersection, is proposed. The vertical layout has the advantage of scanning only a limited number of records to calculate the support of an item. By using the power set concept, we are able to generate frequent itemsets in just two scans of the database, reducing the complexity. The concept of set intersection is used to calculate support. The results show a good improvement over the existing MapReduce Apriori algorithm.
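A single-machine sketch of the vertical-layout idea (the MapReduce distribution is omitted): one scan builds per-item TID-lists, and the support of any candidate itemset is then the size of the intersection of its TID-lists, with no further database scans. The transactions below are toy data.

    from itertools import combinations

    transactions = {1: {"a", "b"}, 2: {"a", "c"}, 3: {"a", "b", "c"}, 4: {"b"}}

    # Scan 1: build the vertical layout (item -> set of transaction IDs).
    tidlists = {}
    for tid, items in transactions.items():
        for item in items:
            tidlists.setdefault(item, set()).add(tid)

    # Candidate itemsets come from the power set of the items; support is
    # computed purely by set intersection of the TID-lists.
    min_support = 2
    for size in (2, 3):
        for itemset in combinations(sorted(tidlists), size):
            tids = set.intersection(*(tidlists[i] for i in itemset))
            if len(tids) >= min_support:
                print(itemset, len(tids))

Here ('a', 'b') and ('a', 'c') come out frequent with support 2, found without rescanning the transaction table.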
Cognitive radio technology was first introduced by J. Mitola in 1999 to solve the problem of spectrum scarcity for wireless communication. A Cognitive Radio Ad hoc Network (CRAHN) is one of the cognitive-radio-based architectures, in which wireless unlicensed nodes communicate in an infrastructureless environment. Each node in a CRAHN has the cognitive capability of sensing the surrounding radio environment to access the underutilized licensed spectrum in an opportunistic manner; however, those unlicensed users (secondary users) should not cause any interference to the communication of licensed users (primary users). In CRAHNs, each node operates as an end system and also as a router to forward packets for multi-hop communication. Due to the dynamic topology, time- and space-varying spectrum availability, and lack of a centralized system, CRAHNs are vulnerable to various security attacks. The hello flood attack is a network-layer attack in CRAHNs that can drain the limited resources of CRAHN nodes through excessive flooding of hello messages in the network. The proposed distributed cooperative algorithm mitigates the hello flood attack in CRAHNs.
Disruption-tolerant networks (DTNs) are opportunistic networks characterized by irregular network connectivity, long and variable delays, asymmetric data rates, and low node density. Data access is a big research issue in DTNs because of these distinguishing characteristics. To improve data access in DTNs, schemes like caching and replication were introduced to provide distributed storage of data in the network, thereby increasing data availability. This paper compares some of the intentional caching techniques, such as cooperative caching, duration-aware caching, adaptive caching, and distributed caching, based on various parameters like contact duration, caching cost, the cache-node election process, and forwarding schemes. Upon simulation with the ONE simulator, these schemes were found to increase delivery probability and reduce data access delay to a great extent, thereby improving the performance of the network.
Nearest neighbor classifiers demand high computational resources, i.e., time and memory. Researchers in pattern recognition follow two distinct approaches to reduce this computational burden: reducing the reference set (training set) and reducing the dimensionality, referred to as prototype selection and feature reduction (a.k.a. feature extraction or feature selection), respectively. In this paper, we cascade the two methods to achieve reduction in both directions. The experiments are done on benchmark datasets, and the results obtained are satisfactory.
This paper describes an approach for mitigating distributed denial-of-service (DDoS) attacks in the Cloud. DDoS attacks are a huge pitfall for the Cloud and are still not handled well. We present a survey of existing work on defending against DDoS attacks and the associated mechanisms. In the Cloud, intrusion detection systems can be deployed at various positions, such as the front end, the back end, or the virtual machine; most existing IDS have been deployed at the virtual machine. We propose a new framework using ensemble classifiers, namely bagging and stacking, to detect intrusions by both insiders and outsiders; our proposed framework is deployed at the back end. We focus on defending against DDoS attacks in the cloud environment, as they are the bottleneck of a Cloud environment compared to other types of attacks.
In this paper, the spectrum holes in licensed frequency bands are utilized to satisfy the quality-of-service (QoS) demands of cognitive users. The available spectrum holes are classified as white, gray, and black, and the bands are then fairly allocated to the cognitive users based on their QoS requirements. The data traffic from cognitive users is classified into flows requiring high, medium, and low QoS, and the white, gray, and black spaces are respectively assigned to them. Thus, spectrum utilization is improved and fairness is achieved, since more secondary users are accommodated without compromising QoS.
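A toy sketch of the class-to-space mapping described above; the band values and the simple FIFO hand-out are illustrative assumptions, not the paper's allocation policy.

    # Map traffic classes to spectrum-hole classes as in the scheme above.
    HOLE_FOR_TRAFFIC = {"high": "white", "medium": "gray", "low": "black"}

    def allocate(flows, holes):
        # flows: list of (user, qos_class); holes: class -> list of free bands
        assignments = []
        for user, qos in flows:
            pool = holes.get(HOLE_FOR_TRAFFIC[qos], [])
            if pool:
                assignments.append((user, pool.pop(0)))  # simple fair hand-out
        return assignments

    print(allocate([("u1", "high"), ("u2", "low"), ("u3", "medium")],
                   {"white": [54.0], "gray": [88.5], "black": [470.2]}))

High-QoS flows land on the cleanest (white) spaces while low-QoS flows are still served from black spaces, which is how more secondary users are accommodated without degrading QoS.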
Functional magnetic resonance imaging (fMRI) is a safe, non-invasive technique used for understanding brain functions in response to various stimuli and hence for predicting brain disorders. fMRI signal patterns and brain mapping have shown promise in medical science in recent years, and adequate contributions have been made in the literature on fMRI signal analysis. This paper identifies notable research works that have contributed to fMRI signal analysis and the prediction of brain functions, and performs a systematic review of them. The review presents the strengths, the weaknesses, and the research gaps in these works based on 11 useful parameters, such as sparsity, non-linearity, and robustness.
Characterizing program behavior using static analysis is a challenging problem. In this work, we focus on the fundamental problem of program similarity quantification, i.e., estimating the behavioral similarity of two programs. The solution to this problem is a sub-routine for many important practical problems, such as malware classification, code-clone detection, program testing, and so on. The main difficulty is characterizing the run-time program behavior without actually executing the program or performing emulation. In this work, we propose a novel behavior tracing approach to characterize program behaviors. We use the call-dependency relationship among the program's API calls to generate a trace of the API calling sequence. The dependency tracking is done in a backward fashion, so as to capture the cause-and-effect relationship among the API calls. Our hypothesis is that this relationship can capture the program behavior to a large extent. We performed experiments by considering several "versions" of a given software, where each version was generated using code obfuscation techniques. Our approach was found to be resilient up to 20 % obfuscation, i.e., it correctly detected all obfuscated programs that are similar in behavior based on their API call sequences.
The Web contains a large amount of data of an unstructured nature, which yields relevant as well as irrelevant results. To remove the irrelevancy in results, a methodology is defined that retrieves semantic information. Semantic search deals directly with a knowledge base that is domain specific. Everyone constructs their ontology knowledge base in their own way, which results in heterogeneity among ontologies. The problem of heterogeneity can be resolved by applying an ontology mapping algorithm. All the documents are collected from the Web by a Web crawler, and a document base is created. The documents are then given as input for semantic annotation against the updated ontology. The results for the user's query are retrieved from the semantic information retrieval system after applying the search algorithm. The experiments conducted with this methodology show that the results thus obtained provide more accurate and precise information.
As the size and complexity of online biomedical databases grow day by day, finding essential structured or unstructured patterns in distributed biomedical applications has become more complex. Traditional Hadoop-based distributed decision tree models, such as the probability-based decision tree (PDT), the Classification And Regression Tree (CART), and the multiclass classification decision tree, have failed to discover relational patterns, user-specific patterns, and feature-based patterns, due to the large number of feature sets. These models depend on the selection of relevant attributes and on a uniform data distribution; data imbalance, indexing, and sparsity are the three major issues in these distributed decision tree models. In the proposed model, an enhanced attribute-selection ranking model and a Hadoop-based decision tree model are implemented to extract user-specific interesting patterns from online biomedical databases. Experimental results show that the proposed model has a high true positive rate, high precision, and a low error rate compared to traditional distributed decision tree models.
Mobile Cloud Computing (MCC) presents new kinds of offerings and amenities that let mobile customers gain the maximum benefits and advantages of cloud computing. In spite of these advantages, security is still the main concern and a reason for worry for cloud customers. This paper focuses on the use of a biometric authentication framework for secure access to restricted information in the cloud using a mobile device. Indeed, biometrics provides a greater measure of security than standard authentication methods. In this paper, we propose the pre-processing steps and algorithms for extracting features and matching biometric traits. Using the proposed algorithm, the user will not only benefit from local computing power and storage capacity, but will also gain the advantages of higher authentication accuracy, customized services with low hardware cost, and secure access.
Evaluation of learners' responses is an important metric in determining learner satisfaction with any learning system. E-learning systems currently use string matching or regular-expression-based approaches to evaluate short answers. While these establish the correctness of an answer, they are limited to handling predictable errors only. The nature of errors, however, may vary, and it is important to intelligently judge the nature of an error to correctly gauge the learner's state of learning. A better learning experience requires the system to also display benevolence, an innately human behavioral characteristic, in evaluating the response. This paper presents a k-variable fuzzy finite state automaton-based approach to implement an evaluation system for short answers. The proposed method attempts to emulate human behavior in the context of the errors committed, which may be knowledge-based or inadvertent in nature. The technique is explained with sample scores from a test conducted on a group of learners.
Content-Based Video Retrieval (CBVR) is an approach for retrieving similar videos from a database, and the need for efficient retrieval techniques is increasing day by day. This paper uses both color and shape features to retrieve similar videos. In our system, we first identify the key frames of the video shots. We find the most dominant color and the edge points of each key frame and store them in the feature database. The color and shape features of a query video are calculated and compared with the features stored in the feature database, and videos falling within a threshold are retrieved as the most similar videos. The combination of the two features results in the high performance of our system.
The throughput of multiprogramming and time-sharing systems mainly depends on the careful scheduling of the CPU and other I/O devices. CPU scheduling should control the waiting time, response time, turnaround time, and number of context switches. One of the most extensively used scheduling algorithms is shortest remaining time first (SRTF), which gives the lowest average waiting time. But this algorithm suffers from some drawbacks. One is that every upcoming process, if selected for execution, causes a context switch even when it is only slightly shorter than the currently running process. As the number of such situations increases, the number of context switches increases, reducing the performance of the system. In this paper, we modify traditional SRTF into intelligent SRTF by changing the preemption decision so as to decrease the number of context switches. The main idea of our proposed algorithm is to make a context switch only if the next process's burst plus the context-switch overhead is shorter than the remaining time of the currently running process. By this we reduce the number of context switches, and thereby the performance of the system is improved.
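The modified preemption rule stated above is simple enough to show directly; this sketch, with made-up time units, contrasts it with classic SRTF.

    def should_preempt(remaining_current, burst_next, switch_overhead):
        # Intelligent SRTF: preempt only when the newcomer, including the
        # cost of the context switch itself, beats the running process.
        return burst_next + switch_overhead < remaining_current

    # Classic SRTF would preempt here (9 < 10); the modified rule does not,
    # because 9 + 2 = 11 exceeds the 10 units the current process still needs.
    print(should_preempt(remaining_current=10, burst_next=9, switch_overhead=2))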
Nowadays, the security of information and applications is receiving unusual public attention, because millions are spent combating continuous threats. Threats (anomalies) occur widely at the programming scope, through the exploitation of code, and at the application scope, due to badly structured development. Today, various machine learning techniques are applied to application-level behavior to discriminate anomalies, but not much work has been done on coding exploits. So, in this paper, we present extensive work on detecting a wide range of anomalies arising from coding exploits. We use standard tracing tricks and tools available on the Linux platform, describing how to observe the behavior of a program execution's outcomes and how to model the necessary information collected from the system as part of active learning. The experimental work was done on various artificial programs and Linux commands, and performance was compared on artificial datasets collected during normal program runs.
Fractals are known for their aesthetic appeal. We have calculated the fractal dimension (FD) with the box-counting method for the Adavus, pure dance movements in BharataNatyam. These poses were found to be fractal, with FD in the range 1.3-1.5; this FD range has already been shown to be naturally aesthetically appealing to the human eye. Fractals have not been used so far for Indian Classical Dance (ICD) pose analysis. In this paper, we use FD for the automatic classification of system-generated dance poses. This experimental study also reveals that dance poses in the FD range 1.5-1.6 are found to be creative and appealing by dance experts. Taking the classification ratings of system-generated dance poses by an international dancer as gold standard data, we found the accuracy, recall, and F-score to be 46 %, 52.63 %, and 48.77 %, respectively. The results are promising and encourage further research using FD with other parameters to measure the aesthetics of dance poses.
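For reference, a minimal box-counting estimator for a binary pose image; grid offsets and size schedules vary between implementations, so this is a sketch rather than the paper's exact procedure.

    import numpy as np

    def box_counting_fd(binary_image):
        # Count occupied boxes N(s) for a series of box sizes s, then fit
        # log N(s) against log(1/s); the slope estimates the fractal dimension.
        img = np.asarray(binary_image, dtype=bool)
        sizes, counts = [], []
        s = min(img.shape) // 2
        while s >= 1:
            n = 0
            for i in range(0, img.shape[0], s):
                for j in range(0, img.shape[1], s):
                    if img[i:i + s, j:j + s].any():
                        n += 1
            sizes.append(s)
            counts.append(n)
            s //= 2
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    print(box_counting_fd(np.eye(64, dtype=bool)))   # a straight line: FD ~ 1

Silhouettes that fill the plane more intricately push the estimate up toward the 1.3-1.5 band reported above.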
A virtualized environment generates a large amount of monitoring data; even so, it is very hard to correlate such monitoring data effectively with the underlying virtualized environment, due to its dynamic nature. This paper introduces a new method of mapping relationships in a virtualized data center by identifying dependent variables within the monitored performance data. Dependent variables have an association relationship that can be measured and validated through statistical calculations. The new algorithm introduced here automatically searches for such relationships between the various devices of the virtualized environment. Because the virtualized environment is dynamic, we take measurements at multiple points in time; any relationship that holds across these time intervals identifies dependent variables. These dependent variables are used to characterize the complex interactions of the virtual data center devices. Such relationship details can be used to build a model to predict fault occurrences. The paper explains the algorithm and the experimental results obtained during our validation phase.
Efficient query processing plays a critical role in numerous settings, especially in data-centric applications. From the architecture to the final stage of result compilation, a query processing system has to overcome several challenges. In this paper, we compare different architectures and existing approaches to query processing. Modern database systems have to deal with data distribution, and, to make the challenges more complex, the participating databases might be heterogeneous in nature. We discuss the various challenges of query processing systems in centralized, distributed, and multidatabase settings, and we analyze the various parameters and metrics that directly impact query processing.
The increase in traffic in cities makes emergency vehicles, like ambulances, take more time to reach their destination from the source, putting human lives in danger. Emergency vehicles, like ambulances and fire engines, therefore require better traffic management for safe and fast travel to safeguard human lives. In this paper, a new method is proposed for better management of emergency-vehicle traffic through the use of the internet of things (IoT). The proposed method enables emergency vehicles to signal their arrival to the traffic signal controller placed at the traffic junction so that the traffic is regulated accordingly. The system requires the users traveling in the emergency vehicle to signal the traffic controller hardware through an Android application deployed on their mobile phones. We also propose the idea of an advanced system that controls the traffic automatically.
In this paper, a wireless sensor sequence data mining model is demonstrated for smart home and Internet of Things data analytics. We explore sensor data patterns by correlating the multi-stream sensor data fused from the wireless sensor network. The proposed conceptual data model shows how sensor data patterns from heterogeneous sensing systems can be effectively realized for various IoT applications; it includes the discovery of frequent-pattern itemsets using various computational archetypes. The results of the explicit patterns augmented for data analytics are encouraging, as the prototype was tested with real-time data rather than test-bed or synthetic data.
Exposure to violent content in movies has an effect on both the behavioral and the psychological aspects of children. Efficient algorithms are needed to automatically detect violent content in movies. Existing work on identifying violent content in Hollywood movies is studied, along with the datasets on which the evaluations were carried out. We also examine the different set of challenges posed by Tollywood movies and show how the existing work may not be able to handle these challenges. An approach based on rough sets is proposed to handle the highly dynamic nature of violent scenes.
Optical Character Recognition (OCR) is a unique and challenging field in pattern recognition. Demand persists in OCR, and various works keep emerging to provide acceptable solutions. In this field, Tamil handwriting recognition is getting popular due to the desire of Tamil lovers to computerize the Tamil language, including handwritten documents. This task is not an easy one, due to the curvy shapes of the characters and the variation in their structure across writers; the treatment must happen at the structure level to address entire shapes. This paper mainly focuses on identifying the shape of the structure, where the shape is derived from a triangle-based hierarchical representation. A prism tree algorithm is utilized to complete this task, where the shape is located by the tree representation. Finally, vector values are extracted from the tree-based shape representation. A hierarchical Support Vector Machine (SVM) is used to predict the character from those vector values. Good results are achieved when the shape of the character structure is well suited to the real nature of the character.
Due to the emergence of innovative technologies over the Internet, traditional TV broadcasting is changing to IPTV for next-generation networks. IPTV is the delivery of multimedia content to multiple subscribers through multicasting over the well-known IP. IPTV depends on the Internet and telecommunications; therefore, the inherent issues need to be identified and resolved for IPTV to succeed. In this paper, we review the IPTV concept and its related aspects. The paper throws light on the issues of IPTV; the integration of different communication media and approaches, such as P2P (unicast model) and IPTV (multicast model); and the need for QoS requirements and the evaluation of QoS in the context of IPTV networks. Moreover, the paper describes user behavior that can have an impact on the content distribution infrastructure, and the underlying strategies for content delivery and the reduction of zapping time. We also present a hybrid approach that combines the features of delay-insensitive and delay-sensitive approaches to strike a balance between highly popular and less popular IPTV channels. The insights of the paper are useful for further research in the area of IPTV, which is going to be the next-generation network for high-quality and complex content delivery.
In the present era, web services are evolving and rapidly extending to cloud-computing-based remote mobile devices. The locus of the information world is moving from the programmable-systems era to the cognitive-systems era (Kelly III John E (2015) IBM Research Whitepaper on Cognitive computing, [1]). Web users therefore have high expectations for the quality of interactions, the accuracy of results, and the availability of services. In the world of cognition- and reasoning-based web systems, to answer selected problems and to recommend product information in the right 'context', we need more diverse reasoning strategies and on-the-fly, strategy-based intelligent cognition algorithms to supply Cognition-as-a-Service to cloud-based web users. Here we summarize recent works and early findings, such as: (1) how natural extension takes place in the cognitive knowledge of dynamic web services; (2) how information processing time differs between HDES and AES; and (3) how the arbitrage opportunity of a cognition algorithm improves the frequency of exact findings in information search.
Biometrics is gaining importance in security due to its unique characteristics. A number of methods in biometrics and cryptography have been proposed for secure message communication. Cryptography-based security employs cryptographic keys generated by various key generation algorithms. However, a user has to remember the cryptographic key or maintain it securely in a database at his own risk; once the stored key is compromised, an attacker can access the user's data easily. This motivates us to develop a new security mechanism to protect a user's personal data. In this paper, we present a novel approach to biometric-based cryptographic key generation using the fingerprint data of a user. With respect to the security aspects, our approach is secure and more flexible for key generation, and the key can be used to encrypt the user's personal data. The fingerprint features (reference points) are generated from the directional area (directional component) and a probability distribution; these points have uniform probability across all points. The advantages of this approach include a reduction in pre-processing time. A 1024-bit key is generated from the extracted fingerprint attributes. Experimental results are discussed in this paper.
Feature selection can be applied in most medical domains to identify the most suitable features, improving classification accuracy and reducing computation time, since the classifier works on a reduced number of features. The nature of the problem domain and the design of the soft computing methods used determine the effectiveness of feature selection methods. This study performs feature selection using a Genetic Algorithm (GA) to generate the best feature subset of the WBCD breast cancer dataset. The features with the best fitness value are selected for classification. Classification is done using a supervised approach, the Support Vector Machine (SVM), along with some constraints to specify the performance measures of classification.
A co-clustering approach for heart disease analysis using a weight-based approach is presented. Towards performance improvement in database mining, co-clustering approaches are used to minimize the search overhead. For the co-clustering of data, information-theoretic co-clustering (ITCC) has been used as an optimal means of clustering. In this co-clustering approach, however, elements are clustered based on a Bregman divergence criterion, following the convergence of Bregman index optimization using the Euclidean distance (ED) approach. The ED approach works over the magnitudes of the elements, without consideration of the data relations; in many applications, the relationships between elements play a significant role in decision-making. In this paper, a relation-oriented co-clustering logic following a weight-allocation process is presented. The proposed weighted ITCC (W-ITCC) method is applied to the Cleveland data set for heart disease analysis for performance comparisons.
Most users access the Internet or web services by means of smartphones, which are capable of handling all kinds of computations. There is a necessity to provide mutual authentication between clients (mobile hosts) and servers. In general, users authenticate to a system or website by means of a user ID and password, a claim of the user's identity. Passwords have their own vulnerabilities, such as dictionary, brute-force, guessing, observation, and spyware attacks. Users are the weakest link in any secure system because they choose simple, short, and easy-to-remember passwords. We propose a strong password generation algorithm with the help of the PassText and graphical password concepts. The PassText password concept is used to generate a unique strong password with the help of password images (graphical passwords); the user needs to remember only the password images instead of text. This paper analyzes the security issues with the help of the Scyther tool.
The order-weighted averaging operator is commonly used in decision-making processes, where its powerful yet simple nature allows output from several data sources to be aggregated into a meaningful result. Key to the order-weighted averaging operator is the assignment of weights. Several approaches have been suggested and their properties examined for decision-making, except in the area of heuristic-search-based assignment. In this paper, weight assignment is experimentally examined for supervised classification tasks using a fuzzy pattern classifier with order-weighted averaging, using maximum entropy and two separate heuristic search approaches, namely a genetic algorithm and pattern search. The experiment is conducted using a sample of 20 UCI data sets. Results are discussed, and recommendations are made for when and how to apply heuristic search for this type of classifier.
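For readers unfamiliar with the operator, here is a minimal order-weighted averaging implementation (the standard definition, not tied to this paper's weight-search experiments):

    import numpy as np

    def owa(values, weights):
        # Sort the inputs in descending order, then take the dot product
        # with non-negative weights that sum to one.
        v = np.sort(np.asarray(values, dtype=float))[::-1]
        w = np.asarray(weights, dtype=float)
        assert v.shape == w.shape and np.isclose(w.sum(), 1.0)
        return float(v @ w)

    print(owa([0.2, 0.9, 0.5], [0.5, 0.3, 0.2]))  # 0.64, biased to large inputs

Weights (1, 0, ..., 0) recover the maximum, (0, ..., 0, 1) the minimum, and uniform weights the plain mean; the heuristic searches in the paper explore the space between these extremes.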
The voluminous amounts of data generated by web applications, social networking, and online auction sites are highly unstructured in nature. To store and analyze such data, the traditional approach of using relational databases is not suitable. This paves the path towards the acceptance of emerging NoSQL databases as an efficient means of dealing with big data. This work presents an efficient NoSQL data store and proves its effectiveness by analyzing the results in terms of efficient querying: evaluating performance on read and write operations for simple and complex queries, and on the storage and retrieval of an increasing number of records. The results show that the chosen NoSQL datastore, Cassandra, is more efficient than the relational database MySQL and the other NoSQL databases HBase and MongoDB, leading to cost-saving benefits for any organization willing to use NoSQL Cassandra to manage big data under heavy loads.
Wireless Sensor Networks (WSNs) consist of a large number of densely populated, spatially distributed, configurable sensors that meet the requirements of industrial, military, precision agriculture, and health monitoring applications with ease of implementation and low maintenance cost. The transmission of data requires both energy-aware and quality-of-service (QoS)-aware routing to ensure efficient use of the sensors and effective access to the gathered information. In turn, the routing technique must provide reliable transmission of data without compromising the QoS requirements of applications. We address the different categories of routing protocols, along with the range of QoS metrics they aim to achieve, to improve the performance of WSN applications.
Secure transmission of data is the most challenging and critical issue in wireless sensor networks (WSNs). QoS parameters, such as energy efficiency, power consumption, and end-to-end delay, play an important role in system performance. Clustering is an effective approach that can enhance system and network performance. In a Clustered Wireless Sensor Network (CWSN), malicious attacks that harm the network can occur, affecting the security of the CWSN. One of the major attacks that may occur in a CWSN is the sinkhole attack, which needs to be detected. An agent-based protocol is used to detect and prevent this type of attack, which helps improve the performance of the network, including the QoS parameters. An intrusion detection system (IDS) is configured using the agent-based protocol to detect whether an intrusion occurs.
In wireless sensor networks, estimating the probability of detection is a challenging task. Various factors, such as the coverage area, node density, and efficiency of the sensors, affect the performance. In this paper, we propose efficient intruder detection algorithms for a wireless sensor network. We develop a mathematical expression relating the sensing range, node density, and other required parameters, which is useful for any designer to predict the detection probability. We use a homogeneous network environment with a uniform distribution of sensor nodes to find the detection probability. We evaluated the proposed detection probability methods in both single-sensing and multi-sensing scenarios and observed that the results are satisfactory.
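The abstract does not reproduce the derived expression; under its stated assumptions (a homogeneous network of N sensors with sensing range r deployed uniformly at random over an area A), the standard forms for single-sensing and k-sensing (multi-sensing) detection of a point intruder are:

    P_1 = 1 - \left(1 - \frac{\pi r^2}{A}\right)^{N} \approx 1 - e^{-\lambda \pi r^2},
    \qquad \lambda = \frac{N}{A},

    P_k = 1 - \sum_{i=0}^{k-1} \binom{N}{i}\, p^{i} (1 - p)^{N-i},
    \qquad p = \frac{\pi r^2}{A},

where p is the probability that a single sensor covers the intruder; these forms illustrate how the detection probability grows with node density and sensing range.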
Many algorithms exist for mining association rules. Since databases in the real world are subject to frequent changes, the algorithms need to be rerun to generate association rules that reflect record insertions. This causes overhead, as the algorithm needs to scan the entire database every time and repeat the process; incremental updating of mined association rules is challenging. Recently, Deng and Lv proposed an algorithm named FIN (Frequent Itemsets using Nodesets) for the fast mining of frequent itemsets, built on a data structure named Nodesets that consumes less memory. In this paper, we propose an algorithm named FIN_INCRE, based on FIN, which updates mined association rules without reinventing the wheel. When new records are inserted, only the nodes in the data structure are updated, adaptively, using the concept of pre-large itemsets, which effectively avoids re-scanning the original data set. We built a prototype application to demonstrate the proof of concept. The empirical results reveal that the proposed algorithm improves performance significantly.
The majority of the data available for knowledge discovery and information retrieval is prone to identity disclosure. The major route to disclosing identity is exploring the pattern of attributes involved in data formation. The existing benchmarking models anonymize the data by generalizing, deleting the sensitive attributes, or adding noise to the data. None of these approaches guarantees optimality and accuracy in the results obtained from the mining models applied to that data set. The deviation in results often causes falsified decision-making, which is unconditionally unacceptable in certain domains like health mining. To fill this gap, we propose a novel hybridization of feature-set partitioning and data restructuring to achieve pattern anonymization. The model is particularly aimed at restructuring the data for supervised learning. To the best of our knowledge, pattern anonymization is the first of its kind in attempting to anonymize patterns rather than individual attributes. The experimental results also indicate the robustness and scalability of supervised learning on the restructured data.
E-commerce has changed the way users select and purchase items. Most e-commerce applications deployed on the Web today use recommender systems to suggest items to online users based on their earlier purchases. The tourist recommender systems discussed in the literature so far either recommend the best routes from one city to another, taking in tourist spots and scenic sites along the way, or identify destination tourist spots from images or descriptions supplied as input. In this paper, we propose an architecture for a tourist recommender system together with a novel scheduling algorithm that prepares visit schedules within a city based on user requirements, and we implement it using the Hadoop framework.
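The paper's scheduler runs on Hadoop and its exact objective is not given in this abstract, so the following is only a single-machine, greedy stand-in: repeatedly visit the nearest unvisited spot while the remaining time budget allows travel plus the stay. Coordinates, speed, and stay times are invented.

```python
from math import hypot

def greedy_schedule(spots, start, time_budget, speed=0.5):
    """Greedy nearest-neighbour visit schedule within a time budget.
    spots: dict name -> (x, y, visit_minutes); travel time = distance / speed."""
    remaining = dict(spots)
    pos, clock, plan = start, 0.0, []
    while remaining:
        name, (x, y, stay) = min(
            remaining.items(),
            key=lambda kv: hypot(kv[1][0] - pos[0], kv[1][1] - pos[1]))
        travel = hypot(x - pos[0], y - pos[1]) / speed
        if clock + travel + stay > time_budget:
            break                       # nearest spot no longer fits the budget
        clock += travel + stay
        plan.append((name, round(clock, 1)))
        pos = (x, y)
        del remaining[name]
    return plan

if __name__ == "__main__":
    spots = {"fort": (2, 3, 60), "museum": (5, 1, 45), "lake": (8, 7, 30)}
    print(greedy_schedule(spots, start=(0, 0), time_budget=240))
```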
High-frequency radiation from mobile towers is known to have adverse effects on the health of human beings, livestock, and birds. The rapid development of mobile communication has led to the installation of mobile towers in densely inhabited areas, and the clusters of towers put up by different network providers have increased radiation levels. In certain areas, radiation may have reached levels high enough to cause long-term effects. This paper aims to map the radiation levels in selected areas of possible vulnerability and to locate high-risk areas along with the measured radiation levels.
Nowadays, it is common practice to cultivate several crops in a single field. A conventional watering system does not work well with multiple crops, because each crop requires a different level of watering; the same problem arises on land with multiple soil types. When moisture sensors are used in this setting, many sensors must be placed at different crop locations and their data analyzed to water the plants. Collecting the data from the various sensors, processing them, and turning the sprinklers or other watering system on or off accordingly is a complex process, and each moisture sensor requires its own power source (normally a battery). Here, we propose using a single robot equipped with a moisture sensor and GPS. The robot moves around the field, testing the moisture every 10 m; based on the moisture data, the sprinkler system in that region is switched on or off, as sketched below. Using weather data provided by the meteorological department, watering of the crop can be adjusted.
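A minimal sketch of the per-sample decision the robot would make at each 10 m point. The crop thresholds and the rain cut-off are illustrative assumptions; the paper leaves exact values to agronomic practice and the weather feed.

```python
# Per-crop moisture thresholds (percent) -- illustrative values only.
CROP_THRESHOLDS = {"paddy": 60.0, "maize": 35.0, "groundnut": 30.0}

def sprinkler_command(crop, moisture_pct, rain_forecast_mm=0.0):
    """Decide the sprinkler state for one sampling point from the robot,
    adjusted by the meteorological forecast for the coming period."""
    threshold = CROP_THRESHOLDS[crop]
    if rain_forecast_mm >= 5.0:      # expected rain covers the deficit (assumed cut-off)
        return "OFF"
    return "ON" if moisture_pct < threshold else "OFF"

if __name__ == "__main__":
    print(sprinkler_command("maize", moisture_pct=28.0))                       # ON
    print(sprinkler_command("paddy", moisture_pct=55.0, rain_forecast_mm=8.0))  # OFF
```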
In this paper, a new algorithm for image encryption using the SCAN method is proposed. Using the SCAN language, a wide range of scanning paths can be generated based on its spatial accessing methodology. The proposed algorithm is implemented on various gray-scale and color images, and the experimental results and security analysis indicate its advantages. The original image can be reproduced without any loss of information. The algorithm is simple and fast compared with other recent approaches, and it has passed all the security requirements, making it suitable for a wide range of applications. The paper presents the encryption and decryption processes of the proposed algorithm; the implementation is done in MATLAB and tested on various gray-scale and color images.
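As a toy illustration of the underlying idea — pixel rearrangement along a scan path, which is lossless and exactly invertible — the sketch below (in Python rather than the paper's MATLAB) uses a single row-zigzag path. The actual SCAN language composes whole families of key-selected paths hierarchically; a single fixed path like this one offers no real security.

```python
import numpy as np

def zigzag_order(h, w):
    """One simple scan path (row zigzag) as a flat index permutation."""
    order = []
    for r in range(h):
        cols = range(w) if r % 2 == 0 else range(w - 1, -1, -1)
        order.extend(r * w + c for c in cols)
    return np.array(order)

def scan_encrypt(img, order):
    """Read pixels along the scan path and write them out sequentially."""
    return img.reshape(-1)[order].reshape(img.shape)

def scan_decrypt(img, order):
    """Invert the permutation to restore the original pixel positions."""
    flat = img.reshape(-1)
    out = np.empty_like(flat)
    out[order] = flat
    return out.reshape(img.shape)

if __name__ == "__main__":
    img = np.arange(16, dtype=np.uint8).reshape(4, 4)
    order = zigzag_order(4, 4)
    enc = scan_encrypt(img, order)
    assert np.array_equal(scan_decrypt(enc, order), img)  # lossless round trip
```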
Integrated pest management (IPM) combines different techniques to increase crop production in an eco-friendly manner. Minimizing the use of pesticides through IPM reduces the risk of human disease as well as environmental risks. Various computerized systems are used for IPM, with agricultural experts providing their pest-management knowledge as input for decision-making. If IPM knowledge is represented as an ontology, it can be shared by heterogeneous agricultural computerized systems. This paper presents a tool, named IPMOntoDeveloper, for developing an IPM ontology from an upper IPM ontology and a domain-specific crop IPM ontology. IPM ontologies developed by different agricultural experts can be integrated into one to enrich the knowledge base of IPM practices for a specific crop; for this, the paper presents a system named IPMOntoShare, which merges IPM ontologies from various experts by combining several ontology-matching approaches, including name matching and structure matching.
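A minimal sketch of the name-matching stage of such a merge (structure matching is omitted): align concepts from two expert ontologies by normalized label similarity, unify matched pairs, and keep the unmatched concepts from both sides. The similarity measure, threshold, and concept labels are illustrative assumptions, not IPMOntoShare's actual method.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Normalized string similarity between two concept labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def merge_concepts(onto_a, onto_b, threshold=0.85):
    """Merge two concept lists: b-concepts similar enough to an a-concept are
    treated as the same concept; the rest are added to the merged ontology."""
    matched = set()
    for b in onto_b:
        best = max(onto_a, key=lambda a: name_similarity(a, b), default=None)
        if best is not None and name_similarity(best, b) >= threshold:
            matched.add(b)
    return list(onto_a) + [b for b in onto_b if b not in matched]

if __name__ == "__main__":
    expert_a = ["StemBorer", "NeemSpray", "PheromoneTrap"]
    expert_b = ["stem_borer", "LightTrap"]          # 'stem_borer' unifies with 'StemBorer'
    print(merge_concepts(expert_a, expert_b))
```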
Word-sense disambiguation is an open challenge in natural language processing. It is the process of identifying the actual meaning of a word from the senses of the surrounding words in the context in which it is used. Knowledge-based approaches are becoming more popular than other approaches to word-sense disambiguation: they do not require large volumes of training data and instead use lexical knowledge bases to construct undirected graphs. In this paper, traditional PageRank algorithms and random-walk approaches are compared extensively.
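A minimal sketch of the graph-based idea being compared: build an undirected graph whose nodes are candidate senses, connect senses that are related in the knowledge base, run PageRank, and keep the highest-scoring sense per word. The sense labels below are toy stand-ins, not real WordNet synsets.

```python
import networkx as nx

# Toy sense graph: edges encode relatedness in a lexical knowledge base.
G = nx.Graph()
G.add_edges_from([
    ("bank.financial", "deposit.money"),
    ("bank.financial", "interest.fee"),
    ("bank.river", "water.body"),
    ("deposit.money", "interest.fee"),
])
scores = nx.pagerank(G)

# Pick the best-ranked sense for each ambiguous word.
candidates = {"bank": ["bank.financial", "bank.river"]}
for word, senses in candidates.items():
    best = max(senses, key=scores.get)
    print(word, "->", best)   # 'bank.financial' wins in this toy graph
```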
The Internet has made everyday life more sophisticated. The cloud is an Internet-based technology in which data are location-independent, providing on-demand service to the customer on a pay-per-use model. Multi-tenancy is an essential characteristic of the cloud, and it has motivated many researchers to contribute work in this area. A scaffold is a platform on which programmers can specify and integrate the application and the database, and a pattern is a reusable component used to develop applications. The NoSQL document data model has the advantage of storing values in tiered storage. In this contribution, a pattern language for the scaffold is proposed, which will be useful to researchers developing malleable multi-tenant SaaS applications.
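One common multi-tenant document pattern, sketched minimally below: every document embeds its tenant identifier, and a thin scaffold layer scopes every query by tenant. This is a generic illustration, not the paper's pattern language; the class, collection, and field names are invented.

```python
class TenantScopedStore:
    """Stand-in for a NoSQL document collection with tenant isolation."""

    def __init__(self):
        self._docs = []

    def insert(self, tenant_id, doc):
        # Every stored document carries its tenant id.
        self._docs.append({**doc, "tenant_id": tenant_id})

    def find(self, tenant_id, **filters):
        # The scaffold injects tenant_id into every query, so one tenant
        # can never read another tenant's documents.
        return [d for d in self._docs
                if d["tenant_id"] == tenant_id
                and all(d.get(k) == v for k, v in filters.items())]

if __name__ == "__main__":
    store = TenantScopedStore()
    store.insert("acme", {"order": 1, "status": "open"})
    store.insert("globex", {"order": 2, "status": "open"})
    print(store.find("acme", status="open"))   # only acme's document
```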
This paper presents the modelling of a three-phase asynchronous motor using vector control. This control strategy is preferred over other methodologies because it gives access to all the internal variables required for machine operation, and it can also be used without estimators. The Park, Clarke, and Krause transformations and their inverses are used in the model. The modelling is done on the MATLAB/SIMULINK platform by initializing the variables and parameters of the three-phase induction motor, and the speed and torque outputs are verified.
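The Clarke and Park transformations at the heart of vector control are standard, so a small numeric check is easy to sketch (in Python rather than SIMULINK): the amplitude-invariant Clarke transform maps balanced three-phase currents to the stationary alpha-beta frame, and the Park transform rotates them into the d-q frame, where a balanced set becomes a constant vector.

```python
import numpy as np

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke transform: phase currents -> (alpha, beta)."""
    alpha = (2 / 3) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (2 / 3) * (np.sqrt(3) / 2) * (ib - ic)
    return alpha, beta

def park(alpha, beta, theta):
    """Park transform: stationary (alpha, beta) -> rotating (d, q) at angle theta."""
    d = alpha * np.cos(theta) + beta * np.sin(theta)
    q = -alpha * np.sin(theta) + beta * np.cos(theta)
    return d, q

if __name__ == "__main__":
    t, w = 0.01, 2 * np.pi * 50              # 50 Hz supply, one sample instant
    ia = np.cos(w * t)
    ib = np.cos(w * t - 2 * np.pi / 3)
    ic = np.cos(w * t + 2 * np.pi / 3)
    alpha, beta = clarke(ia, ib, ic)
    d, q = park(alpha, beta, w * t)          # balanced set -> d ~ 1, q ~ 0
    print(round(d, 3), round(q, 3))
```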
Classification of handwritten and printed text in pre-printed documents enhances the performance of optical character recognition technologies. The objective of the work presented is to devise an approach for automatic classification of printed and handwritten text at the word level, as inherently found in pre-printed documents. The proposed approach classifies printed and handwritten words in Telugu pre-printed documents in three stages: stage one computes features from the segmented words, stage two determines a text discrimination coefficient, and stage three classifies the text as printed or handwritten using a decision model. Statistical and geometrical moment features are computed for the text block under consideration and then used to determine the text discrimination coefficient. Experimental results are promising and robust, with an accuracy of around 98.2 %.
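The paper's actual features, coefficient, and decision model are not detailed in this abstract, so the sketch below is only a placeholder pipeline of the same shape: compute second-order central moments of a binarized word block, reduce them to a single "discrimination coefficient", and threshold it. The specific coefficient and threshold here are invented.

```python
import numpy as np

def geometric_moments(block):
    """Second-order central moments of a binarized word image (1 = ink)."""
    ys, xs = np.nonzero(block)
    if xs.size == 0:
        return 0.0, 0.0
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()   # horizontal spread
    mu02 = ((ys - cy) ** 2).mean()   # vertical spread
    return mu20, mu02

def discrimination_coefficient(block):
    """Toy stand-in for a text discrimination coefficient: printed words tend
    to have a tighter, more regular vertical spread than handwriting."""
    mu20, mu02 = geometric_moments(block)
    return mu02 / (mu20 + 1e-9)

def classify(block, threshold=0.35):      # invented threshold
    return "printed" if discrimination_coefficient(block) < threshold else "handwritten"

if __name__ == "__main__":
    block = np.zeros((20, 60), dtype=np.uint8)
    block[8:12, 5:55] = 1                 # a flat, line-like "printed" word
    print(classify(block))                # printed
```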
Thermal power plants are controlled and monitored in real time using state-of-the-art distributed control systems (DCS). For both real-time control and online monitoring of these plants, thermodynamic properties of water/steam, along with their partial derivatives, are required for process computation and optimization. In particular, during data reconciliation and process optimization, the equipment models appear as non-linear constraints that require Jacobians/Hessians of the thermodynamic properties. These properties are complex functions (Gibbs and Helmholtz functions) that are called repeatedly during non-linear optimization, and their Jacobians/Hessians are numerically approximated with further repeated calls to the same functions. Legacy approaches approximate these functions using higher-order polynomials or look-up tables, which suffer from high computational time or large memory requirements. Artificial neural networks (ANNs) can overcome these limitations, and this paper demonstrates that back-propagation neural networks (BPNs) are an effective means of improving computational performance: computation time is reduced by a factor of nearly four.
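A minimal sketch of the approach under stated assumptions: train a small back-propagation network on an invented smooth surrogate h(P, T) — a stand-in for a steam property, not the IAPWS formulation the paper would use — then differentiate the cheap network numerically for the Jacobian entries an optimizer requests.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
P = rng.uniform(1, 200, 5000)               # pressure, bar (illustrative range)
T = rng.uniform(300, 800, 5000)             # temperature, K (illustrative range)
h = 2000 + 2.1 * T + 0.05 * P * np.log(T)   # INVENTED surrogate, not real steam data

X = np.column_stack([P, T])
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20, 20),
                                 max_iter=2000, random_state=0))
net.fit(X, h)

def dh_dT(p, t, eps=0.5):
    """Central-difference partial derivative of the ANN surrogate w.r.t. T --
    the kind of Jacobian entry a non-linear optimizer requests repeatedly."""
    hi = net.predict([[p, t + eps]])[0]
    lo = net.predict([[p, t - eps]])[0]
    return (hi - lo) / (2 * eps)

print(net.predict([[100.0, 600.0]])[0], dh_dT(100.0, 600.0))
```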
Synthetic aperture radar (SAR) platforms have to process an increasingly large number of complex floating-point operations while meeting hard real-time deadlines, yet real-time use of SAR is severely restricted by the computation time of image formation. A classical way to reduce this computation time for real-time application is multi-processing. The authors have developed and tested a parallel algorithm for SAR image formation, and the results are presented in this paper.
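A minimal sketch of the classical data-parallel idea, not the authors' algorithm: the per-line stage of image formation (reduced here to FFT-based range compression with a toy reference chirp) is independent across echo lines, so lines can be distributed across worker processes.

```python
import numpy as np
from multiprocessing import Pool

CHIRP = np.exp(1j * np.pi * 0.1 * np.arange(64) ** 2)  # toy reference chirp

def compress_line(line):
    """Range compression of one echo line via FFT-based matched filtering."""
    ref = np.fft.fft(CHIRP, line.size)
    return np.fft.ifft(np.fft.fft(line) * np.conj(ref))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy raw data: 128 echo lines of 1024 complex samples each.
    raw = rng.standard_normal((128, 1024)) + 1j * rng.standard_normal((128, 1024))
    with Pool(processes=4) as pool:
        image = np.array(pool.map(compress_line, raw))   # one line per task
    print(image.shape)   # (128, 1024)
```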