Similar publications

Preprint
Full-text available
The movement of large quantities of data during the training of a deep neural network presents immense challenges for machine learning workloads. To minimize this overhead, especially the movement and computation of gradient information, we introduce streaming batch principal component analysis as an update algorithm. Streaming batch principal c...
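As an illustration of the general idea only (a minimal sketch, not the authors' exact algorithm), an incremental PCA update can be expressed with scikit-learn's IncrementalPCA, whose partial_fit consumes one mini-batch at a time; the gradient stream below is a hypothetical placeholder:

import numpy as np
from sklearn.decomposition import IncrementalPCA

# Hypothetical stream of gradient mini-batches, each of shape (batch, dim).
rng = np.random.default_rng(0)
stream = (rng.normal(size=(64, 512)) for _ in range(10))

ipca = IncrementalPCA(n_components=16)   # keep a 16-dimensional subspace
for grad_batch in stream:
    ipca.partial_fit(grad_batch)         # streaming update of the principal components

compressed = ipca.transform(rng.normal(size=(64, 512)))  # project new gradients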
Preprint
Full-text available
This paper aims to reconstruct the initial condition of a hyperbolic equation with an unknown damping coefficient. Our approach involves approximating the hyperbolic equation's solution by its truncated Fourier expansion in the time domain and using a polynomial-exponential basis. This truncation process facilitates the elimination of the time vari...
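A minimal sketch of the truncation step, with notation assumed for illustration (the paper's particular basis is polynomial-exponential): the solution is approximated as $u(x,t) \approx \sum_{n=1}^{N} u_n(x)\,\Psi_n(t)$, and substituting this finite sum into the hyperbolic equation and matching coefficients yields a system in the spatial coefficients $u_n(x)$ alone, which is how the time variable drops out.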
Article
Full-text available
Graph embedding-based discriminative dimensionality reduction has attracted considerable attention over the past few decades. In constructing adjacency graphs for graph embedding, the weight functions are crucial. In practice, the weight function is usually chosen experimentally; so far, there is no theorem to guide the selection of weight functions. I...
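For context, a common heat-kernel choice of weight function in adjacency-graph construction, one of the experimentally tuned choices the abstract alludes to, is $W_{ij} = \exp\!\left(-\|x_i - x_j\|^2 / t\right)$ if $x_i$ and $x_j$ are neighbors and $W_{ij} = 0$ otherwise, where the bandwidth $t > 0$ is itself picked by hand; this is exactly the kind of unguided selection the paper addresses.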
Preprint
Full-text available
The objective of this work is to train noise-robust speaker embeddings for speaker diarisation. Speaker embeddings play a crucial role in the performance of diarisation systems, but they often capture spurious information such as noise and reverberation, adversely affecting performance. Our previous work proposed an auto-encoder-based dimensio...
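A minimal sketch of auto-encoder-based dimensionality reduction in PyTorch (an illustrative stand-in, not the authors' architecture; embedding_dim and bottleneck_dim are hypothetical):

import torch
import torch.nn as nn

class BottleneckAE(nn.Module):
    """Compress a speaker embedding to a low-dimensional code and reconstruct it."""
    def __init__(self, embedding_dim=512, bottleneck_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(embedding_dim, bottleneck_dim), nn.ReLU())
        self.decoder = nn.Linear(bottleneck_dim, embedding_dim)

    def forward(self, x):
        code = self.encoder(x)           # low-dimensional code, ideally free of noise
        return self.decoder(code), code

model = BottleneckAE()
emb = torch.randn(8, 512)                # a batch of hypothetical speaker embeddings
recon, code = model(emb)
loss = nn.functional.mse_loss(recon, emb)  # reconstruction objective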
Article
Full-text available
Classical linear discriminant analysis (LDA) is based on the squared Frobenius norm and hence is sensitive to outliers and noise. To improve the robustness of LDA, this paper introduces a capped l2,1-norm of a matrix, which employs a non-squared l2-norm and a "capped" operation, and further proposes a novel capped l2,1-norm linear discriminant analysis, c...
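For reference, one standard form of the capped $\ell_{2,1}$-norm, inferred from the abstract's description rather than taken from the paper, with $a_i$ the $i$-th row of $A$ and $\varepsilon > 0$ the cap, is $\|A\|_{c\ell_{2,1}} = \sum_i \min(\|a_i\|_2, \varepsilon)$. The non-squared $\ell_2$-norm limits the influence of any single sample, and the cap bounds the contribution of outliers outright.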

Citations

... The separation between the hyperplane and the data points is known as the margin. SVM tries to maximize this margin; the hyperplane achieving the maximum margin is the optimal hyperplane [11]. ...
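In standard notation (not taken from the citing article), this is made precise as follows: for a hyperplane $w^\top x + b = 0$ with the data scaled so that $\min_i y_i(w^\top x_i + b) = 1$, the margin equals $2/\|w\|$, so the optimal hyperplane solves $\min_{w,b} \tfrac{1}{2}\|w\|^2$ subject to $y_i(w^\top x_i + b) \ge 1$ for all $i$.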
Article
Full-text available
Real-world data has shown exponential growth in dimensionality. Examples of high-dimensional data include speech signals, sensor data, medical data, criminal data, and data used in recommendation systems for domains such as news, movies (Netflix), and e-commerce. To improve learning accuracy in machine learning and enhance mining performance, redundant and irrelevant features must be removed from such high-dimensional datasets. Many supervised and unsupervised dimension-reduction methodologies exist in the literature. The objective of this paper is to present the most prominent dimension-reduction methodologies and to highlight the advantages and disadvantages of these algorithms, which can serve as a starting point for beginners in this field.
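As a concrete illustration of the supervised/unsupervised split such a survey covers (a minimal scikit-learn sketch, not taken from the paper):

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_digits(return_X_y=True)       # 64-dimensional digit images

# Unsupervised: PCA ignores labels and keeps directions of maximal variance.
X_pca = PCA(n_components=2).fit_transform(X)

# Supervised: LDA uses labels and keeps directions that separate the classes.
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)           # (1797, 2) (1797, 2)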