Figure 3 - uploaded by Jabar H. Yousif
The short vowels in Arabic text

Source publication
Article
Full-text available
This paper aims to explore the implementation of a part-of-speech (POS) tagger for the Arabic language using neural computing. Arabic is one of the most important languages in the world: more than 422 million people use it as their primary medium for writing and speaking. Part-of-speech tagging is one crucial stage for most natural...
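As a rough illustration of what a neural POS tagger does, the sketch below trains a tiny feed-forward classifier on a hand-labelled two-sentence corpus. The corpus, tag set, and use of scikit-learn's MLPClassifier are illustrative assumptions, not the architecture or data used in the paper.

```python
# Hypothetical toy example: a tiny feed-forward tagger with scikit-learn.
from sklearn.feature_extraction import DictVectorizer
from sklearn.neural_network import MLPClassifier

# Hand-labelled toy corpus of (word, tag) pairs; a real system would use a
# large annotated Arabic treebank and the paper's own tag set.
corpus = [
    [("الولد", "NOUN"), ("يقرأ", "VERB"), ("الكتاب", "NOUN")],
    [("ذهب", "VERB"), ("الرجل", "NOUN"), ("إلى", "PREP"), ("السوق", "NOUN")],
]

def features(sentence, i):
    """Simple context-window features for the word at position i."""
    word = sentence[i][0]
    return {
        "word": word,
        "prefix": word[:2],
        "suffix": word[-2:],
        "prev": sentence[i - 1][0] if i > 0 else "<s>",
        "next": sentence[i + 1][0] if i < len(sentence) - 1 else "</s>",
    }

X_dicts, y = [], []
for sent in corpus:
    for i, (_, tag) in enumerate(sent):
        X_dicts.append(features(sent, i))
        y.append(tag)

vec = DictVectorizer()
X = vec.fit_transform(X_dicts)

# One small hidden layer stands in for the paper's neural architecture.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(vec.transform([features(corpus[0], 0)])))  # e.g. ['NOUN']
```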

Contexts in source publication

Context 1
... There are three short vowels in Arabic text, called fatHa, Damma, and kasra, which make Arabic more complicated to write and understand than other languages. The diacritical signs are also nominal, initial, or significant, as shown in Figure 3. ...
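The context above refers to the three short-vowel diacritics (fatHa, Damma, kasra) shown in Figure 3. The small sketch below relies only on standard Unicode code points, not on anything from the paper, to list and strip those diacritics from a fully vowelled word.

```python
# Standard Unicode code points for the three Arabic short vowels; not taken
# from the paper itself.
import unicodedata

SHORT_VOWELS = {
    "\u064E": "fatHa",
    "\u064F": "Damma",
    "\u0650": "kasra",
}

def strip_short_vowels(text: str) -> str:
    """Remove the three short-vowel diacritics, keeping base letters."""
    return "".join(ch for ch in text if ch not in SHORT_VOWELS)

word = "كَتَبَ"  # "kataba" (he wrote), fully vowelled with fatHa marks
for ch in word:
    print(f"U+{ord(ch):04X} {unicodedata.name(ch, 'UNKNOWN')} "
          f"{SHORT_VOWELS.get(ch, '')}")

print(strip_short_vowels(word))  # -> كتب (undiacritized form)
```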

Similar publications

Preprint
Full-text available
Attribution methods assess the contribution of inputs (e.g., words) to the model prediction. One way to do so is erasure: a subset of inputs is considered irrelevant if it can be removed without affecting the model prediction. Despite its conceptual simplicity, erasure is not commonly used in practice. First, the objective is generally intractable,...
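As a minimal illustration of the erasure idea summarised above, the sketch below scores each word by how much a model's output changes when that word is removed. The keyword-counting "model" is a stand-in chosen for brevity, not the preprint's actual setup, and single-token erasure is the simplest case of the subset removal the abstract describes.

```python
# Toy erasure attribution: score each word by the change in a (stand-in)
# model's output when that word is removed.
def model_score(tokens):
    """Placeholder 'model': fraction of positive keywords (illustration only)."""
    positive = {"good", "great", "excellent"}
    if not tokens:
        return 0.0
    return sum(t in positive for t in tokens) / len(tokens)

def erasure_attribution(tokens):
    """Attribute the prediction to each token via single-token erasure."""
    base = model_score(tokens)
    return [
        (tokens[i], base - model_score(tokens[:i] + tokens[i + 1:]))
        for i in range(len(tokens))
    ]

sentence = "the movie was great but too long".split()
for token, score in erasure_attribution(sentence):
    print(f"{token:>6}: {score:+.3f}")  # 'great' gets the largest drop
```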

Citations

... The third generation of ANNs, also known as deep learning, emerged in the 2000s. It is characterized by multilayer deep neural networks, which can learn complex patterns from large datasets [13,14]. Deep learning involves using several hidden layers of particular types, such as convolutional, max-pooling, dense, and other specialized layers. ...
Article
In the last few years, neural networks have become more common in different areas due to their ability to learn intricate patterns and provide precise predictions. Nonetheless, creating an efficient neural network model is a difficult task that demands careful consideration of multiple factors, such as architecture, optimization method, and regularization technique. This paper aims to comprehensively overview the state of the art in artificial neural network (ANN) generation and highlight key challenges and opportunities in machine learning applications. It provides a critical analysis of current neural network model design methodologies, focusing on the strengths and weaknesses of different approaches. It also explores the use of different deep neural networks (DNNs) in image recognition, natural language processing, and time series analysis. In addition, it explores the advantages of selecting optimal values for various components of an artificial neural network, including the number of input/output layers, the number of hidden layers, the type of activation function, the number of epochs, and the model type. Setting these components to their ideal values can help enhance the model's overall performance and generalization. Furthermore, it identifies some common pitfalls and limitations of existing design methodologies, such as overfitting, lack of interpretability, and computational complexity. Finally, it proposes some directions for future research, such as developing more efficient and interpretable neural network architectures, improving the scalability of training algorithms, and exploring the potential of new paradigms, such as Spiking Neural Networks, quantum neural networks, and neuromorphic computing.
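To make the layer vocabulary in the citing context concrete, the sketch below stacks the named layer types (convolutional, max-pooling, dense) into a small classifier, assuming TensorFlow/Keras is available; the input shape and hyperparameters are arbitrary examples, not values from the cited work.

```python
# Illustrative layer stack, assuming TensorFlow/Keras; shapes and sizes are
# arbitrary examples, not values from the cited paper.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),        # e.g. a small grayscale image
    layers.Conv2D(32, 3, activation="relu"),  # convolutional feature extraction
    layers.MaxPooling2D(),                    # max-pooling downsamples feature maps
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),      # dense (fully connected) layer
    layers.Dense(10, activation="softmax"),   # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```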
... The third generation of ANNs, also known as deep learning, emerged in the 2000s. It is characterized by multilayer deep neural networks, which can learn complex patterns from large datasets [13,14]. This was made possible by the availability of large amounts of labeled data and advances in computing power, which made it feasible to train models with hundreds of layers and billions of parameters. ...
Preprint
Full-text available
In recent years, neural networks have been increasingly deployed in various fields to learn complex patterns and make accurate predictions. However, designing an effective neural network model is a challenging task that requires careful consideration of various factors, including architecture, optimization method, and regularization technique. This paper aims to comprehensively overview the state of the art in artificial neural network (ANN) generation and highlight key challenges and opportunities in machine learning applications. It provides a critical analysis of current neural network model design methodologies, focusing on the strengths and weaknesses of different approaches. It also explores the use of different learning approaches, including convolutional neural networks (CNNs), deep neural networks (DNNs), and recurrent neural networks (RNNs), in image recognition, natural language processing, and time series analysis. Besides, it discusses the benefits of choosing ideal values for the different components of an ANN, such as the number of input/output layers, the number of hidden layers, the activation function type, the number of epochs, and the model type, which help improve model performance and generalization. Furthermore, it identifies some common pitfalls and limitations of existing design methodologies, such as overfitting, lack of interpretability, and computational complexity. Finally, it proposes some directions for future research, such as developing more efficient and interpretable neural network architectures, improving the scalability of training algorithms, and exploring the potential of new paradigms, such as Spiking Neural Networks, quantum neural networks, and neuromorphic computing.
... The sigmoid function is one of the common transfer functions used in neural networks; it maps the input to a value between 0 and 1 [79,80]. The sigmoid function is commonly used in classification tasks, producing outputs that can be interpreted as probabilities. However, it can cause neural networks to converge slowly during training, or not at all, due to the vanishing gradient problem, in which the gradients of the loss function become very small. ...
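The vanishing-gradient point in the context above can be checked numerically: the sigmoid derivative never exceeds 0.25, so backpropagating through many sigmoid layers multiplies together factors no larger than 0.25. A short NumPy sketch, for illustration only:

```python
# Numeric illustration only: sigmoid values and derivatives with NumPy.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print("sigmoid(x) :", np.round(sigmoid(x), 3))       # all values lie in (0, 1)
print("sigmoid'(x):", np.round(sigmoid_grad(x), 3))  # peaks at 0.25 when x = 0

# Backpropagation through n sigmoid layers multiplies n such derivatives,
# so even the best case (0.25 per layer) decays exponentially with depth.
for n in (1, 5, 10, 20):
    print(f"best-case gradient factor through {n:2d} layers: {0.25 ** n:.2e}")
```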
Preprint
Full-text available
In recent years, neural networks have been increasingly deployed in various fields to learn complex patterns and make accurate predictions. However, designing an effective neural network model is a challenging task that requires careful consideration of various factors, including architecture, optimization method, and regularization technique. This paper aims to comprehensively overview the state of the art in artificial neural network (ANN) generation and highlight key challenges and opportunities in machine learning applications. It provides a critical analysis of current neural network model design methodologies, focusing on the strengths and weaknesses of different approaches. It also explores the use of different learning approaches, including convolutional neural networks (CNNs), deep neural networks (DNNs), and recurrent neural networks (RNNs), in image recognition, natural language processing, and time series analysis. Besides, it discusses the benefits of choosing ideal values for the different components of an ANN, such as the number of input/output layers, the number of hidden layers, the activation function type, the number of epochs, and the model type, which help improve model performance and generalization. Furthermore, it identifies some common pitfalls and limitations of existing design methodologies, such as overfitting, lack of interpretability, and computational complexity. Finally, it proposes some directions for future research, such as developing more efficient and interpretable neural network architectures, improving the scalability of training algorithms, and exploring the potential of new paradigms, such as Spiking Neural Networks, quantum neural networks, and neuromorphic computing.
... A sequence or collection of the most frequently occurring consecutive words is fed into an RNN. It analyses the data by finding the words that occur most frequently and creates a model that predicts the next or upcoming word in the sentence, i.e., it auto-fills the most probable data [13]. Recurrent neural networks are preferred because a feed-forward neural network considers only the current input and cannot memorize previous outputs. ...
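As a toy version of the auto-fill behaviour described in the context above, the sketch below trains a small recurrent network on a three-sentence corpus and predicts the next word after a two-word context, assuming TensorFlow/Keras is available; the corpus and hyperparameters are made up for illustration.

```python
# Toy next-word prediction with a SimpleRNN, assuming TensorFlow/Keras.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat ate the fish",
]
index = {w: i + 1 for i, w in enumerate(sorted({w for s in corpus for w in s.split()}))}
inv = {i: w for w, i in index.items()}

context_len = 2
X, y = [], []
for line in corpus:
    ids = [index[w] for w in line.split()]
    for i in range(context_len, len(ids)):
        X.append(ids[i - context_len:i])   # the two preceding words
        y.append(ids[i])                   # the word that follows them
X, y = np.array(X), np.array(y)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(context_len,), dtype="int32"),
    layers.Embedding(len(index) + 1, 16),  # word embeddings (index 0 reserved)
    layers.SimpleRNN(32),                  # remembers the preceding words
    layers.Dense(len(index) + 1, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=300, verbose=0)

probe = np.array([[index["sat"], index["on"]]])
pred = model.predict(probe, verbose=0)[0]
print("next word:", inv.get(int(np.argmax(pred)), "<pad>"))  # most probable: "the"
```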
... A number of approaches have been used to address POS tagging based on supervised and unsupervised methods, as shown in Figure 1. POS tagging has been applied to multiple languages using various methods, such as rule-based models [3,4], statistical models [5,6,7], neural network models [8,9,10,11], and hybrid models [12,13]. The Arabic language is an essential medium, as it is the official language of around 250 million people, in addition to Muslims of Arabic culture around the world [14]. ...
Article
Full-text available
The immense increase in the use of the Arabic language for transmitting information on the internet has made it a focus of researchers and commercial developers. Developing an efficient Arabic POS tagger is not an easy task due to the complexity of the language itself and the challenges of tagging disambiguation and unknown words. This paper aims to explore and review the use of part-of-speech taggers for Arabic text based on the Hidden Markov Model. It also discusses and explores the implementation of POS taggers for different languages. This study examined a group of research papers that applied part-of-speech tagging to Arabic using the Hidden Markov Model. The results show that a large number of researchers achieved high accuracy in classifying parts of speech correctly. Handi and Alshamsi achieved high accuracy rates of 97.6% and 97.4%, respectively. Kadim obtained an average accuracy of 75.38% for a Parallel Hidden Markov Model.
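For readers unfamiliar with HMM tagging, the sketch below shows the core idea the reviewed papers build on: tags are hidden states, words are emissions, and Viterbi decoding recovers the most likely tag sequence. The tiny hand-set probabilities are illustrative only, not estimates from any surveyed corpus.

```python
# Illustrative HMM tagging with hand-set probabilities and Viterbi decoding.
tags = ["NOUN", "VERB", "PREP"]
start_p = {"NOUN": 0.5, "VERB": 0.4, "PREP": 0.1}
trans_p = {
    "NOUN": {"NOUN": 0.2, "VERB": 0.5, "PREP": 0.3},
    "VERB": {"NOUN": 0.6, "VERB": 0.1, "PREP": 0.3},
    "PREP": {"NOUN": 0.8, "VERB": 0.1, "PREP": 0.1},
}
emit_p = {
    "NOUN": {"الولد": 0.4, "الكتاب": 0.4, "قرأ": 0.2},
    "VERB": {"قرأ": 0.8, "الولد": 0.1, "الكتاب": 0.1},
    "PREP": {"في": 1.0},
}

def viterbi(words):
    """Return the most probable tag sequence for the given words."""
    # V[t] maps each tag to (best probability, best path ending in that tag).
    V = [{t: (start_p[t] * emit_p[t].get(words[0], 1e-6), [t]) for t in tags}]
    for w in words[1:]:
        row = {}
        for t in tags:
            prob, path = max(
                (V[-1][prev][0] * trans_p[prev][t] * emit_p[t].get(w, 1e-6),
                 V[-1][prev][1] + [t])
                for prev in tags
            )
            row[t] = (prob, path)
        V.append(row)
    return max(V[-1].values())[1]

print(viterbi(["الولد", "قرأ", "الكتاب"]))  # expected: ['NOUN', 'VERB', 'NOUN']
```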
Chapter
There is a huge shortage of scientific research in the Arabic language, especially in natural language processing and in finding relationships between Arabic documents and other specific documents. This shortage is also reflected in engines for introducing Arabic books, abstracting topics, and summarizing content. Furthermore, there are some good samples of Arabic words inside the Quran, the holy book of Islam, which is divided into chapters (surah) and verses (ayat) of differing lengths and topics. This paper introduces a framework for both specialized researchers in Islamic studies and non-specialized researchers to find hidden relationships between Al-Fatiha, one of the most important chapters of the Holy Quran, and the remaining chapters, using hierarchical data modeling as an unsupervised learning technique. The framework can access tokens of the Holy Quran at different levels of granularity, such as the chapter (sura), part of the chapter (aya), words, word roots, aya roots, and aya meaning in the Arabic language. Moreover, we developed a number of statistics related to the Fatiha sura and the Holy Quran, such as distinct roots for every sura, word redundancy, root redundancy, a matrix report of roots by sura, and a matrix report showing the percentage of root similarity between every sura and the distinct roots of the whole Quran. Furthermore, we enhance the search engine results by adding search by roots and aya meaning for every sura. The results for sample queries show an accuracy improvement of more than 3% when using meaning and roots compared to the text of the Holy Quran only. Keywords: Holy Quran text analysis; Text mining; Arabic text mining; 1N form; 2N form; Data modeling; Holy Quran Tafseer text
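A minimal sketch of the hierarchical, unsupervised grouping described in this chapter: each chapter is represented by a vector of root frequencies, and chapters are clustered by similarity using SciPy. The counts below are invented toy data, not the authors' Quran corpus.

```python
# Toy hierarchical clustering of chapters by root-frequency vectors (SciPy).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

chapters = ["Al-Fatiha", "Chapter A", "Chapter B", "Chapter C"]
root_counts = np.array([
    [4, 2, 0, 1],   # Al-Fatiha: invented counts of a few shared roots
    [3, 2, 1, 1],   # a chapter with a similar root profile
    [0, 0, 5, 4],   # a chapter with a different root profile
    [1, 0, 4, 5],
], dtype=float)

# Agglomerative clustering on cosine distance between chapter vectors.
Z = linkage(root_counts, method="average", metric="cosine")
labels = fcluster(Z, t=2, criterion="maxclust")
for name, label in zip(chapters, labels):
    print(f"{name:10s} -> cluster {label}")
```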
Article
Full-text available
With the increasing role of technology in transferring information in our daily lives, the Arabic language has become the fourth most used language on the Internet. Therefore, to develop different information systems in the Arabic language, we should determine the syntax and semantics for creating text efficiently and accurately. Part-of-speech (POS) tagging is one of the primary methods employed to develop any language corpus. Each language has several tags applied in different applications, such as natural language processing (NLP), speech synthesis, and information extraction. One of the main benefits of adopting cloud computing services is that they offer lower cost and time for storing company data compared to traditional methods. This paper presents and deploys a cloud computing architecture for tagging Arabic text using a hybrid model, which helps reduce effort and cost. The results show an excellent accuracy rate in tagging Arabic text and a quick response time. Previous studies are compared based on relevant rating factors and achieved high accuracy, precision, and recall rates of more than 95%. The cloud computing tagger attained an accuracy of 99.2%.
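As a rough sketch of serving a tagger from the cloud, the snippet below exposes a placeholder tagger behind an HTTP endpoint using Flask; the /tag route, the dictionary tagger, and the deployment details are hypothetical and not the paper's hybrid architecture.

```python
# Hypothetical /tag endpoint with a placeholder dictionary tagger (Flask).
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder lookup; a real deployment would load the trained hybrid model.
TOY_TAGS = {"الولد": "NOUN", "يقرأ": "VERB", "الكتاب": "NOUN"}

@app.route("/tag", methods=["POST"])
def tag():
    text = request.get_json(force=True).get("text", "")
    tagged = [{"word": w, "tag": TOY_TAGS.get(w, "UNK")} for w in text.split()]
    return jsonify(tagged)

if __name__ == "__main__":
    # e.g. curl -X POST http://host:8080/tag -H "Content-Type: application/json" \
    #      -d '{"text": "الولد يقرأ الكتاب"}'
    app.run(host="0.0.0.0", port=8080)
```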