Figure - available from: Soft Computing
a Sample image of the original Brahmi script from the 3rd century BCE, collected from the Department of Archeology (1993). b Brahmi consonants. c Brahmi vowels

Source publication
Article
Soft computing is an emerging technology that becomes more powerful with fuzzy logic through the choice of the degree of membership function. This work is an effort to extract the foreground characters from stone inscription images using fuzzy logic. Differentiating character pixels from the stone background is a challenging task. Moreover, several collection...
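Purely as an illustration of the idea described in this abstract, the sketch below assigns each grayscale pixel a fuzzy membership degree for "carved character" and keeps pixels whose degree exceeds a cutoff. The Gaussian-style membership function, its centre and spread values, the cutoff, and the file names are assumptions for the example, not the paper's actual design.

```python
# Minimal sketch (not the paper's exact method): classify pixels as character
# foreground via a fuzzy membership degree computed over grayscale intensity.
import cv2
import numpy as np

def foreground_membership(gray: np.ndarray, centre: float = 60.0,
                          spread: float = 40.0) -> np.ndarray:
    """Gaussian-style membership degree of 'dark carved stroke' per pixel.

    `centre` and `spread` are illustrative values, not tuned parameters.
    """
    g = gray.astype(np.float32)
    return np.exp(-((g - centre) ** 2) / (2.0 * spread ** 2))

def extract_foreground(path: str, cutoff: float = 0.5) -> np.ndarray:
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    mu = foreground_membership(gray)
    # Pixels whose membership degree exceeds the cutoff are kept as character.
    return (mu > cutoff).astype(np.uint8) * 255

if __name__ == "__main__":
    mask = extract_foreground("inscription.jpg")  # hypothetical input image
    cv2.imwrite("foreground_mask.png", mask)
```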

Citations

... The Scale Invariant Feature Transform is used in [9] to extract features and identify the decoration and text areas in ancient manuscripts. In [10,11], complex transforms such as shape and Hough transforms are used to extract features from epigraph images, and characters are classified using swarm optimization algorithms such as Group Search Optimization and Firefly optimization. In addition, [11] and [12] use median fuzzy filters to classify the characters. ...
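For orientation, a minimal Python/OpenCV sketch of two of the cited ingredients follows: SIFT keypoint extraction and plain median filtering. It is a generic illustration, not the median fuzzy filter or classification pipeline of [10-12], and the input file name is hypothetical.

```python
# Minimal sketch of two cited ingredients: SIFT keypoints and a plain
# (non-fuzzy) median filter; the cited median fuzzy filter is not reproduced.
import cv2

gray = cv2.imread("epigraph.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Median filtering as a simple noise-suppression stand-in.
smoothed = cv2.medianBlur(gray, 5)

# SIFT keypoints/descriptors, e.g. for locating text vs. decoration regions.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(smoothed, None)
print(f"{len(keypoints)} keypoints detected")
```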
Preprint
Computational epigraphy refers to the process of extracting text from stone inscriptions, followed by transliteration, interpretation, and attribution with the aid of computational methods. Traditional epigraphy methods are time consuming and tend to damage the stone inscriptions while the text is being extracted. Additionally, interpretation and attribution are subjective and can vary between epigraphers. Modern computational methods, however, can be used not only to extract text but also to interpret and attribute it in a robust way. We survey and document the existing computational methods that aid the above-mentioned tasks in epigraphy.
... The advent of image processing, data science, machine learning, and artificial intelligence presents a unique opportunity for the modern world to decipher and analyze ancient information. A substantial volume of Tamil historical documents, once stored in libraries, museums, and temples, is now being digitized and made accessible through digital platforms [4][5][6]. However, this digitization process is met with numerous challenges. ...
Article
In recent times, there has been a proactive effort by various institutions and organizations to preserve historic manuscripts as repositories of traditional knowledge and cultural heritage. Leveraging digital media and emerging technologies has proven to be an efficient way to safeguard these invaluable documents. Such technologies not only facilitate the extraction of knowledge from historic manuscripts but also hold promise for global applications. However, transforming inscribed stone artifacts into binary formats presents significant challenges due to angle distortion, subtle differences between foreground and background, background noise, variations in text size, and related issues. A pivotal aspect of effective image processing in preserving the rich information and wisdom encoded in stone inscriptions lies in employing appropriate pre-processing methods and techniques. This research paper places a special focus on elucidating various preprocessing techniques, encompassing resizing, grayscale conversion, enhancement of brightness and contrast, smoothening, noise removal, morphological operations, and thresholding. To comprehensively assess these techniques, we undertake a study involving stone inscription images extracted from the Tanjore Brihadeeswar Temple, dating back to the eleventh century during the reign of Raja Raja Chola. This choice is informed by the manifold challenges associated with image correction, such as distortion and blurring. We undertake an evaluation encompassing a diverse array of stone background structures, including types like flawless-bright-moderately legible, dark-illegible, flawless-bright-illegible, flawless-dull, flawless-irregular-moderate, highly impaired-dark-legible, highly impaired-irregular-illegible, impaired-dark-moderate, impaired-dull-moderately legible, impaired-dusky dark-moderate, and very impaired-dusky dark-legible. Subsequently, the processed outputs are subjected to character recognition and information extraction, with a focus on comparing the outcomes of various pre-processing methods, including binarization and grayscale conversion. This study seeks to contribute insights into the most effective pre-processing strategies for enhancing the legibility and preservation of ancient Indian script images etched onto diverse stone background structures.
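As a rough illustration of the pre-processing steps enumerated above (resizing, grayscale conversion, brightness/contrast enhancement, smoothing, noise removal, morphological operations, thresholding), the following Python/OpenCV sketch chains generic versions of them. All parameter values and file names are placeholders, not those used in the study.

```python
# Illustrative pre-processing chain covering the steps named in the abstract;
# parameter values are placeholders, not the ones used in the study.
import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    img = cv2.imread(path)
    img = cv2.resize(img, (1024, 768))                      # resizing
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)            # grayscale conversion
    gray = cv2.convertScaleAbs(gray, alpha=1.3, beta=20)    # brightness/contrast
    gray = cv2.GaussianBlur(gray, (5, 5), 0)                # smoothing
    gray = cv2.fastNlMeansDenoising(gray, h=10)             # noise removal
    kernel = np.ones((3, 3), np.uint8)
    gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)  # morphological op
    binary = cv2.adaptiveThreshold(gray, 255,               # thresholding
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 10)
    return binary

if __name__ == "__main__":
    cv2.imwrite("binarized.png", preprocess("inscription.jpg"))  # hypothetical file
```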
Article
Handwritten character recognition has become a focus of research since the emergence of deep learning technologies. The objective of this research is to develop and establish an intelligent model that automates an existing manual system for digitising ancient Tamil script, in order to save time and preserve antique data. In contrast to Western languages, Tamil script is an old, widely used script in India that lacks institutional digitising methods. In this paper, a fuzzy C-means algorithm is employed to segment and extract the features used for classification. This research uses the most appropriate strategies to improve recognition rates and configures a convolutional neural network for effective recognition of ancient inscription text. The method uses a dataset of estampage images of ancient copper-plate inscriptions containing 65 classes of ancient Tamil characters, with 50 different images per class. In terms of accuracy and training duration, this method has been shown to be more effective for ancient character recognition.
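To make the segmentation step concrete, here is a compact, self-contained fuzzy C-means sketch over pixel intensities. The cluster count, fuzzifier m, iteration limits, and the stand-in input data are illustrative assumptions and do not reproduce the paper's configuration or its CNN classifier.

```python
# A compact fuzzy C-means (FCM) sketch over pixel intensities, illustrating the
# segmentation step described above; settings are illustrative only.
import numpy as np

def fuzzy_c_means(x: np.ndarray, c: int = 2, m: float = 2.0,
                  iters: int = 100, eps: float = 1e-5, seed: int = 0):
    """x: 1-D array of samples (e.g. flattened pixel intensities)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centres = (um @ x) / um.sum(axis=1)  # membership-weighted cluster centres
        d = np.abs(x[None, :] - centres[:, None]) + 1e-12
        new_u = 1.0 / (d ** (2 / (m - 1)))   # standard FCM membership update
        new_u /= new_u.sum(axis=0)
        if np.abs(new_u - u).max() < eps:
            u = new_u
            break
        u = new_u
    return centres, u

# Example: segment a grayscale image into character vs. background clusters.
# img = cv2.imread("estampage.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
img = (np.random.rand(64, 64) * 255).astype(np.float32)     # stand-in data
centres, u = fuzzy_c_means(img.ravel())
labels = u.argmax(axis=0).reshape(img.shape)                 # hard labels per pixel
```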
Article
Binarization of Tamizhi (Tamil-Brahmi) inscription images is highly challenging because the images are captured from very old stone inscriptions dating to around the 3rd century BCE in India. The difficulty is due to the degradation of these inscriptions by environmental factors and human negligence over the ages. Although many works have been carried out on binarization in general, very little research has been performed on inscription images, and no work has been reported on the binarization of inscriptions inscribed on an irregular medium. The findings of the analysis hold true for all writings carved on an irregular background. This paper reviews the performance of various binarization techniques on Tamizhi inscription images. Since no previous work exists, we have applied the existing binarization algorithms to Tamizhi inscription images and analyzed their performance with proper reasoning. We believe that this reasoning on the results will help a new researcher to adapt, combine, or devise new binarization techniques.
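For readers new to the area, the short Python/scikit-image sketch below applies three classical binarization methods (Otsu, Niblack, Sauvola) to a single image so their outputs can be compared side by side. Window sizes, k values, and the input file name are illustrative, and the review's own evaluation protocol is not reproduced.

```python
# Apply three classical binarization methods to one inscription image so the
# resulting masks can be inspected side by side; parameters are defaults only.
import numpy as np
from skimage import io
from skimage.filters import threshold_otsu, threshold_niblack, threshold_sauvola

gray = io.imread("tamizhi_inscription.png", as_gray=True)  # hypothetical input

results = {
    "otsu":    gray > threshold_otsu(gray),
    "niblack": gray > threshold_niblack(gray, window_size=25, k=0.8),
    "sauvola": gray > threshold_sauvola(gray, window_size=25),
}

for name, binary in results.items():
    io.imsave(f"{name}.png", (binary * 255).astype(np.uint8))
```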
Article
The Tamizhi inscriptions, among the earliest ever discovered, are predominantly found on memorial stones and in caves and date from the 5th century BCE to the 3rd century CE. Because the Tamizhi script evolved into the modern Tamil script over time, today's generations need ways to interpret the script in order to learn about historical figures and events. Currently, only a few epigraphists are available to manually decode the inscriptions into modern Tamil. Hence, there is a need for an alternative way to preserve this cultural heritage. Image processing is one such digital technology that enables binarization of inscription images, and the retrieved text may then be converted to the required target language. Nevertheless, binarization of Tamizhi inscription images is highly complex due to aging, environmental factors, handwriting variation, similar foreground and background, and the uneven size and shape of the stones. Also, owing to the small dataset, deep learning techniques are inapplicable. Furthermore, existing approaches produce poor results for Tamizhi inscriptions since they can only be used on flat stone backgrounds and require sufficient illumination for effective binarization. This research proposes a multi-level improvised binarization technique (MLIBT) for Tamizhi inscription images to address these challenges. It achieves this using post-processing with shrink and swell filters together with an improved median filter and modified adaptive thresholding. Outperforming the current binarization techniques, which achieved a maximum accuracy of about 74%, MLIBT produced an accuracy of around 92.19%.
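A rough approximation of the pipeline described above is sketched below: median filtering, adaptive thresholding, and a shrink/swell-style clean-up approximated here with morphological opening and closing. The paper's improved median filter, modified adaptive thresholding, and actual shrink and swell filters are not reproduced, and all parameters and file names are assumptions.

```python
# Rough approximation of the described pipeline: median filtering, adaptive
# thresholding, then a shrink/swell-style clean-up approximated with
# morphological opening and closing (not the paper's improved filters).
import cv2
import numpy as np

gray = cv2.imread("tamizhi_inscription.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

smoothed = cv2.medianBlur(gray, 5)
binary = cv2.adaptiveThreshold(smoothed, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 35, 12)

kernel = np.ones((3, 3), np.uint8)
shrunk = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # "shrink"-like step
cleaned = cv2.morphologyEx(shrunk, cv2.MORPH_CLOSE, kernel)  # "swell"-like step

cv2.imwrite("mlibt_like_output.png", cleaned)
```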