Understanding Art with AI: Our Research Experience
Giovanna Castellano, Gennaro Vessio
Department of Computer Science, University of Bari Aldo Moro, Italy
Abstract
Articial Intelligence solutions are empowering many elds of knowledge, including art. Indeed, the
growing availability of large collections of digitized artworks, coupled with recent advances in Pattern
Recognition and Computer Vision, oer new opportunities for researchers in these elds to help the
art community with automatic and intelligent support tools. In this discussion paper, we outline
some research directions that we are exploring to contribute to the challenge of understanding art
with AI. Specically, our current research is primarily concerned with visual link retrieval, artwork
clustering, integrating new features based on contextual information encoded in a knowledge graph, and
implementing these methods on social robots to provide new engaging user interfaces. The application
of Information Technology to ne arts has countless applications, the most important of which concerns
the preservation and fruition of our cultural heritage, which has been severely penalized, along with
other sectors, by the ongoing COVID pandemic. On the other hand, the artistic domain poses entirely
new challenges to the traditional ones, which, if addressed, can push the limits of current methods to
achieve better semantic scene understanding.
Keywords
Digital Humanities, Visual arts, Artificial Intelligence, Deep Learning
1. Introduction
Articial Intelligence is revolutionizing numerous elds of knowledge and has established
itself as a key enabling technology. Among the various domains that have been powered by
AI-based solutions there is also the artistic one. In fact, in recent years, a large-scale digitization
eort has been made, which has led to the increasing availability of huge digitized artwork
collections. And this availability, combined with the recent advances in Pattern Recognition
and Computer Vision, has opened up new opportunities for researchers in these elds to assist
domain experts, particularly art historians, in the study and analysis of visual arts. Among other
benets, a deeper understanding of visual arts can favor their use by an ever wider audience,
thus promoting the spread of culture. Visual arts, and more generally our cultural heritage, play
a role of primary importance for the economic and cultural growth of our society [1,2].
The ability to recognize characteristics, similarities and, more generally, patterns within and
between digitized artworks, in order to favor a deeper study, inherently falls within the domain
of human aesthetic perception [
3
]. Since this perception is highly subjective, and inuenced
by various factors, not least the emotion the artwork evokes in the observer, it is extremely
dicult to conceptualize. However, representation learning techniques, such as those on which
AIxIA 2021 Discussion Papers
giovanna.castellano@uniba.it (G. Castellano); gennaro.vessio@uniba.it (G. Vessio)
ORCID: 0000-0002-6489-8628 (G. Castellano); 0000-0002-0883-2691 (G. Vessio)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org
2. Visual Link Retrieval
One of the building blocks of most analyses in the visual arts is finding similarity relationships
among paintings by different artists and painting schools. To help art historians in this task, we
have developed a framework for visual link retrieval and knowledge discovery in digital painting
datasets. Visual link retrieval is accomplished by using a deep convolutional neural network to
extract features from the digitized paintings and a fully unsupervised nearest-neighbor mechanism
to retrieve links among them, i.e., paintings that are visually similar to a given query (Figure 1
shows some examples). Historical knowledge discovery is then achieved by performing a graph
analysis on the retrieved links, which makes it possible to study influences among artists. An
experimental evaluation on a database collecting paintings by very popular artists showed the
effectiveness of the method. Notably, the fully unsupervised strategy makes the approach
attractive especially in cases where metadata are scarce, unavailable, or difficult to collect.
Figure 1: Query examples and corresponding visually linked paintings.
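As a minimal sketch of the retrieval step described above (not the authors' actual implementation), assume each painting has already been mapped to a feature vector by a pre-trained convolutional network; visual links can then be retrieved with a cosine-similarity nearest-neighbor search. The feature values below are invented toy descriptors for illustration:

```python
import numpy as np

def retrieve_visual_links(features, query_idx, k=3):
    """Return the indices of the k paintings most visually similar to the query.

    `features` is an (n_paintings, d) array of visual descriptors; in the
    framework described above these would come from a pre-trained CNN.
    """
    # L2-normalize so that dot products equal cosine similarities
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = normed @ normed[query_idx]
    sims[query_idx] = -np.inf  # exclude the query painting itself
    return np.argsort(-sims)[:k]

# Toy 3-D "descriptors": paintings 0 and 1 are near-duplicates,
# paintings 2 and 3 form a second visually coherent pair.
feats = np.array([[1.0, 0.1, 0.0],
                  [0.9, 0.2, 0.0],
                  [0.0, 0.1, 1.0],
                  [0.1, 0.0, 0.9]])
print(retrieve_visual_links(feats, query_idx=0, k=2))  # prints [1 3]
```

Being fully unsupervised, such a mechanism needs no labels: the quality of the links depends entirely on the quality of the learned features.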
3. Artwork Clustering
A related research direction we are pursuing is the clustering of digitized artworks based solely
on their visual content. Clustering artworks is a difficult task for several reasons: it requires
going beyond low-level visual similarity to capture semantic and stylistic attributes, and, in
contrast to supervised classification, it cannot rely on labels, which in this domain are often
scarce, unavailable, or expensive to collect. On the other hand, an effective clustering method
could be used to discover, organize, and summarize large collections of artworks, to support
retrieval, and to suggest hypotheses to art experts. To this end, we again rely on deep
representation learning: a deep convolutional model maps the digitized paintings into a
lower-dimensional latent space, and clustering is performed on the learned embeddings rather
than on raw pixels.
Figure 2: Sample images from the clusters found among Picasso’s artworks.
The clusters found by the method tend to group stylistically coherent works, as in the example
of Picasso's artworks shown in Figure 2. As with visual link retrieval, the completely
unsupervised setting makes the approach applicable even when metadata are lacking.
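As an illustrative sketch (not the authors' code), once each artwork has been mapped to a latent embedding, the final grouping step can be as simple as k-means in the embedding space. The embeddings below are invented toy values:

```python
import numpy as np

def kmeans(embeddings, k, n_iter=20, seed=0):
    """Plain k-means on artwork embeddings; returns a cluster label per artwork."""
    rng = np.random.default_rng(seed)
    # Initialize centroids on k distinct data points
    centroids = embeddings[rng.choice(len(embeddings), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each embedding to its nearest centroid
        dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = embeddings[labels == j].mean(axis=0)
    return labels

# Toy "latent space": two well-separated groups of artworks
emb = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = kmeans(emb, k=2)
```

In practice the quality of the clusters hinges on the embedding, not on the clustering algorithm; deep models that learn the embedding and the cluster assignments jointly are a natural refinement of this sketch.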
4. Computer Vision & Knowledge Graphs
Most current work on automatic art analysis is based solely on digitized artwork images,
sometimes supplemented with some metadata and textual comments. However, the visual
appearance of an artwork tells only part of the story: relationships between artistic attributes,
such as those linking artworks, artists, painting schools, and historical periods, usually appear
in a very subtle way and are extremely difficult to detect with standard convolutional neural
networks alone. A knowledge graph that integrates a rich body of information about artworks,
artists, painting schools, etc., in a unified structured framework can provide a valuable resource
for more powerful information retrieval and knowledge discovery tools in the artistic domain.
To this end, we are developing ArtGraph: an artistic knowledge graph based on WikiArt and
DBpedia. The graph, implemented in Neo4j, already provides knowledge discovery capabilities
without having to train a learning system. In addition, the embeddings extracted from the
graph can be used to inject "contextual" knowledge into a deep learning model, improving the
accuracy of artwork attribute prediction tasks.
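A hedged sketch of the "contextual injection" idea (all names, dimensions, and values below are illustrative placeholders, not ArtGraph's actual pipeline): visual features from a CNN backbone and node embeddings extracted from the knowledge graph are concatenated before the final prediction layer, so the classifier sees both what the artwork looks like and how it is connected to the rest of the graph:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative stand-ins: in the real pipeline, `visual` would come from a
# CNN backbone and `context` from node embeddings learned on the knowledge graph.
n_artworks, d_visual, d_context, n_classes = 4, 8, 3, 5
visual = rng.normal(size=(n_artworks, d_visual))
context = rng.normal(size=(n_artworks, d_context))

# Inject contextual knowledge by concatenating the two representations
fused = np.concatenate([visual, context], axis=1)  # shape: (n_artworks, d_visual + d_context)

# A linear prediction head over the fused representation (weights random here,
# learned in a real model)
W = rng.normal(size=(d_visual + d_context, n_classes))
logits = fused @ W
pred = logits.argmax(axis=1)  # one predicted attribute label per artwork
```

Concatenation is only the simplest fusion scheme; the point is that the graph embedding carries relational information that no purely visual feature extractor can see.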
5. Social Robotics
With the recent advances in technology, new ways to engage visitors in a museum have been
proposed, ranging from mobile apps and interactive displays to virtual and augmented reality
settings. In this context, social robots are a promising solution for engaging visitors during
museum tours, due to their ability to interact with humans in a natural and familiar way. In
preliminary work, we endowed the humanoid robot Pepper with a vision module that allows it
to perceive the visitor and the artwork he or she is looking at, as well as to estimate the
visitor's age and gender. These data are used to provide the visitor with recommendations
about artworks he or she might like to see during the visit. We tested the proposed approach
in our research lab, and preliminary experiments showed its feasibility. In a related study, we
equipped Pepper with a Convolutional Neural Network-based image recognition module to play
a serious game on waste recycling with primary school children; the encouraging results
confirm the potential of social robots as trustful and believable companions in educational and
cultural settings.
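A toy sketch of the recommendation logic (the artwork catalog and matching rule are invented for illustration; in the deployed system the observed artwork and the visitor profile come from the robot's CNN-based vision module):

```python
# Each artwork the robot can talk about, with illustrative metadata
ARTWORKS = {
    "Guernica":           {"artist": "Pablo Picasso",    "style": "Cubism"},
    "The Weeping Woman":  {"artist": "Pablo Picasso",    "style": "Cubism"},
    "Starry Night":       {"artist": "Vincent van Gogh", "style": "Post-Impressionism"},
    "Sunflowers":         {"artist": "Vincent van Gogh", "style": "Post-Impressionism"},
}

def recommend(observed_artwork, profile=None):
    """Suggest other artworks sharing the artist or style of the one being viewed.

    `observed_artwork` would come from the robot's vision module; `profile`
    (e.g., estimated age group and gender) could be used to re-rank results.
    """
    target = ARTWORKS[observed_artwork]
    return sorted(
        name for name, meta in ARTWORKS.items()
        if name != observed_artwork
        and (meta["artist"] == target["artist"] or meta["style"] == target["style"])
    )

print(recommend("Guernica"))  # prints ['The Weeping Woman']
```

However simple, a rule like this is enough to close the perception-to-interaction loop: the hard part is the robust visual recognition of visitor and artwork, not the recommendation itself.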
6. Conclusion
In this paper, we have outlined the main research directions we are currently exploring to
contribute to the challenge of understanding art with AI, a topic at the intersection of
Computer Science and the Digital Humanities. The application of Pattern Recognition and
Computer Vision to the fine arts has countless uses, the most important of which concerns
the preservation and fruition of our cultural heritage, which has been severely penalized,
along with other sectors, by the ongoing COVID pandemic. At the same time, the artistic
domain poses entirely new challenges compared to traditional application domains: artworks
convey subjective, emotional, and highly contextual content that is difficult to capture with
purely visual models. Addressing these challenges can push the limits of current methods and
lead to better semantic scene understanding.
References
[Reference entries not recoverable from the digitized source.]