Fig. 7: UML component model of an application module

Source publication
Article
Full-text available
The paradigm of model-based software development has become increasingly popular since it promises an increase in the efficiency and quality of software development. Following this paradigm, models become primary artifacts in the software development process. Therefore, software quality and quality assurance frequently lead back to the quality an...

Contexts in source publication

Context 1
... above, RefactoringWizard is a class of the LTK API. Figure 7 shows the architecture of an application module. It uses the Java code of the custom QA plugins generated by the corresponding specification module (compare right-hand side of Fig. 6 and left-hand side of Fig. 7) and consists of two components. ...
Context 2
... selection)) and the process is started by method RefactoringWizard show(). As above, RefactoringWizard is a class of the LTK API. Figure 7 shows the architecture of an application module. It uses the Java code of the custom QA plugins generated by the corresponding specification module (compare right-hand side of Fig. 6 and left-hand side of Fig. 7) and consists of two components. The configuration component maintains project-specific configurations of metrics, smells, and refactorings. The runtime component is responsible for metrics calculation, smell detection, and refactoring execution. Depending on the concrete specification approach, the runtime component uses the ...
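For orientation, the following is a minimal sketch of how such an LTK-based refactoring is typically launched. RefactoringWizard and RefactoringWizardOpenOperation are part of the Eclipse LTK API; the class names EmptyClassRemovalWizard and the concrete Refactoring instance are hypothetical placeholders, not taken from the paper.

// Illustrative sketch: launching a custom model refactoring via the Eclipse LTK API.
// EmptyClassRemovalWizard is an illustrative name only.
import org.eclipse.ltk.core.refactoring.Refactoring;
import org.eclipse.ltk.ui.refactoring.RefactoringWizard;
import org.eclipse.ltk.ui.refactoring.RefactoringWizardOpenOperation;
import org.eclipse.swt.widgets.Shell;

public class EmptyClassRemovalWizard extends RefactoringWizard {

    public EmptyClassRemovalWizard(Refactoring refactoring) {
        // A simple, dialog-based user interface is sufficient for most model refactorings.
        super(refactoring, DIALOG_BASED_USER_INTERFACE | PREVIEW_EXPAND_FIRST_NODE);
        setDefaultPageTitle("Remove Empty Class");
    }

    @Override
    protected void addUserInputPages() {
        // Wizard pages collecting refactoring parameters (e.g., the class to remove) would be added here.
    }

    // Opens the wizard; LTK then drives precondition checks, preview, and execution.
    public static void start(Refactoring refactoring, Shell shell) throws InterruptedException {
        RefactoringWizardOpenOperation operation =
                new RefactoringWizardOpenOperation(new EmptyClassRemovalWizard(refactoring));
        operation.run(shell, "Remove Empty Class");
    }
}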
Context 3
... the final check has passed, a preview of model changes to be performed by the refactoring is provided using EMF Compare (EMF Compare 2012). Figure 17 shows the resulting EMF Compare dialog using a tree-based model view. The left-hand side shows the original example model (see Fig. 3) whereas the right-hand side presents the refactored model. ...
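As a rough illustration (not taken from the paper), the sketch below shows how such a comparison between the original and the refactored model could be computed programmatically with the EMF Compare 2.x API; resource loading and the integration into the preview dialog are omitted, and the class name RefactoringPreview is hypothetical.

// Illustrative sketch: comparing an original and a refactored model with EMF Compare 2.x.
import java.util.List;
import org.eclipse.emf.compare.Comparison;
import org.eclipse.emf.compare.Diff;
import org.eclipse.emf.compare.EMFCompare;
import org.eclipse.emf.compare.scope.DefaultComparisonScope;
import org.eclipse.emf.compare.scope.IComparisonScope;
import org.eclipse.emf.ecore.resource.Resource;

public class RefactoringPreview {

    // Computes the differences that a refactoring would introduce.
    public static List<Diff> computeChanges(Resource originalModel, Resource refactoredModel) {
        // Two-way comparison: no common ancestor is used, hence the third argument is null.
        IComparisonScope scope = new DefaultComparisonScope(originalModel, refactoredModel, null);
        Comparison comparison = EMFCompare.builder().build().compare(scope);
        return comparison.getDifferences();
    }
}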

Similar publications

Article
Full-text available
Refactoring, a widely adopted technique, has proven effective in facilitating maintenance activities and reducing their costs. Nonetheless, the effects of applying refactoring techniques on software quality exhibit inconsistencies and contradictions, leading to conflicting evidence on their overall benefit. Consequently, software developers face chal...
Article
Full-text available
In an intelligent smart city like Sejong city in Korea, automatic and smart software is absolutely necessary for autonomous traffic and vehicle control systems. These systems therefore need to perform accurately and in a timely manner; otherwise, safety issues may arise. To resolve this, we propose our code visualization approach to adapt an object...
Conference Paper
Full-text available
Current-day programming languages include constructs to embed meta-data in a program's source code in the form of annotations. More than mere documentation, these annotations are used in modern frameworks to map source-level entities to domain-specific ones. A common example is the Hibernate Object-Relational Mapping framework that relies on ann...

Citations

... Currently, these data clumps cannot be refactored automatically. Data clumps are a manifestation of code smells in source code [7] and design smells in, for example, class diagrams [8]. Recognizing the significance of addressing data clumps in improving software quality, the next logical step is to explore automated solutions to refactor these clumps, leveraging advancements in AI [9]. ...
Article
Full-text available
Data clumps, groups of variables that repeatedly appear together across different parts of a software system, are indicative of poor code structure and can lead to potential issues such as maintenance challenges, testing complexity, and scalability concerns, among others. Addressing this, our study introduces an innovative AI-driven pipeline specifically designed for the refactoring of data clumps in software repositories. This pipeline leverages the capabilities of Large Language Models (LLM), such as ChatGPT, to automate the detection and resolution of data clumps, thereby enhancing code quality and maintainability. In developing this pipeline, we have taken into consideration the new European Union (EU) Artificial Intelligence (AI) Act, ensuring that our pipeline complies with the latest regulatory requirements and ethical standards for the use of AI in software development by outsourcing decisions to a human in the loop. Preliminary experiments utilizing ChatGPT were conducted to validate the effectiveness and efficiency of our approach. These tests demonstrate promising results in identifying and refactoring data clumps, but also reveal the challenges of using LLMs.
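To make the data clump smell concrete, here is a small, hypothetical Java example of such a clump and its resolution via the well-known Introduce Parameter Object refactoring; the class and method names are illustrative only and do not come from the cited work.

// Before: the parameters street, city, and zipCode travel together through several
// signatures -- a typical data clump.
class InvoiceService {
    void printShippingLabel(String street, String city, String zipCode) { /* ... */ }
    double estimateShippingCost(String street, String city, String zipCode) { return 0.0; }
}

// After: the clump is extracted into a dedicated value object (Introduce Parameter Object).
record Address(String street, String city, String zipCode) { }

class RefactoredInvoiceService {
    void printShippingLabel(Address address) { /* ... */ }
    double estimateShippingCost(Address address) { return 0.0; }
}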
... Mansoor et al. in two recent works used a popular evolutionary algorithm, a genetic algorithm, to identify an optimal set of refactorings to improve software design [28]. Arendt et al. presented a tool environment for ensuring model quality based on EMF [7]. ...
Article
Full-text available
Evolution is one of the most important parts of the software development process, and one of its negative consequences is design erosion. Refactoring is a technique that aims to prevent this issue; it is therefore an important software development practice that improves software quality without changing its external behavior. Refactoring at the model level is analogous to refactoring at the code level and offers similar advantages; the main difference is that, because models are produced in the initial steps of the software development process, model refactoring has a greater impact on cost reduction and efficiency improvement. Timely and consistent use of this practice in a software project has extremely positive long-term effects, especially when it is supported by appropriate tools. Refactoring then becomes a rapid, easy, and safe way to improve software system quality. The main idea of this study is the automatic checking of consistency in model refactoring in order to preserve model behavior using the Alloy modeling language. By employing structural and behavioral patterns as reusable and well-defined components, together with consistency rules, this objective can be achieved.
... The EMF Refactor tool was introduced in [3,4]. It supports metrics reporting as well as smell detection and resolution for models based on the Eclipse Modeling Framework (EMF) [28]. ...
... To perform this in a single edit step, one can create an edit operation that executes the entire change, including all class and sequence diagram changes. Some tasks can even be completely automated and reduced to the definition of edit operations: edit operations are used for model repair, quick-fix generation, auto completion (Ohrndorf et al, 2018; Hegedüs et al, 2011; Kögel et al, 2016), model editors (Taentzer et al, 2007; Ehrig et al, 2005), operation-based merging (Kögel et al, 2009; Schmidt et al, 2009), model refactoring (Mokaddem et al, 2018; Arendt and Taentzer, 2013), model optimization (Burdusel et al, 2018), meta-model evolution and model co-evolution (Rose et al, 2014; Arendt et al, 2010; Herrmannsdoerfer et al, 2010; Getir et al, 2018; Kolovos et al, 2010), semantic lifting of model differences (Kehrer et al, 2011, 2012a; ben Fadhel et al, 2012; Langer et al, 2013; Khelladi et al, 2016), model generation (Pietsch et al, 2011), and many more. ...
... More generally, our approach, Ockham, is based on the assumption that it should be possible to derive "meaningful" patterns from the repositories. These patterns could then be used for many applications (Ohrndorf et al, 2018; Kögel et al, 2016; Taentzer et al, 2007; Arendt and Taentzer, 2013; Getir et al, 2018; Kehrer et al, 2012a; ben Fadhel et al, 2012; Langer et al, 2013; Khelladi et al, 2016). ...
Preprint
Full-text available
Model transformations play a fundamental role in model-driven software development. They can be used to solve or support central tasks, such as creating models, handling model co-evolution, and model merging. In the past, various (semi-)automatic approaches have been proposed to derive model transformations from meta-models or from examples. These approaches require time-consuming handcrafting or the recording of concrete examples, or they are unable to derive complex transformations. We propose a novel unsupervised approach, called Ockham, which is able to learn edit operations from model histories in model repositories. Ockham is based on the idea that meaningful domain-specific edit operations are the ones that compress the model differences. It employs frequent sub-graph mining to discover frequent structures in model difference graphs. We evaluate our approach in two controlled experiments and one real-world case study of a large-scale industrial model-driven architecture project in the railway domain. We found that our approach is able to discover frequent edit operations that have actually been applied before. Furthermore, Ockham is able to extract edit operations that are meaningful to practitioners in an industrial setting. We also discuss some of the use cases for the discovered edit operations in this industrial setting.
... The view can be filtered, e.g., by selecting only a class so that just the metamodels connected to it are shown. The contents of the view can be easily navigated, rotated, and zoomed. The context menu automatically opens the generated graph model and the Picto view. ...
... In [2] a tool called EMFRefactor is presented with the intent of specifying and applying refactorings on models. This tool uses Henshin's model transformation engine for executing refactorings. ...
Article
Full-text available
Metamodels play a crucial role in any model-based application. They underpin the definition of models and tools, and the development of model management operations, including model transformations and analysis. Like any software artifacts, metamodels are subject to evolution to improve their quality or implement unforeseen requirements. Metamodels can be defined in terms of existing ones to increase the separation of concerns and foster reuse. However, the induced coupling can give additional evolution complexity, and dedicated support is needed to avoid breaking metamodels defined in terms of those being changed. This paper presents a tool-supported approach that can automatically analyze the available metamodels and alert modelers in case of change operations that can give place to invalid situations like dangling references. The approach has been implemented in the Edelta development environment and successfully applied to metamodels retrieved from a publicly available Ecore models dataset.
... The detection and resolution of meta-model smells is presented in [39], an approach that is based on quality assurance for models in general [40]. Furthermore, there is an approach to meta-model testing via unit test suites and domain-specific expected properties with metaBest as well as an example-based construction of meta-models with metaBup [9], [41], [42]. ...
... Furthermore, experts shall be supported in categorising detected problems and improving the data model (e.g. similar to class model smells and refactorings [40]) as well as adapting the data to the changes. Ultimately, the whole workflow and the GUI will be evaluated empirically. ...
Preprint
Data is of high quality if it is fit for its intended use. The quality of data is influenced by the underlying data model and its quality. One major quality problem is the heterogeneity of data, as quality aspects such as understandability and interoperability are impaired. This heterogeneity may be caused by quality problems in the data model. Data heterogeneity can occur in particular when the given information is not sufficiently structured and is only captured in data values, often due to missing or unsuitable structure in the underlying data model. We propose a bottom-up approach to detecting quality problems in data models that manifest in heterogeneous data values. It supports an explorative analysis of the existing data and can be configured by domain experts according to their domain knowledge. All values of a selected data field are clustered by syntactic similarity. This provides an overview of the syntactic diversity of the data values. It shall help domain experts to understand how the data model is used in practice and to derive potential quality problems of the data model. We outline a proof-of-concept implementation and evaluate our approach using cultural heritage data.
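As a loose illustration of the general idea only (not the concrete clustering technique used in the paper), values of a data field could, for instance, be grouped by a coarse syntactic pattern signature; the class and method names below are hypothetical.

// Illustrative sketch: grouping data values by a coarse syntactic pattern,
// e.g. "1984-05-12" and "2001-11-03" both map to the pattern "9999-99-99".
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SyntacticGrouping {

    // Replaces digits with '9' and letters with 'a' to obtain a syntax pattern.
    static String pattern(String value) {
        return value.replaceAll("\\d", "9").replaceAll("\\p{L}", "a");
    }

    // Groups all values of a data field by their syntax pattern.
    static Map<String, List<String>> groupBySyntax(List<String> values) {
        return values.stream().collect(Collectors.groupingBy(SyntacticGrouping::pattern));
    }
}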
... The framework provides the modeling language Ecore to model classes, for which Java source code can be generated. The works [15] and [16] have shown how to apply model refactoring and also describe a tool set capable of performing these operations with graph transformation algorithms. UML also needs to be inspected, as this is the most researched topic in Model-Driven Software Refactoring [17]. ...
... Some tasks can even be completely automated and reduced to the definition of edit operations. Edit operations are used for model repair, quick-fix generation, auto completion [24,39,48], model editors [19,62], operation-based merging [38], model refactoring [4,17], model optimization [12], meta-model evolution and model co-evolution [3,25,53], artifact co-evolution in general [21,41], semantic lifting of model differences [8,33,34,37,42], model generation [50], and many more. ...
... These patterns could then be used for many applications [4,8,21,33,37,39,42,48,62]. In our case study, the models have become huge over time (approx. ...
Preprint
Full-text available
Model transformations play a fundamental role in model-driven software development. They can be used to solve or support central tasks, such as creating models, handling model co-evolution, and model merging. In the past, various (semi-)automatic approaches have been proposed to derive model transformations from meta-models or from examples. These approaches require time-consuming handcrafting or recording of concrete examples, or they are unable to derive complex transformations. We propose a novel unsupervised approach, called Ockham, which is able to learn edit operations from model histories in model repositories. Ockham is based on the idea that meaningful edit operations will be the ones that compress the model differences. We evaluate our approach in two controlled experiments and one real-world case study of a large-scale industrial model-driven architecture project in the railway domain. We find that our approach is able to discover frequent edit operations that have actually been applied. Furthermore, Ockham is able to extract edit operations in an industrial setting that are meaningful to practitioners.
... where NUR is the number of unidirectional references, measured as the difference between the total number of references and the number of bidirectional references, and UND is the understandability value measured as defined in Equation 3. The reusability of a given model can be measured in different ways. One of these is to use the attribute inheritance factor AIF as proposed in (Arendt & Taentzer 2013). As presented in (Al-Jáafer & Sabri 2007), AIF can be defined as follows: ...
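The citation snippet breaks off before the definition. For background only, the commonly cited MOOD-style formulation of the attribute inheritance factor (which may differ in detail from the one given in Al-Jáafer & Sabri 2007) is:

% Commonly cited MOOD definition of the Attribute Inheritance Factor (AIF);
% given as background only, not verbatim from the cited source.
AIF = \frac{\sum_{i=1}^{TC} A_i(C_i)}{\sum_{i=1}^{TC} A_a(C_i)},
\qquad A_a(C_i) = A_d(C_i) + A_i(C_i)
% where TC is the total number of classes, A_i(C_i) the number of inherited attributes
% of class C_i, A_d(C_i) the number of attributes declared in C_i, and A_a(C_i) the
% number of attributes available in C_i.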
... According to (Arendt & Taentzer 2013), a model quality assurance framework should implement three important iterative phases: i) model analysis, ii) identification of smells, and iii) removal of the smells. In order to confirm that removing the smells has a positive effect not only formally but also practically, quality evaluation is crucial and can be considered a litmus test of the refactoring activity. ...
... To overcome these limitations and complement automatic approaches such as (Bettini et al. 2019; Arendt & Taentzer 2013), we propose a new application of PARMOREL to find a balance between smell refactoring and model quality. ...
... For the highly relevant area of research software and data services in the field of research on material and immaterial cultural assets there currently exists no dedicated infrastructure that would allow knowledge exchange and coordination for the specific requirements of the NFDI4Culture community. There are many institutions and DH-centers with expertise (like UPB/ZenMEM, DCH, UZK/prom, UMR/MCDCI, TIB, FIZ, mainzed) that offer support for sustainable development and operation of research tools, but there is no consulting agency that covers the topics of the NFDI4Culture consortium in relation to the development, consolidation, operation and certification of sustainable, interoperable research tools and data services on the basis of the FAIR principles (Manovich 2011, Brett and Croucher 2017, Arendt and Taentzer 2013, Arendt et al. 2011, Röwenstrunk 2018). Within the DHd association, there is a working group Research Software Engineering (co-founded by the designated Speaker of NFDI4Culture), closely connected to the international RSE community, which pursues the goal of sustainable software development that can serve as a model (Czmiel et al. 2018, Schrade 2017). ...
Article
Full-text available
Digital data on tangible and intangible cultural assets is an essential part of daily life, communication and experience. It has a lasting influence on the perception of cultural identity as well as on the interactions between research, the cultural economy and society. Throughout the last three decades, many cultural heritage institutions have contributed a wealth of digital representations of cultural assets (2D digital reproductions of paintings, sheet music, 3D digital models of sculptures, monuments, rooms, buildings), audio-visual data (music, film, stage performances), and procedural research data such as encoding and annotation formats. The long-term preservation and FAIR availability of research data from the cultural heritage domain is fundamentally important, not only for future academic success in the humanities but also for the cultural identity of individuals and society as a whole. Up to now, no coordinated effort for professional research data management on a national level exists in Germany. NFDI4Culture aims to fill this gap and create a user-centered, research-driven infrastructure that will cover a broad range of research domains from musicology, art history and architecture to performance, theatre, film, and media studies. The research landscape addressed by the consortium is characterized by strong institutional differentiation. Research units in the consortium's community of interest comprise university institutes, art colleges, academies, galleries, libraries, archives and museums. This diverse landscape is also characterized by an abundance of research objects, methodologies and a great potential for data-driven research. In a unique effort carried out by the applicant and co-applicants of this proposal and ten academic societies, this community is interconnected for the first time through a federated approach that is ideally suited to the needs of the participating researchers. To promote collaboration within the NFDI, to share knowledge and technology and to provide extensive support for its users have been the guiding principles of the consortium from the beginning and will be at the heart of all workflows and decision-making processes. Thanks to these principles, NFDI4Culture has gathered strong support ranging from individual researchers to high-level cultural heritage organizations such as the UNESCO, the International Council of Museums, the Open Knowledge Foundation and Wikimedia. On this basis, NFDI4Culture will take innovative measures that promote a cultural change towards a more reflective and sustainable handling of research data and at the same time boost qualification and professionalization in data-driven research in the domain of cultural heritage. This will create a long-lasting impact on science, cultural economy and society as a whole.