API SLICER: A feature-based approach to decomposing monolithic APIs into
microservices
Carlos Xavier, Kleinner Farias
PPGCA, University of Vale do Rio dos Sinos (Unisinos), Av. Unisinos, 950, São Leopoldo, RS, Brazil
Abstract
Many approaches for decomposing monolithic applications have been proposed in recent studies. However, such studies are not sensitive to the decomposition of monolithic APIs into microservices, leaving monolithic APIs to be decomposed manually, based solely on the developer's experience. This article introduces API Slicer, an approach based on the similarity level between features, which uses execution traces to understand a monolithic API and generate microservice suggestions. API Slicer stands out for: (1) showing which functionalities should be transformed into microservices and which should remain in the monolithic API; (2) being technology agnostic, as long as the target application is object-oriented; and (3) computing similarity based on the list of packages that the user provides at the beginning of the recommendation process. The novel aspect of our article is the proposed feature-based approach to decomposing monolithic APIs into microservices. Furthermore, we propose an intelligible workflow and an architectural model for assisting software developers in the decomposition of monolithic APIs. Developers benefit from API Slicer because it allows the semi-automatic decomposition of a monolithic API. The approach was evaluated through a case study in which three target applications were tested to verify the effectiveness of the microservice recommendations. For each target application, two scenarios were created, one showing microservice recommendations and another indicating which services should remain in the monolithic API. In addition, in each scenario, the similarity level was varied from 10% to 90% to verify from what percentage the API Slicer and Monólise approaches would achieve the desired result. Monólise is an approach for decomposing monolithic applications into microservices, proposed in the literature. Precision and recall metrics were applied. The API Slicer approach had the highest level of precision in two of the three target applications used in this study, showing itself to be a viable alternative.
Keywords: API, Decomposition, Microservices, Monolithic Application, Distributed Applications, Architectural Model
1. Introduction
Enterprise software systems are embedded in dynamic and highly volatile environments. Enterprise systems are composed of APIs. An API (Application Programming Interface) enables the integration between systems. As dynamic systems change, APIs need to adapt to the new rules. With each readjustment, APIs need to add more rules and functionality, causing them to grow in size and complexity. In the long run, this increase in size and complexity turns an API into a monolithic API.
A monolithic API is an API in which features are intertwined and spread across the modules that implement its endpoints. Figure 1 presents an example of a monolithic API. The login functionality (red circle) and the comment functionality (green polygon) are intertwined in the JWtTokenProvider, CustomerDetailService, and UserRepository classes. This results in: (1) tight coupling between classes; (2) maintenance issues for developers, since if the login functionality is changed, the comment functionality must also change; and (3) possible insertion of bugs with each new change.
Decomposition is the act of breaking an application into smaller parts. In most cases, this process is done manually, using only the experience of the software architect [1]. A monolithic application is usually characterized as a single executable software artifact, made up of tightly coupled modules and requirements implemented in an intertwined way and distributed among the application modules [2]. This implies (1) maintenance problems as the monolithic application gets bigger, (2) difficulty in introducing new technologies, and (3) greater complexity when introducing new functionality, as there is a great risk of undesired effects on existing functionality [3].
The current literature offers several approaches for decomposing monolithic applications into microservices. Among these approaches, decomposition is performed using, among other factors: coupling and cohesion [4, 5]; the database structure along with the application code [6]; static analysis of the code, considering classes, methods, and change history [7]; application logs [1]; and execution traces [8]. However, such studies are not sensitive to the decomposition of monolithic APIs into microservices. This results in the manual decomposition of monolithic APIs, based solely on the
experience of the developers, without a theoretical foundation for following best practices, and with the risk of keeping the very maintenance problems that microservices set out to solve.
Therefore, this article proposes API Slicer, a feature-based approach to decomposing monolithic APIs into microservices. The API Slicer approach is divided into 7 steps: (1) reading the execution traces (files containing the methods and classes called when a functionality was executed); (2) reading the packages that must be considered in the decomposition; (3) reading the similarity level (a value provided by the user, used as a cut-off value: if two functionalities reach or exceed this cut-off value, both will stay in the same microservice; otherwise, they will go to individual microservices); (4) processing the files to identify the features; (5) verifying the similarity between features; (6) performing the necessary groupings; and (7) generating the microservice recommendations. Developers will benefit from API Slicer, as the approach allows the semi-automatic decomposition of a monolithic API and its decomposition was designed based on an empirical study. The approach was evaluated through a case study in which three target applications were tested to verify the effectiveness of the microservice recommendations. For each target application, two scenarios were created, one showing microservice recommendations and another indicating which services should remain in the monolithic API. Furthermore, in each scenario, the similarity level ranged from 10% to 90% to verify from what percentage the API Slicer and Monólise approaches would achieve the ideal result. The metrics used to compare the generated result with the ideal one were precision and recall. The results indicated that the API Slicer approach had the highest level of precision in two of the three target applications used in this study, proving to be a viable alternative.
The remainder of this study is organized as follows. Section 2 contains the theoretical framework, with the main concepts for understanding the proposed study. Section 3 addresses related work, exploring the selection process used and comparing the selected studies with the present one. Section 4 presents the proposed approach. Section 5 describes the protocol followed to evaluate the proposed approach. Section 6 draws some additional discussions. Section 7 presents conclusions and future work.
2. Background
This section covers the theoretical concepts used during the
construction and development of the study.
2.1. Microservice
Microservices are services that expose a contract so that other services can consume it, without a strong dependence between them, since neither knows the implementation of the other. A microservice is premised on being small (micro), so that a small team can manage its code, and its growth is limited by the business functionality it proposes to serve, so that it does not grow indefinitely to the point of suffering the problems of a monolithic application, that is, being large, complex, and risky to change [9].

Figure 1: Example of a monolithic API
The main characteristics of microservices are: (1) technological heterogeneity, that is, each microservice can be built with the technology stack that makes the most business sense; (2) autonomy, meaning the microservice can be deployed and redeployed without relying on other services, which allows features to be added and technologies to be changed more easily, as long as the contract used by consumers is not changed; and (3) lightweight communication, in which services communicate via REST, using HTTP or an asynchronous messaging service, avoiding tight coupling between them [10].
2.2. Monolithic system
According to [11], a monolithic system is an application where the user interface, business logic, and database persistence are located in a single unit. This unit is self-contained and covers not just one, but all the steps needed to complete a macro functionality. For [2], monolithic applications are usually characterized as a single executable software artifact, consisting of highly coupled modules and requirements implemented in an intertwined and distributed manner among the application modules.
The main advantages of a monolithic system are: (1) a full understanding of the macro flow, as the application covers all stages of a functionality; (2) good performance, as it does not need to make multiple calls to external services; and (3) independence from other services, as all the features that the application needs are inside it [12]. As disadvantages, the following can be highlighted: (1) maintenance problems as the monolith increases in size, (2) difficulty in inserting new technologies, and (3) greater complexity when introducing a new feature, as there may be undesired effects on existing features [3].
2.3. Feature-oriented development
Feature-oriented development is a paradigm for building,
customizing, and synthesizing software systems. The basic
idea of feature-oriented development is to decompose software
systems into features, to provide configuration options, and to
facilitate the generation of software based on a selection of
features [13].
From an abstract point of view, a feature is a prominent or distinctive user-visible aspect, quality, or characteristic of a software system. From a technical point of view, a feature is a structure that extends and modifies the structure of a given program in order to satisfy a stakeholder requirement, to implement and encapsulate a design decision, and to offer a configuration option [14]. This study looks at features from the technical point of view, that is, as functionality, since a feature is implemented to satisfy business requirements. The reasons why the feature idea was used to decompose monolithic APIs into microservices were: (1) it captures the complete behavior of the software with respect to each functionality it has; (2) a feature can be reused in other systems; and (3) it eases maintenance, as the developer needs to be concerned with a single feature and not with an entire system.
3. Related Work
This section discusses the related work, presents a comparative analysis of it, and points out research opportunities. Related works were selected from two digital repositories: Google Scholar and IEEE. These repositories were used because they have vast content on software engineering. Previous studies [15, 16] demonstrated the usefulness of such repositories for this section. In the case of the IEEE repository, a search string was applied, filtering for the period from 2019 to 2021, as can be seen below:
("All Metadata": monolithic) AND ("All Metadata": decomposition) AND ("All Metadata": microsservice)
A search string is a set of terms used to make a more detailed, restricted, and directed search. From this search, 16 works were identified, but only 5 were selected, for two reasons: (1) similarity of purpose and (2) the year the study was published.
3.1. Related Work analysis
Assunção et al. (2022) [4]. This study presents toMicroservice, a multi-objective approach, composed of 5 objectives, to identify microservices from monolithic systems. The objectives used were: coupling, cohesion, modularization of resources, network overload, and reuse. A tool was created and applied to a monolithic system of the oil and gas industry to identify whether the chosen objectives are conflicting and whether the approach has better results than random search. The evaluation was performed using the Spearman correlation test. The results showed that the cohesion and coupling objectives are inversely proportional, that is, as coupling decreases, cohesion increases, and vice versa. Furthermore, the modularization of resources slightly affects the cohesion and network overhead functions. Concerning the comparison with random search, toMicroservice obtained better results, reaching a greater diversity of non-dominated solutions with different compromises between the objectives. However, it was not specified whether any monolithic application can make use of the tool or whether it is possible to control the granularity of the generated microservice recommendations.
Filippone et al. (2021) [5]. This study proposes an approach that aims to identify microservices, extract the architecture, and implement microservices in an automated way. In phase 1 (microservice identification), the source code of the legacy system is received as input and, after inspection of the code at the method level, a representation of the system in the form of a graph is generated. In phase 2 (architecture extraction), an optimization technique is performed on the graph generated in the previous step to obtain greater cohesion, less coupling, and minimal communication overhead between the identified microservices. In phase 3 (microservice implementation), the synthesis algorithm is executed on the result generated by phase 2 to automatically generate the microservices' source code. However, there was no evaluation of the generated decomposition, and it was not mentioned whether the tool can be used on any monolithic application or whether it is possible to adjust the granularity of the suggested recommendation.
Ivanov and Tasheva (2021) [17]. This article discusses a procedure for decomposing monolithic applications into microservices, based on a refactoring strategy. The procedure consists of eight steps to decompose monolithic applications without causing downtime: (1) evaluating the benefits of migrating from monolith to microservices, (2) defining context boundaries, (3) separating user interfaces, (4) removing the monolith user interface, (5) separating the service, (6) synchronizing data, (7) redirecting traffic to the new service, and (8) repeating the process for each context. To evaluate the idea, the eight steps were applied in a case study; however, no tool was implemented to automate the process, and the design of the microservice structure was done by an expert, so there is no way to know which criteria were used for the architectural design.
Kirby et al. (2021) [18]. This article presents an exploratory multi-method study, which used the experience of 10 professionals experienced in microservice extraction to try to answer two questions: (1) what is the applicability and usefulness of different types of relationships during the extraction process? And (2) what features of tool automation would benefit extractions that use element-to-element relationships? An online tool that uses graphs was created and tested by 10 professionals in the area of microservice extraction. The result was that practitioners prefer an analysis tool that can help them examine and experiment with different relationships and decompositions, as they consider various types of relationships during the extraction process. However, the study did not say whether the developed application could be applied to any monolithic application, and there was no comparison of the results of the proposed approach with an approach created by another study.
Table 1: Related work comparative analysis.

Related work                    CC1 CC2 CC3 CC4 CC5 CC6
Current work                    ●   ●   ●   ●   ●   ●
Assunção et al. (2022) [4]      ○ ○○○
Filippone et al. (2021) [5]     ○○○○○○
Ivanov; Tasheva (2021) [17]     ○○○○○
Kirby et al. (2021) [18]        ○ ○
Zhao; Zhao (2021) [6]           ○◐ ○○
Santos; Paula (2020) [7]        ○ ○
Sheikh; Bs (2020) [1]           ○○○○○
Rocha (2018) [8]                ○○
● Similar   ◐ Partially similar   ○ Not similar

Zhao and Zhao (2021) [6]. This study proposes an approach that extracts microservices from an object-oriented
monolithic system through the combination of domain identification and the logical division of the target application. Domain identification takes into account information from the application database, and the logical division takes into account the legacy system source code. The main steps of the microservice extraction process are (1) domain division, (2) tier splitting, (3) business splitting, and (4) cluster merging. To evaluate the process, a tool was created and used in two systems. The first system was an online e-commerce application called ShopMaster, a monolithic system whose structure is based on a software framework. The second was a blog called Solo, built in the MVC pattern but without the basis of any framework, that is, its structure is not clear. The result of extracting candidates for microservices from ShopMaster showed that only one candidate was not extracted. In relation to Solo, the number of candidates for microservices was inconsistent in terms of the number of services. Furthermore, this project did not use any metrics to infer that the microservices extraction was significant, and the tool is not open source.
Santos and Paula (2020) [7]. This study proposes a tool that analyzes the source code of a monolithic application and suggests a decomposition into microservices. The tool combines the MonoBreak algorithm [8], proposed by Rocha (2018), with the logical coupling strategy [19] to suggest microservices. The decomposition of monolithic applications into microservices is done through static analysis of the code, considering classes, methods, and change history. During the static analysis of the code, the grouping of services is performed considering how similar they are. The similarity is calculated by identifying the items in common between services. These items are the classes, methods, and source files that have changed together in the version control change history. To assess the quality of the recommendations, the tool was applied to three monolithic systems. The metrics used for the evaluation were the silhouette coefficient and granularity. However, it was not mentioned whether the approach can be applied to a monolithic API, and the proposed approach was not compared with the approach of another study.
Sheikh and Bs (2020) [1]. This study uses a data-driven approach to identify candidates for microservices based on log files collected during runtime. This approach consists of 6 steps: (1) Path-to-execution analysis. In this step, through the application log, the classes and database tables used are identified and a graphical representation is generated; (2) Frequency of modules. In this phase, it is identified which forms of execution are used most frequently and which modules are occasionally or never used during runtime; (3) Identification and removal of circular dependencies. In this step, the Union-Find algorithm is applied to identify the circular dependencies and, after that, the dependencies are removed, generating an acyclic graph; (4) Graph pre-processing. In this step, the acyclic graph is processed; (5) Selection of the decomposition choice using the graphs. In this step, a hash-based algorithm is applied to find out whether the system graph still has dependencies; if so, the monolithic application cannot be decomposed; (6) Selection of a solution that solves the problem. In this step, the suggested microservices are made available. The proposal was evaluated on a financial system; however, no tool was created to automate the process, there was no mention of whether any monolithic system could make use of the approach, and it was not said what the final result of the recommendation looks like or whether it is possible to adjust its granularity.
Rocha (2018) [8]. This study presents Monólise, a semi-automatic technique that uses an algorithm sensitive to the application's architecture to recommend microservices. Monólise is a programming language agnostic technique that makes use of three input parameters (the system configuration, the execution traces of the monolithic application's functionalities, and the similarity value) so that the MonoBreak algorithm can recommend microservices. The Monólise recommendation makes it possible to demonstrate, at the code level, which classes and methods of the functionalities will have to be migrated to new microservices. Monólise was evaluated through a case study, which consisted of comparing the decomposition performed by Monólise with the decomposition performed by a specialist in the target application used in the case study. However, the developed tool is not open source, preventing the evolution of the work, and the proposed approach was not compared with any approach from another study.
3.2. Comparative Analysis and Research Opportunities
Comparison criteria. Six Comparison Criteria (CC) were defined to identify similarities and differences between the proposed work and the selected articles. This mode of comparison has already been validated in previous studies [15, 20, 21, 22, 23] and proved to be effective in identifying research opportunities. The criteria are described below:
Figure 2: General process diagram
Metrics (CC1): This criterion verifies whether the study used any metrics to assess the result obtained;
Support for decomposing monolithic APIs (CC2): This criterion assesses whether the proposed approach can decompose a monolithic API into microservices;
Tool support (CC3): This criterion assesses whether the study produced a prototype to automate (even partially) the process of generating microservices;
Open source tool (CC4): This criterion assesses whether the study has developed an open source tool, so that the community can test the project on its own and create enhancements based on the exposed source code;
Assessment method (CC5): This criterion assesses whether a case study was used in which the approach proposed by the article was compared with the approach of another study;
Granularity of microservices (CC6): This criterion assesses whether the study offers an option to adjust the granularity of the recommendations, so that microservices can be composed of many (coarse granularity) or few (fine granularity) services.
Research Opportunities. Table 1 presents a comparative analysis. This table contrasts the related works with the proposed one, highlighting what is similar and different between them. The observed gap is that, of the 48 comparison criteria, only 37.5% were fully satisfied, resulting in the following research opportunities: (1) no related work is premised on decomposing a monolithic API (explored in Section 4), (2) no paper compared its proposed approach with the approach of another study (explored in Section 5.4), and (3) only 2 studies [18, 7] created an open source tool, where one can see the implementation and make possible improvements (explored in Section 4.3).
4. Proposed Approach
This section describes the API Slicer, a feature-
based approach to decomposing of monolithic APIs into
microservices. Section 4.1 describes the vision of the process,
Section 4.2 talks about the proposed architecture, Section 4.3
details the proposed algorithms and Section 4.4 talks about
implementation aspects.
4.1. Process view
This section describes the phases used for the API Slicer
approach to be able to decompose a monolithic API into
microservices. The name of the approach comes from the
junction of the term API (the target application of this
approach) +Slicer (proposes a decomposition, making the
API smaller). API Slicer aims to support the development
team, proposing microservice suggestions, in case the team
understands that it is time for the monolithic API to be
decomposed, as it has many functionalities intertwined with
each other. The complete flow of the microservice extraction
process can be seen in Figure 2. And below, all the steps of the
process are detailed:
Step 1: Execution List Capture. In this step, the user executes all the functionalities of the target monolithic API that they want API Slicer to evaluate. At each execution of a new functionality, the user captures the execution trace and saves it inside a text file (.txt) named after the executed functionality. The trace consists of lines that show which method and class were called when a certain functionality was executed. The flow of executing a functionality, capturing the trace, and creating a text file (.txt) containing the execution trace is repeated until the user has gone through all the functionalities of the target API.
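To make the input concrete, the excerpt below illustrates what a captured trace file (e.g., login.txt) might contain. The layout is a hypothetical sketch inferred from the "class: ..., method: ..." parsing performed in Algorithm 4 (Section 4.3); the package prefix is illustrative, and the class names echo Figure 1. The actual output of an instrumentation tool such as VisualVM may differ.

class: com.example.blog.security.JWtTokenProvider, method: generateToken
class: com.example.blog.service.CustomerDetailService, method: loadUserByUsername
class: com.example.blog.repository.UserRepository, method: findByUsername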
Step 2: Feature Slicing. This is the step where the API is evaluated and API Slicer suggests a decomposition into microservices. API Slicer receives as input parameters: (1) the execution traces produced in Step 1, (2) the packages that must be considered to generate the recommendation, and (3) the similarity level that must be applied in the recommendation. The similarity level is a cut-off value that tells when two or more features must be present in the same microservice. After receiving the input parameters, the files are unzipped and converted into functionalities. Each functionality is composed of classes and their respective methods. To identify how similar each feature is in relation to another, a comparison is made between the names of the classes of both features: the more classes in common, the higher the level of similarity. After identifying the level of similarity between the features, a grouping is generated, in which all functionalities that reach a similarity level equal to or greater than that provided by the user will be present in the same microservice. Finally, the microservice recommendation list is printed, composed of the classes, methods, and features that should be present in each microservice.
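The fragment below sketches, in Java, the class-name comparison just described. It is a minimal illustration rather than the tool's actual code: the Feature type and the similarityOf method are hypothetical names, and the ratio shown is the simplified form of the similarity calculation in Algorithm 5 (Section 4.3), whose 0.1 discount applies to both numerator and denominator and therefore cancels out.

import java.util.HashSet;
import java.util.Set;

class Feature {
    final String name;
    final Set<String> classNames; // class names extracted from the execution trace

    Feature(String name, Set<String> classNames) {
        this.name = name;
        this.classNames = classNames;
    }
}

class SimilaritySketch {
    // Percentage of feature a's classes that also appear in feature b.
    static double similarityOf(Feature a, Feature b) {
        Set<String> common = new HashSet<>(a.classNames);
        common.retainAll(b.classNames); // intersection of the two class-name sets
        return 100.0 * common.size() / a.classNames.size();
    }

    public static void main(String[] args) {
        Feature login = new Feature("login",
                Set.of("JWtTokenProvider", "CustomerDetailService", "UserRepository"));
        Feature comment = new Feature("comment",
                Set.of("CommentService", "CustomerDetailService", "UserRepository"));
        double similarity = similarityOf(login, comment); // 2 of 3 classes shared -> 66.6%
        int similarityLevel = 60; // user-supplied cut-off value
        System.out.println(similarity >= similarityLevel
                ? "group login and comment in the same microservice"
                : "send login and comment to individual microservices");
    }
}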
4.2. Architectural view
Figure 3 presents a component-based architectural model. Each component has an implicit purpose in implementing the process proposed in Figure 2. Next, each architectural component is described.

Figure 3: Proposed component-based architectural model
User interface: This component is the terminal where the user provides: (1) the similarity level value, (2) the packages that should be considered in the recommendation, and (3) the path of the zip folder that contains all the execution trace files captured from the monolithic API (a sample session is sketched after this list). The similarity value is the cut-off value that tells whether two or more features will be in the same microservice or will go to individual microservices. For example, if the similarity value is 67% and two features have a similarity equal to or greater than 67%, both features will be in the same microservice. The packages that should be considered are the packages that API Slicer relies on to say how similar the functionalities are.
Data extraction: This component is responsible for decompressing the files from the user-supplied zip folder into a directory that the application has access to, and for mapping these unzipped files into functionalities that serve as input for the other components of the architecture.
Identification of similarities: This component is responsible for comparing the name of each class of functionality A with the classes of functionality B and, based on that, performing the similarity calculation to identify how close the functionalities are. This comparison process is done with all the features mapped from the traces provided by the user. After completion, the result of the similarities is stored in a dictionary.
Functionality grouping: This component is responsible for going through all the mapped functionalities and checking whether any functionality in the dictionary has a similarity level equal to or greater than that provided by the user. If so, the functionalities will be together in the same microservice; otherwise, the functionalities will go to different microservices.
Microservices generation: This component is
responsible for assembling the microservices
recommendation list, the list of functionalities that
should remain in the API, and printing these lists on
the terminal. Each microservice recommendation is
composed of functionalities and each functionality has its
respective classes and methods.
Slicer Orchestrator: This component is responsible for mediating between the components (1) User interface, (2) Data extraction, (3) Identification of similarities, (4) Functionality grouping, and (5) Microservices generation, in order to suggest microservice recommendations and indicate which functionalities should remain in the monolithic API.
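As an illustration, a session with the terminal-based user interface could look like the sketch below. The three prompts mirror those in Algorithm 2 (Section 4.3); the path, package names, and similarity value are hypothetical.

Zip file path: /home/user/traces/blog-api-traces.zip
Enter packages: com.example.blog.service,com.example.blog.repository
Enter similarity: 40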
4.3. Algorithms
This section describes the algorithm used to recommend microservices (Algorithm 1). The algorithm consists of five subroutines: setInputData, getFileNames, convertFilesToFeatures, createSimilarityMap, and groupFeatures. It aims to transform the zip folder with the execution traces of the target API, together with the similarity level and packages provided by the user, into microservice recommendations. Below, each of the algorithm's five subroutines is described in detail:
Algorithm 1: Microservice recommendation
1  function executeSlicerRecommendation()
2    consoleService.setInputData();
3    files ← fileService.getFileNames(consoleService.getReadDirectory());
4    packages ← Arrays.asList(consoleService.getImportantPackages().split(","));
5    functionalities ← featureService.convertFilesToFeatures(files, packages);
6    similarityMap ← similarityService.createSimilarityMap(functionalities);
7    completeFunctionalities ← featureService.convertFilesToFeatures(files);
8    similarity ← consoleService.getSimilarityValue();
9    return microservice.groupFeatures(similarityMap, completeFunctionalities, similarity);
10 end
setInputData: Algorithm 2 is the first subroutine to be executed by the microservice recommendation algorithm and aims to capture the input data provided by the user via the terminal. The input data are: (1) the path where the zip folder with all the traces collected from the target API is located (line 4); (2) the packages that must be considered when identifying the level of similarity between the features (line 6); and (3) the similarity value, which is the cut-off value that tells whether two or more features will share the same microservice (line 8).
Algorithm 2: Microservices recommendation: Subroutine 1
1 function setInputData()
2   input ← new Scanner(System.in);
3   System.out.println("Zip file path:");
4   readDirectory ← input.nextLine();
5   System.out.println("Enter packages:");
6   packages ← input.nextLine();
7   System.out.println("Enter similarity:");
8   similarity ← Integer.parseInt(input.nextLine());
9 end
getFileNames: Algorithm 3 is the second subroutine executed by the microservices recommendation algorithm. It receives as a parameter the path where the zip folder containing all the execution traces of the target API is located, and it aims to unzip the zip folder and report the location of the unzipped files. The unzipping process starts on line 2, where the directory in which the unzipped files will be placed is defined. On line 4, the decompression is executed, placing a copy of all the files present inside the zip folder into the directory defined on line 2. On line 5, the directory is concatenated with the file names to identify the location of the unzipped files.
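The snippet below is a minimal Java sketch of what the unzip call in line 4 of Algorithm 3 could delegate to, using the standard java.util.zip API. It assumes the zip folder contains only flat .txt trace entries (no nested directories) and is not the tool's actual implementation.

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

class UnzipSketch {
    // Extracts every entry of the zip file into destinationDirectory and
    // returns the paths of the extracted trace files.
    static List<String> unzip(String zipPath, String destinationDirectory) throws IOException {
        List<String> extractedPaths = new ArrayList<>();
        Files.createDirectories(Paths.get(destinationDirectory));
        try (ZipInputStream zis = new ZipInputStream(new FileInputStream(zipPath))) {
            ZipEntry entry;
            while ((entry = zis.getNextEntry()) != null) {
                Path target = Paths.get(destinationDirectory, entry.getName());
                // Copies the bytes of the current entry only (the stream signals
                // end-of-entry as end-of-stream to Files.copy).
                Files.copy(zis, target, StandardCopyOption.REPLACE_EXISTING);
                extractedPaths.add(target.toString());
            }
        }
        return extractedPaths;
    }
}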
convertFilesToFeatures: Algorithm 4 is the third subroutine
executed by the microservices recommendation algorithm.
Algorithm 3: Microservices recommendation: Subroutine 2
1 function getFileNames(readDirectory)
2   destinationDirectory ← "src/main/resources/output";
3   fileNames ← list();
4   unzip(readDirectory, destinationDirectory, fileNames);
5   return getPathArquivos(destinationDirectory, fileNames);
6 end
This subroutine receives as parameters the list containing the locations of the unzipped files and the list of packages that will be prioritized for the microservices recommendation. It aims to convert the unzipped files into functionalities that will be used by the later subroutines. Functionalities are composed of a feature name and a list of classes, and each class has a package and its respective methods. The process of converting files into system features starts at line 7, where one file path at a time is converted into a file (line 8), the feature name is identified (line 9), and the file is inserted into a buffer (line 11). For each line of this buffer (line 13), the name of the class (line 15) and the name of the package (line 16) are identified and, if the identified package is equal to any of the packages received by parameter (line 17), the class is constructed with its respective name (line 19), package (line 20), and methods (line 23) and added to a dictionary whose key is the name of the functionality (line 35).
createSimilarityMap: Algorithm 5 is the fourth subroutine executed by the microservices recommendation algorithm. This subroutine receives as a parameter the functionalities mapped by the system in the previous step and aims to create a dictionary in which the key is the name of a functionality and the value is the similarity levels of the other functionalities in relation to it. The process of creating the dictionary of similarities between the features starts on lines 3 and 5, where, for each feature, the feature name and its classes are taken. If the functionalities are not the same (line 6), all classes that are common to both functionalities are searched for (line 7), the similarity is calculated (line 8), and the structure that holds the name of the functionality and its level of similarity to the other functionality is created (line 9) and added to the dictionary (line 12).
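To make line 8 of Algorithm 5 concrete: with C denoting the set of classes common to functionalities A and B, the similarity of B in relation to A is

similarity(A, B) = (|C| − 0.1|C|) / (|A| − 0.1|A|) × 100 = (0.9|C|) / (0.9|A|) × 100 = (|C| / |A|) × 100,

since the 0.9 factors cancel. For example, if functionality A has 10 classes and shares 6 of them with functionality B, the similarity is 6/10 × 100 = 60%, so any user-supplied similarity level of 60% or less would place A and B in the same microservice.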
groupFeatures: Algorithm 6 is the fifth subroutine executed by the microservices recommendation algorithm. This subroutine receives as parameters (1) the dictionary containing the functionality similarity levels and (2) a dictionary with the functionality structures (name, methods, classes, and package). It is intended to generate the recommended microservices. The process starts at line 3, where, for each similarity mapped, the row (name of the feature) and the columns (containing the similarity levels of the features) are taken. In line 4, the name of the functionality is retrieved. In line 5, all the features that had a high similarity level in relation to the functionality obtained in line 4 are taken. If the microservices list is empty (line 6), the microservice class structure is generated (line 8) for the functionality obtained in line 4. Afterward, the class structure of the
Algorithm 4: Microservices recommendation: Subroutine 3
1  function convertFilesToFeatures(functionalityFiles, packages)
2    functionalityMaps ← map();
3    class ← null;
4    classes ← null;
5    file ← null;
6    functionalityName ← null;
7    foreach fileName in functionalityFiles do
8      file ← File(fileName);
9      functionalityName ← file.getName().substring(0, file.getName().lastIndexOf("."));
10     classes ← list();
11     bufferedReader ← BufferedReader(FileReader(file));
12     auxiliary ← null;
13     while (auxiliary ← bufferedReader.readLine()) ≠ null do
14       line ← auxiliary.split(",");
15       className ← line[0].substring(line[0].lastIndexOf(": ") + 2);
16       packageName ← className.substring(0, className.lastIndexOf("."));
17       if isAuthorizedPackage(packageName, packages) then
18         class ← Class();
19         class.setClassName(className);
20         class.setPackageName(packageName);
21         methodList ← list();
22         methodList.add(line[1].substring(line[1].lastIndexOf(": ") + 2));
23         class.setMethodName(methodList);
24         classIndex ← getClassIndex(classes, class);
25         if Objects.isNull(classIndex) then
26           classes.add(class);
27         else
28           classA ← classes.get(classIndex);
29           classes.remove(classA);
30           classA.addMethodName(line[1].substring(line[1].lastIndexOf(": ") + 2));
31           classes.add(classIndex, classA);
32         end
33       end
34     end
35     functionalityMaps.put(functionalityName, classes);
36   end
37   return functionalityMaps;
38 end
Algorithm 5: Microservices recommendation: Subroutine 4
1  function createSimilarityMap(functionalities)
2    similarityMap ← map();
3    foreach (functionality1, classes1) in functionalities do
4      columns ← list();
5      foreach (functionality2, classes2) in functionalities do
6        if !functionality1.equals(functionality2) then
7          equalClasses ← intersection(classes1, classes2);
8          similarity ← Double.valueOf(equalClasses.size() − (0.1 × equalClasses.size())) / Double.valueOf(classes1.size() − (0.1 × classes1.size())) × 100;
9          columns.add(Column(functionality2, similarity));
10       end
11     end
12     similarityMap.put(functionality1, columns);
13   end
14   return similarityMap;
15 end
functionalities filtered in line 5 is generated (line 9), and the microservice containing this class structure is created (line 10). However, if the list of microservices is not empty (line 11), it is checked whether the functionality is already present inside a microservice (lines 12 and 13). After that, the most appropriate strategy is chosen (line 14) to update the microservices structure (line 15). Finally, all identified microservices are returned (line 19).
Algorithm 6: Microservices recommendation: Subroutine 5
1  function groupFeatures(similarityMap, functionalitiesMap)
2    microservices ← list();
3    foreach (row, columns) in similarityMap do
4      functionalities ← row;
5      filteredColumns ← filteredColumnsByThreshold(columns);
6      if microservices.isEmpty() then
7        classResponses ← list();
8        generateMicroserviceClasses(functionalitiesMap, functionalities, classResponses);
9        functionalities ← filterColumn(functionalitiesMap);
10       microservices.add(generateMicroservice(functionalities, classResponses));
11     else
12       microserviceOneIndex ← getMicroserviceIndex(microservices, functionalities);
13       microserviceTwoIndex ← getMicroserviceIndex(microservices, filteredColumns);
14       microserviceFacade.getStrategyList().stream().filter(strategy → strategy.isCompatible(microserviceOneIndex, microserviceTwoIndex))
15         .forEach(strategy → strategy.generateRecommendation(microservices, filteredColumns,
16           functionalitiesMap, finalFunctionalities));
17     end
18   end
19   return microservices;
20 end
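Lines 14 to 16 of Algorithm 6 select grouping strategies with the Java Streams API. The sketch below illustrates the underlying strategy pattern; the interface, its methods, and the Microservice type mirror the calls in the listing but are illustrative names, since the concrete strategies are not enumerated here.

import java.util.List;

class Microservice { /* functionality names plus their classes and methods */ }

interface GroupingStrategy {
    // Decides whether this strategy applies to the pair of microservice indexes
    // (an index may be null when the functionality is not yet in any microservice).
    boolean isCompatible(Integer oneIndex, Integer twoIndex);

    // Updates the recommendation list, e.g., merging two microservices or adding a new one.
    void generateRecommendation(List<Microservice> microservices);
}

class StrategySelectionSketch {
    static void apply(List<GroupingStrategy> strategies, List<Microservice> microservices,
                      Integer oneIndex, Integer twoIndex) {
        strategies.stream()
                .filter(s -> s.isCompatible(oneIndex, twoIndex))        // keep compatible strategies
                .forEach(s -> s.generateRecommendation(microservices)); // let each update the list
    }
}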
4.4. Implementation aspects
To implement the API Slicer approach, an open-source tool was created. The tool's technologies are: (1) the Java 11 language, an object-oriented programming language usually used in back-end applications; (2) the Jackson Databind library, to convert objects into JSON structures and thereby present the recommendation in a more user-friendly way; and (3) the Maven dependency manager, to manage all libraries inserted into the project. As support tools, the following were used: (1) the IntelliJ IDE, the environment used to implement the code; (2) GitHub, the repository used to store the application's source code; and (3) VisualVM, a graphical tool that was used to capture the execution traces of the tested target APIs.
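As an example of item (2), the sketch below shows how a recommendation could be rendered as JSON with Jackson Databind. The map-based recommendation shape is illustrative, not the tool's actual output schema.

import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import java.util.Map;

class JsonOutputSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical recommendation: one microservice holding two functionalities.
        Map<String, Object> recommendation = Map.of(
                "microservice", "comment-service",
                "functionalities", List.of("createComment", "listComments"));
        ObjectMapper mapper = new ObjectMapper();
        // Pretty-printing makes the recommendation easier to read on the terminal.
        System.out.println(mapper.writerWithDefaultPrettyPrinter()
                .writeValueAsString(recommendation));
    }
}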
5. Evaluation
This section focuses on describing our case study to evaluate
the proposed approach. For this, we followed well-known
guidelines [24]. Section 5.1 presents descriptions of target
applications used in our evaluation. Section 5.2 presents the
metrics used to evaluate the result. Section 5.3 describes how
the result is evaluated. Section 5.4 discusses the obtained
results.
5.1. Description of the target application
This section describes the three target applications used in
our study:
Blog API: It is an API whose main functions are login and the creation of posts and comments, using JWT authentication as a security criterion. It was chosen because it has several endpoints, makes use of Spring Boot, and has good documentation for reproduction on a local machine. The application can be found on GitHub¹. The technologies applied were Spring Boot, Postgres, Java 11, Lombok, Maven, JJWT, JPA, Spring Security, and ModelMapper.
Shopping Cart API: It is an API used to make online purchases. Among its functionalities are: add a product to the cart, search the cart, search the user profile, search all products, and finalize an order, in addition to using JWT authentication. The application was chosen because it has several endpoints, has documentation, and makes use of Spring Boot. The technologies used were: Spring Boot, Postgres, Java 11, Maven, JJWT, JPA, and Spring Security. The application is available on GitHub². Because this work uses only APIs, the layer responsible for the frontend was completely removed.
Order API: It is an order CRUD that uses the hexagonal architecture and makes use of technologies such as Postgres, Java 11, Spring Boot, Gradle, Lombok, and Swagger. The main supported functionalities are: (1) search branch, (2) create branch, (3) delete branch, (4) add items to branch, (5) create item, (6) query item, (7) add items to price list, (8) delete price list items, (9) create price list, (10) fetch price list, (11) save payment, (12) fetch payment, (13) remove payment, (14) update payment, and (15) generate the inventory report by branch. However, for testing purposes, only 4 functionalities were used: search for items, search for items by id, search branches, and generate a report by branch. The application was chosen because it has several endpoints, the author has mastery over its operation, and it makes use of Spring Boot. The application is available on GitHub³.
5.2. Metrics
In order to evaluate the results generated by the API Slicer and Monólise approaches, the present work makes use of precision and recall measures to identify the quality of what is being recommended and the number of correct recommendations being made, in relation to the desired result.
¹Blog API repository: https://github.com/RameshMF/springboot-blog-rest-api
²Shopping-cart repository: https://github.com/zhulinn/SpringBoot-Angular7-Online-Shopping-Store
³Order API repository: https://github.com/CarlosFernandoXavier/hexagonal-architecture
Precision. It is an effectiveness measure that aims to exclude non-relevant elements from the retrieved set. It is seen as the proportion of relevant elements among the retrieved elements [25]. In this study, precision is used to identify how many of the (1) recommended microservices and (2) functionalities that should remain in the API are correct, in relation to the generated result. The precision formula [26] is given below.
precision(q) = P(retrieved, relevant | q) / P(retrieved | q)    (1)
Recall. It is an effectiveness measure that includes relevant items in the retrieved set. It is seen as the proportion of relevant elements retrieved, in relation to the total number of relevant items [25]. In this work, recall is used to identify whether the numbers of (1) recommended microservices and (2) functionalities that should remain in the API are correct, in relation to the expected result. The recall formula [26] is presented below.
recall(q) = P(retrieved, relevant | q) / P(relevant | q)    (2)
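As a hypothetical numeric illustration of both formulas: if an approach recommends four microservices and three of them also appear in the expected decomposition, which contains five microservices, then precision = 3/4 = 75% and recall = 3/5 = 60%.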
5.3. Experimental process
Figure 4 shows all the steps followed to perform the analysis and comparison of the results obtained by the API Slicer and Monólise approaches when they were applied to three different target applications. The way of presenting the experimental process was based on the work of [27]. The stages of the experimental process are presented below:
Phase 1: Choosing the target applications. This study used three target applications to evaluate the API Slicer and Monólise approaches: the Order API, the Shopping Cart API, and the Blog API (further details can be found in Section 5.1). The requirements used for choosing the APIs were: (1) the APIs are in a public repository, (2) the APIs can be executed locally, (3) the APIs have more than two functionalities, and (4) the APIs have been developed in Java, using the Spring Boot framework.
Phase 2: Manual decomposition. This study used three target applications: the Order API, the Shopping Cart API, and the Blog API (further details can be found in Section 5.1). The target applications, one at a time, were evaluated in terms of their functionality and code. After the evaluation, the author suggested a recommendation of which features should become microservices and which features should remain in the monolithic API. These recommendations provided the baseline for what the expected result should be after executing the approaches.
Phase 3: Executing the implementation of the chosen approach. For API Slicer, the following were provided as input data: (1) the execution traces of the features for which one wants to generate recommendations, (2) the applied similarity level (the cut-off value that says when services should be in the same microservice), and (3) the packages that should be considered when recommending microservices. For the Monólise approach by Rocha (2018), the following were used as input data: (1) the execution traces of the features for which one wants to generate recommendations, (2) the applied similarity level (the cut-off value that says when services must be in the same microservice), and (3) the configuration file, in which the model, DAO, service, repository, and controller packages were given weights that can vary from 0.1 to 1 (a purely illustrative example of such a configuration is sketched after this list).
Phase 4: Collecting and analyzing data. The results provided by both approaches were collected and then structured in a spreadsheet. In this spreadsheet, the data of the expected recommendation (prepared by the author) and the recommendations suggested by API Slicer and Monólise were placed. Both the API Slicer and Monólise approaches used a similarity level that started at 10% and increased in steps of 10 until reaching 90%. At each variation of the similarity level, the precision and recall metrics were measured. The precision and recall measures were used to identify which approach could reach the expected recommendation faster, using the lowest level of similarity. Figure 5, Figure 6, and Figure 7 show the scenarios in which at least one of the approaches reached the expected result. One caveat to be made is that Monólise does not say which functionalities should remain in the monolithic API, so, to cover this gap, the microservice that had the largest number of services was classified as the one that should remain in the monolithic API.
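For illustration only, the configuration file mentioned in Phase 3, item (3), could look like the sketch below. The format is hypothetical, as the original Monólise configuration file is not publicly available; only the package categories and the 0.1 to 1 weight range come from the description above.

{
  "packageWeights": {
    "model": 0.3,
    "dao": 0.5,
    "service": 1.0,
    "repository": 0.8,
    "controller": 0.1
  }
}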
5.4. Results
This section presents the results collected after performing the steps described in Section 5.3. Figure 5, Figure 6, and Figure 7 present the results obtained by applying the API Slicer and Monólise approaches to the target applications Blog API, Shopping Cart API, and Order API.
Case 1: Shopping Cart API. The API Slicer approach had better results both for recommending microservices and for indicating which features should remain in the monolithic API. When recommending microservices, starting at a 40% similarity level, API Slicer begins to recommend microservices with 100% precision and recall, while with Monólise this result was only reached after 60%. Regarding the functionalities that must be kept in the monolithic API, although API Slicer and Monólise both reached 100% in the recall metric, API Slicer managed to reach the maximum level of precision from a 40% similarity level, whereas Monólise achieved this feat only when the similarity level reached 70%.
Case 2: Order API. In the Order API target application, the API Slicer and Monólise approaches tied. In recommending microservices, both approaches achieved the same precision and recall metrics for different levels of similarity. Regarding the functionalities that must be kept in the monolithic API, both API Slicer and Monólise reached the same precision and recall results for all similarity levels applied.

Figure 4: Overview of the experimental process

Figure 5: Case 1: shopping-cart-api
Case 3: Blog API. In the Blog API target application, the Monólise approach had better results both for recommending microservices and for saying which features should remain in the monolithic API. In the microservices recommendation, both API Slicer and Monólise had a low performance, but in the recall metric Monólise managed to reach 100%. Regarding the functionalities that must remain in the monolithic API, although both approaches reached the same results for recall, in the precision metric, in the range of 20% to 40% similarity, Monólise reached a precision of 100%.
Figure 6: Case 2: Order API
Figure 7: Case 3: Blog API
6. Discussion
Based on the observations of the results obtained during the construction of this project and on the comparison of both approaches, the following issues were identified for discussion:
Better performance. Based on the results obtained for each of the three target applications, it can be said that the API Slicer approach performed better in recommending microservices. It achieved 100% precision and recall in two of the three target applications tested (Section 5.4). This is due to the user's flexibility to choose which packages they want to be considered when generating recommendations. A second consequence of this choice of packages is that transversal classes tend not to interfere with the result of the recommendation, as was seen in the Monólise work.
Incorrect recommendation. It can be noticed that the API Slicer approach did not have a satisfactory result when it generated the recommendations for the Blog API target application. This happened because the packages that differentiated the login functionality were also used by other services, making it impossible to reach the desired result. This leads to the speculation that, when there are tightly coupled services, the API Slicer approach may not be the best choice. However, to confirm this assumption, further comparisons would have to be made with tools using different approaches.
Execution trace. The execution trace is a file that tells which classes and methods were executed when a certain functionality was activated. In this project, it serves as input for API Slicer to understand the target application and be able to make recommendations. However, this is one of the most costly steps in the recommendation process, because, in addition to being a manual step performed only by the user, the Java code instrumentation tools used to assist in this process either cannot deliver the trace in a simple way or have limitations, as can be seen in the work [7].
Open source tools. There are several works related to the decomposition of monolithic applications, using the most diverse approaches, such as graphs [18], joining code with the database [6], and analyzing the code syntactically [7], among others. However, in most cases, the source code of the tools is not available in a public repository, preventing a series of improvements that could arise through the comparison of approaches and their respective evolutions. The observation that more works should make their source code available, for the evolution not only of the tool but also of the research, corroborates the work of [28], which says the following: "universities have synergy with open source programs, because creating and sharing knowledge for the public good is a fundamental part of the mission of universities".
6.1. Challenge and Implications
This section presents the challenges and implications derived from the analysis of the obtained results.
Challenge 1: Proposed approach. Although API Slicer had better results in recommending microservices in terms of the precision metric, for future research it would be interesting to create an approach that works in any scenario and that has a certain intelligence behind it, so that the result of the microservices recommendation is not strongly influenced by user choices, but rather by already validated metrics and knowledge. In the case of API Slicer, it is possible to change the result of the microservices recommendation through the execution traces provided, the similarity value, or the names of the packages that must be considered at the time of the recommendation. This can be seen as a problem, as the recommendation will be biased towards something the user already wants as a result, and what the user wants as a result is not always the best.
Implication 1: Execution trace. The execution trace is a file that contains all the classes and methods that were used when a functionality was executed. The execution trace can be obtained through instrumentation tools or through the application of aspect-oriented programming in the project. After analysis, it was identified that making use of the execution trace to map the functionalities within the application is extremely useful and is recommended for any approach that needs to understand which classes and methods a particular functionality is composed of.
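A minimal sketch of the aspect-oriented option is shown below, using AspectJ annotations (as supported, for example, by Spring AOP). The pointcut expression and base package are hypothetical, and the printed line mirrors the trace layout assumed in Section 4.1.

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class ExecutionTraceAspect {
    // Intercepts every method execution under the (hypothetical) application package
    // and prints one "class: ..., method: ..." line per call, which could feed API Slicer.
    @Before("execution(* com.example.blog..*.*(..))")
    public void captureTrace(JoinPoint joinPoint) {
        String className = joinPoint.getSignature().getDeclaringTypeName();
        String methodName = joinPoint.getSignature().getName();
        System.out.println("class: " + className + ", method: " + methodName);
    }
}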
6.2. Limitations
The case study addressed in this work is an initial study that explores the decomposition of monolithic APIs, a topic little explored in the literature. Due to this, it has some limitations that need to be considered. For example, although there is nothing that prevents the use of this project for other applications, its focus only encompasses APIs; therefore, the effectiveness tests of the recommendations are restricted to this scope. As the tool created by the Monólise approach is not available in a public repository and there was no access to it, the prototype was recreated based on information from the Monólise article; therefore, the results obtained with the recreated tool may differ a little from the original implementation (either positively or negatively). The quality of the recommendations has a strong dependence on the trace captured by the user, and since this capture is entirely the user's responsibility, as the tool does not have any functionality that captures the execution trace, this point is seen as a limiting factor.
7. Conclusions and Future Work
This article presented API Slicer, an approach based on the level of similarity between features to recommend the decomposition of monolithic APIs into microservices. The approach was evaluated through a case study in which the proposed approach was compared with the Monólise approach [8]. The precision and recall metrics were used to compare the results generated by both approaches. The results indicated that the API Slicer approach had better results in terms of precision for recommending microservices. Regarding the recall metric, both approaches had similar results. Our results indicate that the proposed approach seems to be promising, but it still needs important improvements for effective use in the software industry.

As future work, three important points are highlighted: (1) making use of machine learning to improve the recommendations; (2) implementing a functionality that captures the execution trace of a target API; and (3) running the tool on more applications to identify whether the recommendations remain assertive. Further empirical studies are still required to grasp whether our findings hold in other contexts, considering robust APIs. We do not claim generalization of our initial evaluation beyond the target applications. Finally, we hope that API Slicer encourages other practitioners and researchers to propose new approaches and tool support. This work can be seen as a first step in a more ambitious agenda on better supporting decomposition tasks.
References
[1] A. Sheikh, A. Bs, Decomposing monolithic systems to
microservices, in: 2020 3rd International Conference on Computer
and Informatics Engineering (IC2IE), 2020, pp. 478–481.
doi:10.1109/IC2IE50715.2020.9274641.
[2] R. G. Urdangarin, K. Farias, J. Barbosa, Mon4aware: A multi-objective
and context-aware approach to decompose monolithic applications, in:
XVII Brazilian Symposium on Information Systems, SBSI 2021, 2021.
[3] O. Amaral Júnior, Arquitetura de micro serviços: uma comparação com sistemas monolíticos (2017).
[4] W. Assunção, T. Colanzi, L. Carvalho, A. Garcia, J. Alves Pereira, M. Lima, C. Lucena, Analysis of a many-objective optimization approach for identifying microservices from legacy systems, Empirical Software Engineering 27 (03 2022).
[5] G. Filippone, M. Autili, F. Rossi, M. Tivoli, Migration of monoliths
through the synthesis of microservices using combinatorial optimization,
in: 2021 IEEE International Symposium on Software Reliability
Engineering Workshops (ISSREW), 2021, pp. 144–147.
[6] J. Zhao, K. Zhao, Applying microservice refactoring to object-oriented legacy system, in: 2021 8th International Conference on Dependable Systems and Their Applications (DSA), 2021, pp. 467–473.
[7] A. P. dos Santos, H. B. De Paula, Implementação e avaliação de uma ferramenta de sugestões para decomposição de aplicação monolítica em microsserviços (2020).
[8] D. Pereira da Rocha, Monólise: Uma técnica para decomposição de aplicações monolíticas em microsserviços (2018).
[9] S. Newman, Building microservices, O'Reilly Media, Inc., 2021.
[10] A. Gupta, A first look at microservice (2015).
[11] J. P. D. Lucio, et al., Análise comparativa entre arquitetura monolítica e de microsserviços (2017).
[12] S. Wahlström, Comparing scaling benefits of monolithic and microservice architectures implemented in Java and Go (2019).
[13] T. Thüm, C. Kästner, F. Benduhn, J. Meinicke, G. Saake, T. Leich, FeatureIDE: An extensible framework for feature-oriented software development, Science of Computer Programming 79 (2014) 70–85.
[14] S. Apel, C. Kästner, An overview of feature-oriented software development, J. Object Technol. 8 (5) (2009) 49–84.
[15] L. Lazzari, K. Farias, Event-driven architecture and REST architectural style: An exploratory study on modularity, Journal of Applied Research and Technology 1 (1) (2022) 1–2.
[16] E. W. Júnior, K. Farias, B. da Silva, On the use of UML in the Brazilian industry: A survey, Journal of Software Engineering Research and Development 10 (2022) 10–1.
[17] N. Ivanov, A. Tasheva, A hot decomposition procedure: Operational
monolith system to microservices, in: 2021 International Conference
Automatics and Informatics (ICAI), 2021, pp. 182–187.
[18] L. J. Kirby, E. Boerstra, Z. J. Anderson, J. Rubin, Weighing the evidence:
On relationship types in microservice extraction, in: 2021 IEEE/ACM
29th International Conference on Program Comprehension (ICPC), 2021,
pp. 358–368.
[19] G. Mazlami, J. Cito, P. Leitner, Extraction of microservices from
monolithic software architectures, in: 2017 IEEE International
Conference on Web Services (ICWS), 2017, pp. 524–531.
doi:10.1109/ICWS.2017.61.
[20] M. Rubert, K. Farias, On the effects of continuous delivery on code quality: A case study in industry, Computer Standards & Interfaces 81 (2022) 103588.
[21] K. Farias, T. C. de Oliveira, L. J. Gonçales, V. Bischoff, UML2Merge: a UML extension for model merging, IET Software 13 (6) (2019) 575–586.
[22] R. G. Urdangarin, K. Farias, J. Barbosa, Mon4aware: A multi-objective
and context-aware approach to decompose monolithic applications, in:
XVII Brazilian Symposium on Information Systems, 2021, pp. 1–9.
[23] V. Bischoff, K. Farias, VitForecast: An IoT approach to predict diseases in vineyard, in: XVI Brazilian Symposium on Information Systems, 2020, pp. 1–8.
[24] P. Runeson, M. Höst, Guidelines for conducting and reporting case study research in software engineering, Empirical software engineering 14 (2) (2009) 131–164.
[25] M. Buckland, F. Gey, The relationship between recall and precision,
Journal of the American society for information science 45 (1) (1994)
12–19.
[26] T. Roelleke, Information retrieval models: Foundations and relationships,
Synthesis Lectures on Information Concepts, Retrieval, and Services 5 (3)
(2013) 1–163.
[27] K. Farias, A. Garcia, J. Whittle, C. Chavez, C. Lucena, Evaluating the effort of composing design models: A controlled experiment, Vol. 7590, 2012, pp. 676–691. doi:10.1007/978-3-642-33666-9_43.
[28] C. Coppola, E. Neelley, Open source-opens learning: Why open source makes
sense for education (2004).