Engineering on the Internet for global software production

Abstract

Over the past two decades, researchers and tool vendors have introduced techniques and tools to improve software engineering processes. Most of these are host-centered systems with closed architectures, fixed database drivers, specific network requirements, and platform-dependent client and server software. These restrictions make sharing information difficult, complicate tool integration, and limit global user access from diversified software environments. These are the major obstacles in global software production. Today's Internet technology provides a powerful and cost-effective means of overcoming these obstacles. Internet technologies allow distributed networking, global access, platform independence, information sharing, and internationalization. The Internet provides a nearly ubiquitous communication infrastructure, enabling team members to connect to the development process easily. This article reports the authors' innovative work in the arena of constructing an Internet-based, global software engineering environment
Computer, May 1999, pp. 38-46. 0018-9162/98/$10.00 © 1998 IEEE

Computing Practices
Over the past two decades, researchers and tool vendors have introduced techniques and tools to improve software engineering processes. But most of these are host-centered systems with closed architectures, fixed database drivers, specific network requirements, and platform-dependent client and server software. These restrictions make sharing information difficult, complicate tool integration, and limit global user access from diversified software environments. These are the major obstacles in global software production.

Today's Internet technology provides a powerful and cost-effective means of overcoming these obstacles. Internet technologies allow distributed networking, global access, platform independence, information sharing, and internationalization. As others have pointed out, the Internet provides a nearly ubiquitous communication infrastructure, enabling team members to connect to the development process easily [1].
Although many large organizations have established enterprise infrastructures on the Internet [2-7], few publications have discussed the issues involved in constructing an Internet-based, global software-engineering environment. This article reports Fujitsu's innovative work in this arena.
In 1995, a small research and development group within Fujitsu Network Communication Systems developed the first Web-based prototype system to support global testing and validation on the company's intranet [8]. Soon after that, the company set up the Product Development Environment project to build an enterprise infrastructure that would offer Internet-based support for the complete life cycle of software products. The primary goal is to establish a software environment that has a configurable system infrastructure and flexible information repositories that support a set of collaborative software tools on a distributed network.
Over the past two years, we have developed several systems to work within the PDE infrastructure: a problem information management system (PIMS), a resource management system known as ResourcePark, a test management system (TMS), and an information-sharing system (ISS) for project management. Teams throughout the world have used these systems on the Internet to work on several software product lines.

How can Internet technology change software engineering? Fujitsu has taken the first steps in constructing an enterprise-wide, Internet-based infrastructure that allows teams around the world to collaborate on every phase in the life cycle of a global software product.

Jerry Z. Gao, San Jose State University; Cris Chen and Yasufumi Toyoshima, Fujitsu Network Communications Inc.; David K. Leung, MagicSoft Consulting Inc.
GLOBAL SOFTWARE ENGINEERING: REQUIREMENTS
For a large enterprise such as Fujitsu, developing software can involve several organizations and numerous teams at different locations. Development teams conduct design and implementation; testing teams validate software and test systems; and customer support teams provide various customer services. Each team needs its own environment, tools, and information repositories to support its activities. Nonetheless, the teams need to share and exchange information and software across environments and networks. They also need to synchronize their workflow and schedules.

At the beginning of the PDE project, we studied Fujitsu's global software production lines and identified a software production model. This model provided a clear picture of how to divide a global software process into engineering functions, managerial functions, and their related support information. It helped us define the scope of the PDE project in a rational way for constructing our virtual software-engineering environment on the Internet.
WEB-BASED INFRASTRUCTURE
[Figure 1. PDE-based global system infrastructure. The figure shows PDE systems serving development, test, support, marketing, and Q&A teams, connected through the Fujitsu-Net intranet, an extranet, and the Internet, with firewalls between the networks and customer sites accessing over the Internet.]

Figure 1 shows part of the PDE-based system infrastructure for the Fujitsu enterprise. Each site has at least one PDE system, which contains a set of management systems and tools. Teams on the same site may share or access more than one system in a PDE. To support different user accesses and enforce system security, we divide Internet access into three classes:
• Fujitsu-Net. This private intranet serves all of Fujitsu's internal divisions and departments. Different teams working on the same product line may share information and reuse common software through their PDEs.
• Internet. Customers can communicate with customer support teams, marketing, and salespeople over the Internet by accessing a customer support system or a business management system. Because these are public users, we established a rigorous security mechanism and policy.
• Extranet. Many global software product lines involve teams from external software workshops or warehouses. Although these teams have their own environments and tools, they need to communicate with Fujitsu teams and share certain information across network firewalls. For example, a Fujitsu validation team might need to monitor the problems, test process, and testing status of an external software-testing team. Meanwhile, an external software developer might need data to check the product problems found by an internal validation group.
Systems
According to the software production model, we
generated a plan to focus on seven global systems and
their information repositories.
Business management tracks and manages business information. Its functional scope includes marketing, sales, customer and business tracking, product catalogs, and advertisements.

Project information sharing allows us to manage and coordinate the project. This system includes project tracking, news bulletins, schedules, status reports, and cost analyses. For joint projects involving several teams, project coordination, status sharing, and schedule synchronization are important functions.

Resource management controls and manages a global product's resources, including programs, components, scripts, documents, problems, and test cases. This system's functional scope includes version, release, and document control.

Problem management tracks, manages, and shares problems related to a product among various teams globally. This covers problem bookkeeping, tracking, reporting and analysis, and notification. Problems can relate to the product itself or to project management.

Test management is a Web-based system through which we share test information, manage the testing process, and also report and control tests. Its test agent function helps team members select test cases, control test simulators, manage test tools, and perform autotests. This system not only provides an integrated testing environment but also allows remote testing.

Change management is a Web-based global system through which we track, share, manage, and monitor change information. This system tracks changes among different versions of a software product, including requirements and design documents, programs, test cases, and procedures. In addition, it monitors and controls a systematic change process from a change request to design or code changes, as well as document updates and test modifications.

Customer support is a Web-based multimedia system that helps engineers provide service to customers. It includes customer tracking, call tracking, remote technical support, field support and reporting, customer questions and answers, customer training, and product distribution. This system provides a remote support capability for customer support engineers and a remote training tool for product trainers.
Information repositories
To support these management systems, a PDE system includes several information repositories, as listed in Table 1. Exactly which repositories a PDE includes depends on the products under development and the project needs.

Table 1. Repositories hold information that supports PDE system functions.

Repository type | Contents
Project information | Schedules, status reports, meeting minutes, deliverables, and team membership
System resource | Resource database for each software product, each of which contains a product's version control records, components, program source files, documents, test scripts, and test suites
Problem information | Problem analysis, fixes, validation, and review
Test information | Test cases, procedures, data, metrics, and reports
Customer information | Customer calls, support, shipping, and training; frequently asked questions and answers; and field support reports
PDE administration | User information, including user groups, user accounts, access control settings (for predefined functions in different management systems and tools), and workflow and monitoring information
PDE SYSTEM ARCHITECTURE
A PDE system can support one or more teams on
an intranet. As shown in Figure 2, the PDE system has
a four-layer architecture.
The user interface layer runs on a Web browser. Each system has its own client software, which supports distributed access from users. Based on our experience, Java-based client software usually provides a better user interface than HTML-based client software because of Java's Abstract Window Toolkit features. Although we can use JavaScript in HTML-based client software to support dynamic tables, dialog boxes, and windows, this option limits the creation of dynamic graphics and complicated windows. On the other hand, Java-based client software usually consumes more system resources. In addition, Java applets take longer to load than an HTML page.
The communication layer is the adopted third-party software, such as a Web server, a CORBA (Common Object Request Broker Architecture) server, or both. This layer controls communications between client software and its corresponding server.

When the communication layer uses a Web server, communication between a client and its server follows the HTTP protocol or a secured version of it, such as HTTPS. These protocols support only connectionless communication, which means that both client and server software need packing and unpacking functions to support their communications. This implies that we need a standard communication message format at the application level to support a PDE's different application functions and systems. Our experience has been that an approach based on CGI script limits concurrent processing and is slower than other approaches, such as CORBA and Remote Method Invocation (RMI). However, CGI-based approaches are flexible and easy to change, and can be very useful for report generation.

[Figure 2. PDE system architecture. A four-layer view: a browser-based user interface layer (HTML forms/CGI, Java applets/CGI, or Java applets/CORBA IIOP clients); a communication layer (Web server, CORBA) connecting them over HTTP and CORBA IIOP through the intranet and firewall; a functional service layer; and a data storage layer comprising a common database access library, database servers, and the administration, customer, change, problem, resource, and test databases.]
With CORBA technology, communication between a client and a server follows the CORBA IIOP protocol, a standard transmission protocol for distributed objects. Compared with RMI, current CORBA technology has the advantages of secured communication and multiple-language support.
The functional service layer includes several functional servers in the PDE. All these servers can interact with each other through CORBA or an information exchange program in a PDE management system. Each functional server has its own functional scope and specific information repository. To support the interactions and collaborations of distributed objects, CORBA provides the most general solution because it lets objects written in different languages communicate with each other. In addition, a PDE environment needs a management system, which controls and monitors different functional servers and provides a centralized user interface for system administration.
The data storage layer consists of three parts: a database access library, including a collection of database access classes or programs; one or more database servers; and logical and physical databases. To cope with different database drivers, this layer needs a standardized database definition language. In addition, it requires a related database schema conversion tool and a database migration facility.
This four-layer system architecture yields several benefits. Partitioning diversified information into different logical databases increases the PDE system's scalability, extensibility, and performance because one or more database servers can support various data accesses. Grouping various functional servers on the CORBA network increases the flexibility of support for system interactions, system collaborations, and legacy software. Separating the functional service layer from the database access layer increases the flexibility of support for different system configurations and alternative load-balancing techniques. It also reduces the impact of software changes in the service functions on the data access programs. Meanwhile, it increases software reuse among a PDE's systems.
Server architecture
A PDE consists of several systems and a group of databases. Each division at one site may set up one or more PDE systems. Although each system in a PDE gives users a distinct set of functional features and information, the PDEs share a set of functional features and mechanisms. To reduce software development costs and increase software reuse, we have identified and used a common architecture. As shown in Figure 3, each server in the PDE system includes two parts.

[Figure 3. Common server architecture. Each server pairs a system administration part (communication interface via CORBA IDL or CGI, workflow process controller, security controller, user-access controller, system configuration and customization, and system administration database API) with a service function part (graphic analysis and HTML-based report generators, e-mail and FTP agents, database migration and export/import, notification, repository-specific database API, and system-specific components), above a common database access library, a data server, and e-mail and FTP servers. Shaded components are specific to the particular system.]

System administration. The first part, system administration, consists of the following components and functions:
• An interface acts as a communication program between client software for administrators and its corresponding server in a PDE.
• The security controller sets up and monitors the system security.
• The user-access controller creates and maintains user accounts, groups, and access permissions.
• The workflow process controller creates, configures, and maintains a process-oriented workflow according to a state-based workflow model.
• System configuration and customization configures or customizes the system-persistent data, parameters, and features according to customer needs.
• A system administration database API provides an application program interface for communication among hardware, software, and users.
The systems in a PDE share similar administration
functions, but each system has its own user groups,
with different access rights to a certain information
repository. To achieve system independence and thus
increase the PDE’s configuration flexibility, we must
give each system its own administration functions.
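The state-based workflow model behind the workflow process controller is not detailed in the article. One plausible minimal sketch is a configurable transition table: an administrator registers the allowed steps, and the controller rejects any step that was not configured (the state names below, for a problem-tracking workflow, are hypothetical examples):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of a state-based workflow model: a workflow is a set of
// named states plus the transitions an administrator has configured.
// An item may only advance along a registered transition.
public class WorkflowModel {
    private final Map<String, Set<String>> transitions = new HashMap<>();
    private String current;

    public WorkflowModel(String initialState) { this.current = initialState; }

    // Called by the workflow process controller during configuration.
    public void allow(String from, String to) {
        transitions.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    // Attempt one workflow step; illegal steps are rejected.
    public boolean advance(String to) {
        if (transitions.getOrDefault(current, Set.of()).contains(to)) {
            current = to;
            return true;
        }
        return false;
    }

    public String state() { return current; }
}
```

Because the transition table is data rather than code, each system can keep its own process-oriented workflow while reusing the same controller.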
Service function. The second part of a PDE server is its service function, which supports the communications of a specific functional client software and its function server. Both CGI and CORBA can be used here to implement the communication interface.
The service function contains the specific components that provide the functions for a particular system. It also contains several components that are common to each system built on the PDE infrastructure. These include report generators for graphic analysis and HTML reports based on data from a database. To access the data, systems must have a component that imports or exports a subset of information from one database to another when both databases have the same database schema. Another common component handles database migration: moving the contents of a database to a newly created database on the same database server. Each system also needs components that act as e-mail and FTP agents.
The system must also support three types of notification. Workflow-driven notification is invoked by a workflow process. For example, in a problem management system, when a user enters a problem-fixing record, the system sends a notification message to testers. Intersystem notification supports the interactions among systems. For example, when a resource management system generates a new product release, it should notify other systems, such as the problem management and test management systems. User-driven notification allows users to send alert or notification messages to anyone on the Internet.
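The three notification types can share one dispatch path. The sketch below is a hypothetical simplification (a real PDE server would deliver through its e-mail agent; the recipient and message strings are invented examples), showing only how the types are distinguished and routed:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a common notification component supporting the three
// notification types described in the text. Delivery is stubbed as
// an outbox; a real server would hand notices to its e-mail agent.
public class Notifier {
    public enum Kind { WORKFLOW_DRIVEN, INTERSYSTEM, USER_DRIVEN }

    public static final class Notice {
        public final Kind kind; public final String recipient; public final String message;
        Notice(Kind kind, String recipient, String message) {
            this.kind = kind; this.recipient = recipient; this.message = message;
        }
    }

    private final List<Notice> outbox = new ArrayList<>();

    // Workflow-driven: invoked by a workflow step, e.g. a fix record notifies testers.
    public void onWorkflowStep(String step, String recipient) {
        outbox.add(new Notice(Kind.WORKFLOW_DRIVEN, recipient, "workflow step: " + step));
    }
    // Intersystem: e.g. a new release notifies the problem and test management systems.
    public void onRelease(String release, String... systems) {
        for (String s : systems) outbox.add(new Notice(Kind.INTERSYSTEM, s, "new release: " + release));
    }
    // User-driven: a user sends a message to anyone on the Internet.
    public void userNotify(String recipient, String message) {
        outbox.add(new Notice(Kind.USER_DRIVEN, recipient, message));
    }

    public List<Notice> outbox() { return outbox; }
}
```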
A repository-specific database API provides
an interface between a database server and a
functional service program.
Client architecture
To reduce the software development cost of the PDE project, we identified a common architecture model and used it in our implementation. As shown in Figure 4, this architecture divides client software into six parts.

As in the server architecture, the top level of the client architecture is an interface. This client interface can communicate with the server in one of two ways. For systems using CORBA, the interface is a set of distributed interface objects defined in IDL. These systems use the CORBA IIOP as the communication protocol. For systems using Java and CGI script, all communication messages, in both client and server, must use the HTTP protocol.
Any client also requires an access controller, which includes components for user access control, security checking, and workflow checking.

The loader comprises a data loader and an applet loader. The data loader loads data dynamically from a server to a client. These data are either client-specific or changeable. The applet loader loads Java applets dynamically from a server to a client machine. Although the Java virtual machine (JVM) includes a feature that loads embedded Java applets in HTML pages, we created the applet loader to load other applets during runtime. This is useful for controlling the loading performance and system resources on a client machine.
The client data store contains and maintains a set of client-specific data in two folders: persistent data/objects and dynamic data/objects. Persistent data and objects are a particular user's information, including account, password, security code, and access control information. Dynamic data and objects are loaded from a function-specific information repository. A typical example in a problem management system would be problem records in a problem database.

A utility module, which consists of a set of classes (Help, Print, Tool bar, and Timer), provides basic utilities.

System-specific GUI components include GUIs for text and analysis reports, as well as specialized GUIs for other system-specific functions.
A GUI for system administration allows an administrator to manage and maintain user accounts, product releases, and workflow models.
EXPERIENCE AND LESSONS LEARNED
Fujitsu began the PDE project in the summer of 1997. Since then, we have developed PIMS, ResourcePark, TMS, and ISS to work within the PDE infrastructure, and teams have used these systems on several global software product lines.

We gained valuable knowledge from our practical experience on the PDE project. The following sections summarize the major lessons we have learned.
Design
Pay attention to software reuse. Besides creating reusable architecture models for client and server software, we spent a lot of effort finding reusable solutions for features such as workflow, user access control, and system customization for user-defined data. Moreover, we developed other reusable software frameworks, such as a database migration facility and a 2D and 3D Java graphics library for statistical-report generation. These reusable solutions and components have effectively reduced the cost of software development for different systems.
Aim at a standardized database access solution. For an enterprise system infrastructure, a standard database access solution is critical for coping with different database drivers and the future evolution of database technology. Achieving this goal requires three actions:

• Create application database access programs using a standard database access API (such as JDBC, Java Database Connectivity) and a standard database access language (such as SQL, Structured Query Language). This way, you can separate application database access programs from specific database drivers.
• Develop an in-house facility that can convert database definitions based on an internal database notation.
• Implement an in-house database migration software utility that can move data from one database driver to another based on an internal notation of a database schema.
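The second action, a conversion facility driven by an internal database notation, could be sketched as follows. The notation itself ("table: column type, ...") and the type mappings are invented for illustration; Fujitsu's actual in-house notation is not described in the article:

```java
// Sketch of an in-house schema conversion facility: database
// definitions are kept in a small internal notation and converted to
// the DDL of a target driver, so application code never embeds
// vendor-specific definitions.
public class SchemaConverter {
    // Internal notation example (hypothetical): "problem: id int, title text"
    public static String toSql(String internal) {
        int colon = internal.indexOf(':');
        String table = internal.substring(0, colon).trim();
        String[] cols = internal.substring(colon + 1).split(",");
        StringBuilder sb = new StringBuilder("CREATE TABLE " + table + " (");
        for (int i = 0; i < cols.length; i++) {
            String[] parts = cols[i].trim().split("\\s+");
            if (i > 0) sb.append(", ");
            sb.append(parts[0]).append(' ').append(mapType(parts[1]));
        }
        return sb.append(")").toString();
    }

    // Map internal types to target-driver types (illustrative mappings).
    private static String mapType(String t) {
        switch (t) {
            case "int":  return "INTEGER";
            case "text": return "VARCHAR(255)";
            default:     return t.toUpperCase();
        }
    }
}
```

Pairing this converter with a migration utility that reads rows through JDBC and reinserts them under the converted schema would cover the third action as well.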
Focus on system customization. Customization features are key to the success of any of the systems in the infrastructure. Ideally, any system in an enterprise infrastructure should support customization for the following features:

• workflow,
• user access,
• user data,
• report formats, and
• system selection and configuration.

In our experience, the diverse needs of different teams make the ability to customize these features very important.
[Figure 4. Common client architecture. The client comprises a Java/CGI-based or Java IDL communication interface; an access control module (access control, security check, workflow check); a loader (applet loader and data loader); a client data store holding persistent and dynamic data; a utility module (Help, Print, Tool bar, Timer); system-specific GUI components (text report GUI, analysis report GUI, and specific-function GUIs); and a GUI for system administration.]
Design for scalability. We can look at the scalability of an infrastructure for global software production from three perspectives. The first is functional scalability, the infrastructure's ability to support an increasing number of functional systems, tools, and services. The second is data scalability, the ability to support an increasing volume of data in an information repository. The last is user scalability, the ability to support an increasing number of users.

Three actions can help ensure high scalability of an enterprise system infrastructure:

• Use one database driver to manage one information repository. This arrangement is more scalable because it avoids the database access bottleneck caused by the growth of data volume in databases and the increase in the number of system users.
• Use a dedicated machine for each function server. Also avoid placing a function server and a database server on the same machine.
• Find an in-house load-balancing strategy and implement it as a program to handle the most user requests. Doing so is a difficult proposition, and we are currently working on this in our own system.
Design for system resources. When developing Java-based client software, it is important for developers to consult with customers to determine the system configuration, such as minimum memory, for client machines. In addition, developers must understand customers' requirements and system resource limitations for client software.

For Java-based client software, system resource usage depends on four factors:

• the execution environment, that is, a system's Web browser and supporting JVM;
• the number of Java applets and their sizes in a typical Web browser window;
• the volume of data stored in the client software; and
• the complexity of GUI components and their structures, including colors, images, icons, and threads.
Clearly, the design and implementation of Java applets is key, as it affects three of these factors. At the beginning of the PDE project, we did not pay much attention to system resources when designing client software. Later, users found that their 32-Mbyte laptops did not have adequate system resources to run PIMS's Java applets concurrently with other software. To resolve this problem, developers had to redesign and restructure the applets. Through this experience, we came up with several design tips for controlling the system resource requirements of Java-based client software.

• Partition a GUI interface into a number of Java applets according to high-level functions. To do so, create a main window to hold a controlling Java applet, which can create separate browser windows to hold function-specific Java applets. This method not only reduces the system loading time, but also cuts the size of the applets.
• Focus on class/object reuse and avoid object creation. In one release of the PIMS system, for example, our developers reduced the client software's system resource usage by 30 to 40 percent after we enforced this rule in GUI design and implementation.
• Control the data volume on the client side. For example, use a fixed-size data buffer, control the size of images and icons, reduce the number of colors, and select simple and popular colors.
• Effectively use Java's dispose method to reclaim big objects immediately when they are no longer useful.
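The fixed-size data buffer mentioned in the third tip might look like the sketch below (a generic illustration, not the PIMS implementation): the client data store keeps at most a fixed number of dynamic records and evicts the oldest when a new one arrives, so memory use stays bounded on small client machines.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a fixed-size client-side data buffer: at most `capacity`
// records are retained; adding beyond capacity evicts the oldest.
public class FixedSizeBuffer<T> {
    private final int capacity;
    private final Deque<T> records = new ArrayDeque<>();

    public FixedSizeBuffer(int capacity) { this.capacity = capacity; }

    public void add(T record) {
        if (records.size() == capacity) records.removeFirst(); // evict oldest
        records.addLast(record);
    }

    public int size() { return records.size(); }
    public T oldest() { return records.peekFirst(); }
    public T newest() { return records.peekLast(); }
}
```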
Testing
Testing Web-based systems is complicated and costly because of special features such as distributed and concurrent access, platform independence, security, and internationalization. A Web-based application usually depends on many different technologies, such as HTML, Java or J++, JavaScript or ActiveX, Web browsers, Web servers, and third-party middleware. This makes it difficult to automate the test process. Although a few test tools are available on today's market, all have limitations in different areas.
Platform testing is important. Because our PIMS system has Java-based client software, we first thought that users would have no problem accessing it through any Java-enabled browser on any platform. We discovered, however, that this was not true after testing it on both PC and Unix platforms using different browsers. At best, a Java applet works well with different browsers except that the hardware differences of the various platforms change its color and size. At worst, a Java applet works fine with one Web browser but has problems with another browser, even on the same client machine. We had a similar experience when upgrading from one browser version to another. Therefore, though tedious, platform testing is necessary to ensure that a Web-based system works well on a specified platform with the required Web browsers.
System resource testing is necessary. System resource testing checks the amount of system resources that Java-based client software uses on a given client platform. System resources include memory space, swap space, number of threads, and GUI resources (such as the number of colors and windows). Each platform has different administration tools that check the system resource usage of a given software application. Because of the implementation differences between Web browsers, it is not surprising that a given Java applet needs different system resources from one Web browser to another, even on the same platform.
Our advice to system analysts is to find out early the customer requirements and the minimum system configuration for each type of client machine. Then, testers should use this information to set up a benchmark test machine with the minimum system configuration for each platform. Another tip is to perform system resource testing of your client software as early as possible during system integration and testing. We learned these lessons the hard way.
Performance testing is critical, but very expensive and time-consuming. The nature of distributed access makes performance testing indispensable for a Web-based application system. Testers should select a target server machine and set up various client machines as a benchmark environment. Ideally, a performance test checks the system performance according to a set of predefined test metrics. For the PIMS system, we used the following categories of test metrics:

• user, such as single-user performance or concurrent-user performance;
• platform, such as Unix, PC, or portable notebook; and
• function, such as problem report, problem analysis, problem display, or problem bookkeeping.
To get detailed performance data about a system, we added a performance-tracking utility to report several aspects of system performance:

• download speed for Java applets;
• data creation and update speed for client and server programs;
• data retrieval speed for client and server programs;
• data deletion speed for client and server programs; and
• data report speed for client and server programs.

This data helps engineers effectively tune system performance.
Because of the lack of Web-based automatic test tools, performance testing is time-consuming and tedious. A performance-tracking facility within a system to generate the performance metrics for both client and server software can significantly reduce performance test cost.
Observations and challenges
Although Internet technology has provided powerful, cost-effective tools for constructing our Web-based global software production infrastructure, several challenges and obstacles remain.

System performance is critical. For a Web-based application system, system performance depends on many factors. Obviously, network hardware performance is a primary factor. The good news is that vendors are working on faster Internet switching systems using new technology [9]. Still, it is up to software developers to choose the most efficient technology, select the optimal algorithm, and write effective programs to reduce the performance overhead of server and client software.
For example, at the beginning of the PDE project,
we used script-based programs as the communication
gateway interface between client and server programs
in our PIMS system. Using CGI scripts resulted in a serious performance problem, which we later resolved by using compiled C++ programs. Our system test results indicated that this change improved system performance by 25 to 40 percent.
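Whatever language a CGI gateway is written in, its job is mostly to decode the request and dispatch it to the server program. The sketch below shows the request-decoding step following the standard CGI QUERY_STRING convention; it is not Fujitsu's actual gateway code, and it omits URL decoding.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the request-decoding step a CGI gateway performs: splitting
// a QUERY_STRING such as "action=report&id=42" into name/value pairs.
// This illustrates the CGI convention only; it is not the PIMS gateway
// code, and it omits URL decoding for brevity.
public class QueryDecoder {
    public static Map<String, String> parse(String queryString) {
        Map<String, String> params = new LinkedHashMap<>();
        if (queryString == null || queryString.isEmpty()) {
            return params;
        }
        for (String pair : queryString.split("&")) {
            int eq = pair.indexOf('=');
            if (eq >= 0) {
                params.put(pair.substring(0, eq), pair.substring(eq + 1));
            } else {
                params.put(pair, "");
            }
        }
        return params;
    }

    public static void main(String[] args) {
        // A real CGI process would read System.getenv("QUERY_STRING") instead.
        Map<String, String> p = parse("action=report&id=42");
        System.out.println(p.get("action") + " " + p.get("id")); // prints: report 42
    }
}
```

The gain from moving to compiled programs comes largely from cutting per-request start-up cost, since the Web server spawns a new gateway process for every CGI request.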
System collaboration and integration is an ongoing concern. An enterprise infrastructure is a collaboration of many systems and tools that work together to
support a business function. Business functions will
change due to technology updates, product upgrades,
new requirements and tools, and structural alterations
of existing product lines.
System interoperation can be complex. Because a
user group may access several systems concurrently,
figuring out how these systems interoperate becomes
a real issue. For example, a test engineer might want
to access test and problem management systems at the
same time. To do so, the engineer should only have to
log in once. Providing the interoperation, even for a
simple login, can be time-consuming.
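One common way to provide such a single login is a shared session service that issues a token once and lets each participating system validate it on every request. The sketch below is written under that assumption; the article does not describe the actual mechanism, and all names here are illustrative.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Sketch of single sign-on via a shared session service: one login issues
// a token, and every participating system validates the token instead of
// prompting again. Names are illustrative; this is not Fujitsu's design.
public class SessionService {
    private final Map<String, String> tokenToUser = new HashMap<>();
    private final Set<String> authorizedSystems = new HashSet<>();

    public SessionService(Set<String> systems) {
        authorizedSystems.addAll(systems);
    }

    // Called once, by whichever system the engineer logs in to first.
    public String login(String user) {
        String token = UUID.randomUUID().toString();
        tokenToUser.put(token, user);
        return token;
    }

    // Called by every participating system on each request.
    public boolean validate(String system, String token) {
        return authorizedSystems.contains(system) && tokenToUser.containsKey(token);
    }

    public static void main(String[] args) {
        SessionService sessions = new SessionService(Set.of("test system", "problem system"));
        String token = sessions.login("test engineer");
        System.out.println(sessions.validate("problem system", token)); // prints: true
    }
}
```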
Legacy system support is a necessity. Support for
legacy systems can be a determining factor in whether
an organization successfully adopts a new enterprise
infrastructure. To achieve this support, we can create
middleware (or a wrapper program) to connect a
legacy system or tool to the infrastructure.
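The wrapper-program idea follows the classic adapter pattern: the infrastructure programs against one uniform interface, and a thin adapter translates those calls into the legacy tool's own API. A hypothetical sketch follows; none of these names come from a real Fujitsu tool.

```java
// Sketch of the wrapper-program (middleware) idea: the infrastructure
// talks to one interface, and a thin adapter translates those calls into
// whatever the legacy tool understands. All names are hypothetical.
public class LegacyWrapperDemo {
    // The uniform interface the new infrastructure expects.
    interface ProblemStore {
        String fetchReport(int id);
    }

    // Stand-in for an old tool with its own, incompatible calling style.
    static class LegacyTrackerTool {
        String dumpRecord(String key) {
            return "RECORD[" + key + "]";
        }
    }

    // The wrapper maps the new interface onto the legacy API.
    static class LegacyTrackerAdapter implements ProblemStore {
        private final LegacyTrackerTool tool = new LegacyTrackerTool();

        @Override
        public String fetchReport(int id) {
            return tool.dumpRecord("PR-" + id);
        }
    }

    public static void main(String[] args) {
        ProblemStore store = new LegacyTrackerAdapter();
        System.out.println(store.fetchReport(7)); // prints: RECORD[PR-7]
    }
}
```

Because the infrastructure depends only on the interface, the legacy tool can later be replaced without touching its clients.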
Security is always an issue. Because the information for global software production is company proprietary, most information repositories should only be accessible from within the enterprise intranet. Usually, however, a large company's global software production line involves working with people from outside the company, such as consultants or other software development houses. In this case, a rigorous security mechanism must control the access of users from outside the company.
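Such a mechanism might, for example, default every repository to intranet-only access and require an explicit grant for each outside organization. The sketch below illustrates that policy shape; the names and rules are our assumptions, not the article's actual design.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of an access rule for external collaborators: repositories are
// intranet-only by default, and outside organizations (consultants,
// partner companies) reach only repositories explicitly opened to them.
// Names and rules are illustrative assumptions.
public class RepositoryAccessPolicy {
    private final Map<String, Set<String>> externalGrants = new HashMap<>();

    // Explicitly open one repository to one outside organization.
    public void grantExternal(String organization, String repository) {
        externalGrants.computeIfAbsent(organization, k -> new HashSet<>()).add(repository);
    }

    public boolean mayAccess(boolean fromIntranet, String organization, String repository) {
        if (fromIntranet) {
            return true; // inside the enterprise intranet: full access
        }
        Set<String> repos = externalGrants.get(organization);
        return repos != null && repos.contains(repository);
    }

    public static void main(String[] args) {
        RepositoryAccessPolicy policy = new RepositoryAccessPolicy();
        policy.grantExternal("consultant-co", "problem-reports");
        System.out.println(policy.mayAccess(false, "consultant-co", "problem-reports")); // prints: true
        System.out.println(policy.mayAccess(false, "consultant-co", "source-code"));     // prints: false
    }
}
```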
Fujitsu has used Internet technology as a cost-effective means of constructing an enterprise-wide system infrastructure to aid global software
engineering. At this point, we have introduced three of
the systems in the PDE environment to several Fujitsu
software groups worldwide. Engineers and managers
of these groups have used these systems to support a
number of global software projects, one of which is a
reuse-driven project involving several teams. Our application experience indicates that the infrastructure provides distinct advantages in project management, information sharing, concurrent development, and project coordination. It not only reduces communication overhead and increases information sharing; it also enhances the overall process and improves engineering practices.
With advances in software reuse and component engineering, there is a strong demand for processes that support domain-specific software reuse and component sharing globally. In the future, an enterprise system infrastructure like PDE may need to add new functional systems and tools to support engineers in component construction, testing, maintenance, and release.
Acknowledgments
We thank the members of the R&D group in the
Software Engineering Department of the Global
Software Technology Division, Fujitsu Network
Communications Inc., San Jose, California. We also
thank the members of the Global Development
Engineering Department in the Operation System
Division of Fujitsu Limited, and members of the
Software Engineering Department of Fujitsu
Hokkaido Communication Systems Limited.
References
1. F. Maurer and G. Kaiser, "Software Engineering in the Internet Age," IEEE Internet Computing, Sept.-Oct. 1998, pp. 22-24.
2. S. Kamel, "Building an Information Highway," Proc. 31st Hawaii Int'l Conf. System Science, Vol. 4, IEEE CS Press, Los Alamitos, Calif., 1998, pp. 31-41.
3. W.W. Noah, "The Integration of the World Wide Web and Intranet Data Resource," Proc. 31st Hawaii Int'l Conf. System Science, Vol. 4, IEEE CS Press, Los Alamitos, Calif., 1998, pp. 496-503.
4. A.W. Biermann, "Toward Every-Citizen Interfaces to the Nation's Information Infrastructure: A National Research Council Study," Proc. Fourth Symp. Human Interaction with Complex Systems, IEEE CS Press, Los Alamitos, Calif., 1998.
5. J. Mylopoulos et al., "A Generic Integration Architecture for Cooperative Information Systems," Proc. First IFCIS Int'l Conf. Cooperative Information Systems, IEEE CS Press, Los Alamitos, Calif., 1996, pp. 208-217.
6. S. Browne et al., "The National HPCC Software Exchange," IEEE Computational Science & Engineering, Vol. 2, No. 2, Summer 1995, pp. 62-69.
7. J. Wachter, "GEOLIS—Innovative Geoscientific Information Management," Proc. First IEEE Metadata Conf., IEEE Press, Piscataway, N.J., 1996.
8. J.Z. Gao et al., "Developing an Integrated Testing Environment Using the World Wide Web Technology," Proc. Compsac 97, IEEE CS Press, Los Alamitos, Calif., 1997, pp. 594-601.
9. S.J. Vaughan-Nichols, "Switching to a Faster Internet," Computer, Jan. 1997, pp. 31-32.
Jerry Z. Gao is an assistant professor at San Jose State University. His research interests include object-oriented technology, Internet-based software engineering, and software testing and maintenance methodologies. Gao has a PhD and an MS in computer science from the University of Texas, Arlington. He was a manager in the software engineering department of Fujitsu Network Communications Inc. from 1995 to 1998. He is a member of the IEEE.

Cris Chen is a director of the software engineering department of Global Software Technology Division I, Fujitsu Network Communications Inc. His current research interests include software engineering for object-oriented technology and software process management.

Yasufumi Toyoshima is a vice president of Fujitsu Network Communications Inc. His current research interests include software engineering for object-oriented technology and software process management.

David K. Leung is a principal consultant at MagicSoft Consulting Inc. His research interests include software component engineering and distributed computing. Leung has an MS in computer science from the University of Texas, Austin, and a BS in computer engineering from the University of Michigan, Ann Arbor.

Contact Jerry Gao at CISE Dept., Engineering School, San Jose State University, One Washington Square, San Jose, CA 95192-0180; jerrygao@email.sjsu.edu or gaojerry@hotmail.com.
May 1999 47