![Erich Schikuta](https://i1.rgstatic.net/ii/profile.image/272219918041102-1441913694677_Q128/Erich-Schikuta.jpg)
Erich Schikuta
University of Vienna (UniWien) · Workflow Systems and Technology Research Group
Professor
Head of Research Group "Workflow Systems and Technology", University of Vienna, Austria
About
256 Publications
29,705 Reads
1,938 Citations
Introduction
Erich Schikuta studied Computer Science, Business Informatics and Mathematics at the University of Technology Vienna (UTV) and the University of Vienna (UV), finishing with a Bachelor's degree in Mathematics and a Master's degree and Ph.D. in Computer Science from the UTV.
He is Head of the Research Group Workflow Systems and Technology. His research interests lie in Utility Computing, parallel and distributed computing, and neural network simulation, and he is the author of more than 200 peer-reviewed papers.
Additional affiliations
January 1999 - present
August 1992 - November 1993
May 1984 - present
Education
July 1983 - June 1987
Independent Researcher
October 1978 - June 1983
Independent Researcher
Publications (256)
We present N2Cloud, a novel Cloud-based neural network simulation system, which provides and exchanges neural network knowledge and simulation resources to and between arbitrary users on a world-wide basis following the Web 2.0 principle. N2Cloud enables the exchange of knowledge, as neural network objects and paradigms, by a virtual organization e...
This paper presents an analytical economic cost model for cloud computing aiming to comprise all kinds of cost of a commercial environment. To extend conventional state-of-the-art models considering only fixed cost, we developed a concise but comprehensive analytical model which includes not only fixed cost but also variable cost, allowin...
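The fixed-plus-variable cost structure described in this abstract can be sketched, in illustrative notation that is not taken from the paper itself, as:

```latex
C_{\mathrm{total}} = C_{\mathrm{fixed}} + \sum_{i=1}^{n} u_i \, p_i
```

where \(u_i\) is the consumed amount of resource \(i\) (e.g. CPU hours or GB of storage) and \(p_i\) its unit price; the variable term is what makes the model usable for usage-dependent business scenarios.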
IT-based Service Economy requires Service Markets to flourish for the trade of services. A market does not represent a simple buyer-seller relationship; rather it is the culmination point of a complex chain of stakeholders with a hierarchical integration of value along each point in the chain. To enable a Service Economy, Service Markets must be p...
With the advent of distributed computing, particularly since the emergence of Grids, Clouds and other Service Oriented Computing paradigms, the querying of huge datasets of distributed databases or data repositories on a global scale has become a challenging research question. Currently, beside various other topics, two major concerns in this resea...
Creating simple marketplaces with common rules, that enable the dynamic selection and consumption of functionality, is the missing link to allow small businesses to enter the cloud, not only as consumers, but also as vendors. In this paper, we present the concepts behind a hybrid service and process repository that can act as the foundation for suc...
With recent technological advancements, quantitative analysis has become an increasingly important area within professional sports. However, the manual process of collecting data on relevant match events like passes, goals and tacklings comes with considerable costs and limited consistency across providers, affecting both research and practice. In...
Science collaborations use computing grids to run expensive computational tasks on large data sets. Jobs across the network demand data, and thereby workload management and data allocation, to maintain the computational workflow. Data allocation includes data placement with different replication factors (multiplicity) of data. The proposed dat...
This paper describes the design and implementation of parallel neural networks (PNNs) with the novel programming language Golang. We follow in our approach the classical Single-Program Multiple-Data (SPMD) model where a PNN is composed of several sequential neural networks, which are trained with a proportional share of the training dataset. We use...
Today’s cloud infrastructure landscape offers a broad range of services to operate software applications. The myriad of options, however, has also brought along a new layer of complexity. When it comes to procuring cloud computing resources, consumers can purchase their virtual machines from different providers on different marketspaces to form so...
Science collaborations such as ATLAS at the high-energy particle accelerator at CERN use a computer grid to run expensive computational tasks on massive, distributed data sets. Dealing with big data on a grid demands workload management and data allocation to maintain a continuous workflow. Data allocation in a computer grid necessitates some data...
Today's cloud infrastructure landscape offers a broad range of services to build and operate software applications. The myriad of options, however, has also brought along a new layer of complexity. When it comes to procuring cloud computing resources, consumers can purchase their virtual machines from different providers on different marketspaces t...
Modelling the trajectorial motion of humans along the ground is a foundational task in the quantitative analysis of sports like association football. Most existing models of football player motion have not been validated yet with respect to actual data. One of the reasons for this lack is that performing such a validation is not straightforward, be...
Today, traded cloud services are described by service level agreements that specify the obligations of providers such as availability or reliability. Violations of service level agreements lead to penalty payments. The recent development of prominent cloud platforms such as the re-design of Amazon's spot marketspace underpins a trend towards dynami...
Sky computing is a new computing paradigm leveraging resources of multiple Cloud providers to create a large scale distributed infrastructure. N2Sky is a research initiative promising a framework for the utilization of Neural Networks as services across many Clouds. This involves a number of challenges ranging from the provision, discovery and util...
Many electronic and electrical systems are now incorporated into modern vehicles to control functional safety. A lack of security protection mechanisms in vehicular design may open different ways of executing malicious attacks against the vehicular network. These attacks may have various types of negative consequences, such as safe vehicle operati...
This work presents a novel posterior inference method for models with intractable evidence and likelihood functions. Error-guided likelihood-free MCMC, or EG-LF-MCMC in short, has been developed for scientific applications, where a researcher is interested in obtaining approximate posterior densities over model parameters, while avoiding the need f...
Computing grids are key enablers of computational science. Researchers from many fields (High Energy Physics, Bioinformatics, Climatology, etc.) employ grids for execution of distributed computational jobs. These computing workloads are typically data-intensive. The current state of the art approach for data access in grids is data placement: a job...
Security verification and validation is an essential part of the development phase in current and future vehicles. It is essential to ensure that a sufficient level of security is achieved. This process determines whether or not all security issues are covered and confirms that security requirements and implemented measures meet the security needs....
Training artificial neural networks is a computationally intensive task. A common and reasonable approach to reduce the computation time of neural networks is parallelizing the training. Therefore, we present a data parallel neural network implementation written in Go. The chosen programming language offers built-in concurrency support, allowing to...
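The data-parallel scheme described in this abstract can be sketched in Go; this is a hedged illustration of the idea (each worker trains on its shard concurrently, results are combined over a channel), with a simple per-shard mean standing in for an actual gradient computation — `trainShard` and `dataParallelStep` are illustrative names, not the paper's API:

```go
package main

import "fmt"

// trainShard stands in for one sequential network training on its
// shard of the data; here it just returns the shard mean.
func trainShard(shard []float64, out chan<- float64) {
	sum := 0.0
	for _, v := range shard {
		sum += v
	}
	out <- sum / float64(len(shard))
}

// dataParallelStep splits the data across n concurrent workers and
// averages their per-shard results, mirroring a data-parallel step.
func dataParallelStep(data []float64, n int) float64 {
	out := make(chan float64, n)
	size := len(data) / n
	for i := 0; i < n; i++ {
		lo, hi := i*size, (i+1)*size
		if i == n-1 {
			hi = len(data) // last worker takes the remainder
		}
		go trainShard(data[lo:hi], out)
	}
	total := 0.0
	for i := 0; i < n; i++ {
		total += <-out
	}
	return total / float64(n)
}

func main() {
	data := []float64{1, 2, 3, 4, 5, 6, 7, 8}
	fmt.Println(dataParallelStep(data, 4)) // prints 4.5
}
```

Go's channels make the combine step explicit and race-free without locks, which is the language feature the abstract's "built-in concurrency support" refers to.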
The data access patterns of applications running in computing grids are changing due to the recent proliferation of high speed local and wide area networks. The data-intensive jobs are no longer strictly required to run at the computing sites, where the respective input data are located. Instead, jobs may access the data employing arbitrary combina...
This work describes the technique of remote data access from computational jobs on the ATLAS data grid. In comparison to traditional data movement and stage-in approaches it is well suited for data transfers which are asynchronous with respect to the job execution. Hence, it can be used for optimization of data access patterns based on various poli...
The distributed data management system Rucio manages all data of the ATLAS collaboration across the grid. Automation, such as data replication and data rebalancing are important to ensure proper operation and execution of the scientific workflow. In this proceedings, a new data allocation grid service based on machine learning is proposed. This lea...
Internet technology has changed how people work, live, communicate, learn and entertain. Internet adoption is rising rapidly, thus creating a new industrial revolution named "Industry 4.0". Industry 4.0 is the use of automation and data transfer in manufacturing technologies. It fosters several technological concepts, one of these being the Intern...
For an increasing number of data intensive scientific applications, parallel I/O concepts are a major performance issue. Tackling this issue, we develop an input/output system designed for highly efficient, scalable and conveniently usable parallel I/O on distributed memory systems. The main focus of this research is the parallel I/O runtime system...
This study is motivated by the high-energy physics experiment ATLAS, one of the four major experiments at the Large Hadron Collider at CERN. ATLAS comprises 130 data centers worldwide with datasets in the Petabyte range. In the processing of data across the grid, transfer delays and subsequent performance loss emerged as an issue. The two major cos...
Today, the so called supermarket approach is used for trading Cloud services on Cloud markets. Thereby, consumers purchase Cloud services at fixed prices without negotiation. More dynamic Cloud markets are emerging as e.g. the recent development of the Amazon EC2 spot market shows - with spot blocks and spot fleet management. Hence, autonomous Baza...
This paper presents the integration of Dijkstra’s algorithm into a Blackboard framework to optimize the selection of web resources from service providers. The architectural framework of the implementation of the proposed Blackboard approach and its components in a real life scenario is laid out. For justification of approach, and to show practical...
This paper presents the integration of Dijkstra's algorithm within a Blackboard framework to optimize the selection of web services from service providers. In addition, methods are presented how dynamic changes during the workflow execution can be handled; specifically, how changes of the service parameters have effects on the system. For justifica...
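A minimal sketch of the shortest-path selection these two papers build on: Dijkstra's algorithm over a weighted graph of service providers, with total cost as the edge weight. The graph, node names and weights below are illustrative assumptions, not taken from the papers:

```go
package main

import (
	"fmt"
	"math"
)

// dijkstra returns the minimum total cost from src to dst over a
// weighted provider graph (adjacency map: node -> neighbor -> cost).
// All nodes must appear as keys in graph.
func dijkstra(graph map[string]map[string]float64, src, dst string) float64 {
	dist := map[string]float64{}
	for n := range graph {
		dist[n] = math.Inf(1)
	}
	dist[src] = 0
	visited := map[string]bool{}
	for {
		// pick the unvisited node with the smallest tentative cost
		u, best := "", math.Inf(1)
		for n, d := range dist {
			if !visited[n] && d < best {
				u, best = n, d
			}
		}
		if u == "" || u == dst {
			return dist[dst] // done, or no reachable node left
		}
		visited[u] = true
		for v, w := range graph[u] { // relax outgoing edges
			if d := dist[u] + w; d < dist[v] {
				dist[v] = d
			}
		}
	}
}

func main() {
	// hypothetical provider chain: two candidate providers for one service
	providers := map[string]map[string]float64{
		"request":   {"providerA": 2, "providerB": 5},
		"providerA": {"service": 7},
		"providerB": {"service": 1},
		"service":   {},
	}
	fmt.Println(dijkstra(providers, "request", "service")) // prints 6
}
```

In a Blackboard setting, knowledge sources would update the edge weights as service parameters change, and the path search would be re-run on the shared board.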
The semantic web aims to describe information in terms of well-defined vocabularies and comprehends both data and knowledge to cope with meaning of data. Advanced search engines are used to retrieve precise information out of these knowledge resources. The main challenge is not only retrieving data but also how to keep data safe and protected again...
In this paper, we present an architectural pattern called Data Oriented Architecture (DOA). Motivation is the fact that on the one hand we face a shift to the usage of more and more mobile devices but on the other hand most services in the Internet still use a classic client-server-approach. Data is mainly produced at private devices today and put...
Today’s economy creates the need for dynamic, adaptive and autonomous building of enterprise value chains consisting of arbitrary virtualized computing resources, as hardware and software services. The current key technology for service provisioning is the cloud computing framework. In the course of this development digital service markets are beco...
Sky computing is a new computing paradigm leveraging resources of multiple Cloud providers to create a large-scale distributed infrastructure. N2Sky is a research initiative promising a framework for the utilization of Neural Networks as services across many Clouds integrating into a Sky. This involves a number of challenges ranging from the provis...
Cluster analysis methods have proven extremely valuable for explorative data analysis and are also fundamental for data mining methods. The goal of cluster analysis is to find correlations in the value space and to separate the data values into an a priori unknown set of subgroups based on a similarity metric. In the case of high-dimensional data, i.e. data...
The domain of image processing technologies comprises many methods and algorithms for the analysis of signals, representing data sets, as photos or videos. In this paper we present a discussion and analysis, on the one hand, of classical image processing methods, as Fourier transformation, and, on the other hand, of neural networks. Specifically we...
Currently digital markets emerge where cloud resources are traded in the form of computational services. Usually the so called supermarket approach is applied on these service markets, where consumers buy offered services from providers based on fixed functional and non-functional characteristics without negotiations. However, bilateral multi round...
We present webAD, a web-based e-learning platform for the visualization of algorithms and data structures. It follows a light-weight server-less implementation paradigm and pursues a minimalistic vision: no installation or configuration effort, multi device support, clear structure of didactic content and simple extensibility for developers. Compar...
Neural networks proved extremely feasible for problems which are hard to solve by conventional computational algorithms due to excessive computational demand, as NP-hard problems, or even lack of a deterministic solution approach. In this paper we present a management framework for neural network objects based on ontology knowledge for the cloud-ba...
Cloud computing has emerged as powerful technology with various use cases in different environments. The goal of these use cases is to provide services to users on demand. These use cases vary from application to application, but service level agreement (SLA) management plays an important role in all of them. SLA management is an essential part of...
A Virtual Organization (VO) is a logical orchestration of globally dispersed resources to achieve common goals, fostering new computing paradigms such as Utility Computing, Grid Computing, Autonomic Computing, Cluster Computing and Cloud Computing. The Computational Intelligence community is striving hard to build an online community to share resources su...
NetLuke is an interactive, Web-based, e-Learning system to visualize the dynamic behavior of algorithms and data structures. It is used as a supportive tool for under-graduate and graduate courses on these topics. It aims for clarity and vividness, simplicity, portability, and extendibility and supports user controlled dynamics. During the design o...
We present a novel neural network simulation framework, which provides the parallelized execution of artificial neural network by exploiting modern hardware and software environments adhering to the service oriented paradigm within the N2Sky system. The goal of the N2Sky system is to share and exchange neural network resources, neural network speci...
The Cell Processor is a widely-used processor type embedded into a wide variety of technical appliances, ranging from television sets and gaming consoles to high performance computing servers. Its high availability and cost efficiency makes it very interesting for computing tasks as parts of scientific and business workflows. In this paper we prese...
Future e-business models will rely on electronic contracts which are agreed dynamically and adaptively by web services. Thus, the automatic negotiation of Service Level Agreements (SLAs) between consumers and providers is key for enabling service-based value chains. The process of finding appropriate providers for web services seems to be simple. C...
We present the N2Sky system, which provides a framework for the exchange of neural network specific knowledge, as neural network paradigms and objects, by a virtual organization environment. It follows the sky computing paradigm delivering ample resources by the usage of federated Clouds. N2Sky is a novel Cloud-based neural network simulation envir...
Training of Artificial Neural Networks for large data sets is a time consuming task. Various approaches have been proposed to reduce the efforts, many of them by applying parallelization techniques. In this paper we develop and analyze two novel parallel training approaches for Backpropagation neural networks for face recognition. We focus on two s...
This paper presents an economic cost model for cloud computing aiming to comprise all kinds of cost of a commercial environment. To extend conventional state-of-the-art models considering only fixed cost, we developed a concise but comprehensive analytical model, which also includes variable cost allowing for the development and evaluation of b...
"United we stand, divided we fall" is a well known saying. We are living in
the era of virtual collaborations. Advancement on conceptual and technological
level has enhanced the way people communicate. Everything-as-a-Service once a
dream, now becoming a reality.
Problem nature has also been changed over the time. Today, e-Collaborations
are applie...
In this paper we present a scalable and extensible architecture of a business rule management framework. This representation can be used for agent based automatic negotiation and re-negotiation of web services. To ensure scalability and extensibility our architecture is based on the service oriented design pattern using ontologies. Finally we devel...
Rucio is the successor of the current Don Quijote 2 (DQ2) system for the distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are on top of the list. The data collected so far by the experiment adds up to a...
Electronic contracts are crucial for future e-Business models due to the increasing importance of webservices and the cloud as a reliable commodity enabling service-based value chains. Negotiation is the prerequisite for establishing a contract between two or more partners. These contracts are usually based on Service Level Agreements (SLAs). In th...
We present N2Sky, a novel Cloud-based neural network simulation environment. The system implements a transparent environment aiming to enable arbitrary and experienced users to do neural network simulations easily and comfortably. The necessary resources, as CPUcycles, storage space, etc., are provided by using Cloud infrastructure. N2Sky also fost...
Purpose
The optimization of quality-of-service (QoS) aware service selection problems is a crucial issue in both grids and distributed service-oriented systems. When several implementations per service exist, one has to be selected for each workflow step. This paper aims to address these issues.
Design/methodology/approach
The authors proposed sev...
Electronic contracting is a key issue for establishing liquid markets dealing with electronic goods. This paper presents a framework for automatic negotiation between Web services. The major goal of the framework is to comprise all necessary components for negotiation and re-negotiation. The capabilities and the components are described both for Web...
With the ever increasing importance of web services and the Cloud as a reliable commodity to provide business value as well as consolidate IT infrastructure, electronic contracts have become very important. WS-Agreement has established itself as a well-accepted container format for describing such contracts. However, the semantic interpretation of...
Quality-of-Service (QoS) aware service selection problems are a crucial issue in both Grids and distributed, service-oriented systems. When several implementations per service exist, one has to be selected for each workflow step. Several heuristics have been proposed, including blackboard and genetic algorithms. Their applicability and performance...
This paper presents a methodological and structured approach to building Virtual E-Learning Organizations. We propose a Reference Architecture which allows capturing the existing knowledge of the domain and reusing that knowledge in the form of architecture patterns and building blocks. Our goal is to deliver a blueprint to help the IT community on...
In distributed systems, where several deployments of a specific service exist, it is a crucial task to select and combine concrete deployments to build an executable workflow. Non-functional properties such as performance and availability are taken into account in such selection processes that are designed to reach certain objectives while meeting...
In distributed, service-oriented systems, in which several concrete service instances need to be composed in order to respond to a request, it is important to select service deployments in an optimal and efficient way. Quality of Service attributes of deployments and network links are taken into account to decide between workflows that are identica...
In distributed, heterogeneous systems, where several deployments of a specific service exist, it is a crucial task to select and combine concrete deployments to build a service chain that can be an arbitrary workflow or a query execution plan. In order to decide between deployments with identical functionality, non-functional properties, also calle...
Traditionally, E-learning systems are built around static educational content and present to users (both teachers and students) a clumsy, hard-to-adapt environment, which leaves no room for adaptive integration of the users. User activity is restricted to the workspace, and the user has to adapt to what is already provided. Changing IT technologies has...
In large-scale distributed systems the selection of services and data sources to respond to a given request is a crucial task. Non-functional or Quality of Service (QoS) attributes need to be considered when there are several candidate services with identical functionality. Before applying any service selection optimization strategy, the system has...
The classification and selection of services within distributed, heterogeneous environments is a non-trivial task. For a proper selection and composition of services in such environments -- for example in a Grid or Cloud -- it is required to have detailed information about the existing resources and their characteristics. Particularly for app...
The present era has witnessed a rapid technological advancement, which has shaped our social connection into a new dimension. Online social networks are gaining acceptance and popularity among masses. PCs and mobile devices are used to connect with each other. Availability of online social networks on remote devices, such as cell phones, has made i...
IT-based Service Markets require an enabling infrastructure to support Service Value Chains and service choreographies resulting from service composition scenarios. This will result in novel business models where services compose together hierarchically in a producer-consumer manner to form service supply-chains of added value. Service Level Ag...
Business-to-Business (B2B) workflow/service interoperation across Virtual Organisations (VOs) brings about novel business scenarios. In these scenarios, parts of workflows (or services) corresponding to different partners can be aggregated in a producer-consumer manner, leading to hierarchical structures of added value interpreted as service value...
We present a novel optimization approach for the orchestration of the execution of database workflows in heterogeneous infrastructures, hereby specifically focusing on sorting algorithms. We give a case for the validity of our approach by developing a generic template for a family of optimization algorithms. We develop a model for the mathematical...
Until now, the research community mainly focused on the technical aspects of Grid computing and neglected commercial issues. However, recently the community tends to accept that the success of the Grid is crucially based on commercial exploitation. In our vision Foster's and Kesselman's statement ‘The Grid is all about sharing.’ has to be extended...
With the advent of Cloud computing, there is a high potential for third-party solution providers such as composite service providers, aggregators or resellers to tie together services from different clouds to fulfill the pay-per-use demands of their customers. Customer satisfaction which is primarily based on the fulfillment of user-centric objecti...
Cloud computing brings in a novel paradigm to foster IT-based service economy in scalable computing infrastructures by allowing guaranteed on-demand resource allocation with flexible pricing models. Scalable computing infrastructures not only require autonomous management abilities but also the compliance to users' requirements through Service Leve...
Medical research is a highly collaborative process in an interdisciplinary environment that may be effectively supported by a Computer Supported Cooperative Work (CSCW) system. Research activities should be traceable in order to allow verification of results, repeatability of experiments and documentation as learning processes. Therefore, by record...
Virtual Organizations (VO) are playing a major part in our daily communication. Many social online/offline networks are in use by humans all around the globe. The Computational Intelligence (CI) society is striving to build an online community to share resources, such as data, algorithms, human expertise, procedures and methods. Existing VOs are of...