Book
PDF Available

Heritage Technologies in Space Programs - Assessment Methodology and Statistical Analysis

Abstract

An established approach to cost and risk reduction in system development programs is the use of heritage technologies. A heritage technology is defined as a proven technology, reused in a new use context in an unaltered or adapted form. Heritage technologies are particularly relevant for space systems development programs, as development costs are usually high and stakeholders are risk-averse. Nevertheless, numerous space programs have encountered problems linked to improper ‘management’ of heritage technologies when reused, i.e., improper use, implementation, or adaptation. Improperly managed heritage technologies can lead to cost and schedule overruns, or even failure in the reuse application. Currently, the applicability of heritage technologies is mostly assessed ad hoc. The existing assessment approaches are deemed insufficient for providing decision-makers and analysts with ample guidance on the applicability of heritage technologies. This thesis presents a methodology for assessing heritage technologies in the early phases of development, taking into consideration the new use context of the technology, its necessary adaptations and modifications, and the technological capabilities of the implementing organization. The methodology focuses on the early phases because most technology selection takes place there. To empirically illuminate the relationship between the use of heritage technologies and the performance of space programs, a statistical analysis is performed. A three-component framework is developed that serves as the theoretical basis for both the statistical analysis and the methodology; it consists of a systems architecting framework, a technology framework, and a verification, validation, testing, and operation framework. Using multiple regression with control variables, a statistically significant relationship between heritage technology use and both specific development cost and development duration was confirmed. No statistically significant relationship between heritage use and development cost overrun or schedule overrun could be identified. Based on the framework and the results of the statistical analysis, a methodology for assessing heritage technologies in the early phases is developed. It allows for identifying potential compliance issues of the heritage technology with respect to changed needs and requirements. The impact of modifications is estimated via design structure matrices and a graph-edit-similarity algorithm. Furthermore, a heritage metric is presented that can be used for measuring heritage with respect to a new application. Finally, the methodology also allows for assessing technological and organizational capabilities. The methodology is validated through three space system case studies: 1) a CubeSat component technology, 2) a high-pressure tank technology for the Ariane 5 launcher, and 3) the Saturn V and Space Launch System technology. From the presented work it can be concluded that the methodology can be systematically applied to various types of space systems at different levels of decomposition. The heritage metric provides a rough estimate of the heritage of a technology for a new application and context. The statistical analysis confirmed that, in general, using heritage technologies significantly reduces specific development cost and development duration.
As future work, the developed methodology could be extended to other domains such as automotive engineering, aeronautics, and medical engineering, where heritage also plays an important role.
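To make the modification-impact step more concrete, the following is a minimal sketch, not the thesis implementation: it assumes binary design structure matrices (DSMs) with unit edit costs, interprets them as directed dependency graphs, and normalizes a generic graph edit distance (via networkx) into a rough heritage-similarity score between a baseline technology and its modified reuse variant. The DSM convention, the normalization, and the example matrices are illustrative assumptions; the thesis's actual heritage metric and edit-cost model differ.

```python
# Illustrative sketch (not the thesis implementation): compare the design
# structure matrix (DSM) of a heritage baseline with that of its modified reuse
# variant and turn a graph edit distance into a rough similarity score.
import networkx as nx
import numpy as np

def dsm_to_graph(dsm, labels):
    """Interpret a binary DSM as a directed dependency graph (convention assumed
    here: a nonzero entry [i, j] means component i depends on component j)."""
    dsm = np.asarray(dsm)
    g = nx.DiGraph()
    g.add_nodes_from(labels)
    for i, src in enumerate(labels):
        for j, dst in enumerate(labels):
            if i != j and dsm[i, j]:
                g.add_edge(src, dst)
    return g

def heritage_similarity(dsm_old, labels_old, dsm_new, labels_new):
    """Normalize an exact graph edit distance (unit costs) into [0, 1]."""
    g_old = dsm_to_graph(dsm_old, labels_old)
    g_new = dsm_to_graph(dsm_new, labels_new)
    ged = nx.graph_edit_distance(g_old, g_new)  # exact; fine for small DSMs
    # Upper bound on the edit cost: delete all of g_old, then insert all of g_new.
    bound = (g_old.number_of_nodes() + g_old.number_of_edges()
             + g_new.number_of_nodes() + g_new.number_of_edges())
    return 1.0 - ged / bound if bound else 1.0

# Hypothetical three-component heritage technology vs. a variant with one added dependency.
baseline = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
modified = [[0, 1, 1], [0, 0, 1], [1, 0, 0]]
print(heritage_similarity(baseline, "ABC", modified, "ABC"))  # ~0.92
```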
... Organizational competences are one of the main assets of large and complex companies (Christensen and Kaufman, 2006). They form the basis for the technologies, systems, and products these companies can develop, produce, and operate (Hein, 2016). Models of these competences would allow for systematic reasoning about combining or acquiring them, with the objective of exploring which new technologies, systems, and products would be accessible to an organization (Hein et al., 2014). ...
... Similarly, Danilovic and Leisner (2007) use design structure matrices for relating core competences of a company to its products, in order to identify core products. Hein (2016) proposes a competence model for the field of systems engineering, with the objective of assessing the reusability of technologies, i.e. for assessing the extent to which the underlying competences of a technology exist. Hein et al. (2014) propose a framework for modelling organizational competences in order to jointly architect organizational competences and system architectures. ...
... Competences are described for defining and measuring required competences with regard to the intended learning objectives. Drawing from the literature in philosophy, Hein (2016) adds that a competence "has a more or less defined "object" on which the competence acts upon" called an object of competence and an agent "which can perform an action". The competence can be either "a specific competence" or "a general competence". ...
Article
Full-text available
Organizational competences are one of the main assets of companies. Models of these competences would allow for systematic reasoning for exploring technological innovations, enabled by combining and transposing organizational competences. Today, the literature linking organizational competences to engineering design and systems engineering remains limited. In particular, a generic modelling approach for organizational competences for engineering design and systems engineering seems to be missing, although initial frameworks have been proposed for specific purposes. This paper presents a generic conceptual model of organizational competences. The objective is to link technology, product, and systems development with the corresponding organizational competences and their future evolution in order to allow for a joint design of competences and technologies, products, or systems. The conceptual model provides the basis for a competence combination framework which allows for modelling competence combinations in an organization. Finally, we validate our conceptual model using a case study from the automotive industry.
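As a reading aid, here is a minimal sketch of how such a competence model might be represented in code, under the simplifying assumptions that a competence has an agent, an object it acts upon, and a specific/general flag, and that a technology is considered accessible when all of its required competences are held; the class names and the coverage check are illustrative, not the paper's formalization.

```python
# Minimal sketch under simplifying assumptions (not the paper's formalization).
from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass(frozen=True)
class Competence:
    name: str       # e.g. "design high-pressure tank structures"
    agent: str      # who can perform the action, e.g. "structures team"
    obj: str        # the object the competence acts upon, e.g. "tank structure"
    specific: bool  # specific competence (True) vs. general competence (False)

@dataclass(frozen=True)
class Technology:
    name: str
    required: FrozenSet[Competence]  # competences needed to develop, produce, and operate it

def accessible_technologies(held: List[Competence], candidates: List[Technology]):
    """Return the candidate technologies whose required competences are all held."""
    held_set = set(held)
    return [t for t in candidates if t.required <= held_set]

design = Competence("design high-pressure tanks", "structures team", "tank structure", True)
qualify = Competence("qualify structures for launch loads", "test department", "tank assembly", True)
tank = Technology("composite high-pressure tank", frozenset({design, qualify}))
print([t.name for t in accessible_technologies([design, qualify], [tank])])
```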
... Technology as knowledge (Layton, 1974); knowledge captured in technological artifacts (Vincenti, 1990, 1992); technology elements: artifact (car, airplane), design (specification of artifact), competencies (knowledge, tools, methods, models) (Hein, 2016); defining the unit of analysis, e.g. airplane: ...
... maintenance workshops) and competencies (personnel for development and production) also included? (Hein, 2016). Potential breakthrough technologies are often less precisely defined; autonomous driving: what level of autonomy? ...
Presentation
Full-text available
Breakthrough technologies are technologies that introduce radically new capabilities or a performance increase of at least an order of magnitude. Examples are the turbojet, inertial navigation, and autonomous vehicles. The existing literature on the sociology and history of technology has used numerous case studies of breakthrough technologies for exploring their communities of practice, institutional context, and geographic context. The strategic management literature has dealt with the relationship between breakthrough technologies and a firm’s competitiveness. However, the role that models played during the very early phases of the emergence of a potential breakthrough technology does not seem to have been explored yet. This presentation addresses the questions of how engineering models are used in assessing the feasibility of potential breakthrough technologies and what the particular characteristics of the use of engineering models in this context are. The basic problem setting is the relationship between feasibility-related questions about a potential breakthrough technology and models that provide support for answers pertaining to these questions. Important issues regarding this relationship are often about the validity of analogies (If x is feasible in context y, then x is feasible in context z.), scalability (If x works with a set of parameters Y, then it works with a changed set of parameters Y’.), and the unexpected emergence of new physical effects, which I call “technology-dependent physics”. By using four historical examples of (potential) breakthrough technologies and one detailed case study, the role of these three attributes of models is explored. It is concluded that these three attributes seem to play an important role in assessing the feasibility of breakthrough technologies via models, raising the question of what form of validity they would result in.
... In engineering, newness is associated with risk, as a new system has not been tried in practice or in a different context and the technology it is built upon is not yet validated, e.g. (Hein, 2016). The aerospace industry (Mankins, 1995) has developed technology readiness levels (TRL) to assess whether a technology is sufficiently mature to be deployed in a product, where TRL 1 corresponds to a newly discovered basic principle, TRL 5 requires a test in a realistic product, and TRL 9 is required for introduction in a product. ...
... The potential of doing something is called capability in the following [36]. In the second step of the characteristics-capabilities method, capabilities are derived from characteristics by asking the question "What can be done in a new way or better way by exploiting one or more characteristics?". ...
Preprint
Full-text available
The miniaturization of electronic and mechanical components has allowed for an unprecedented downscaling of spacecraft size and mass. Today, spacecraft with a mass between 1 and 10 grams, called AttoSats, have been developed and operated in space. Due to their small size, they introduce a new paradigm in spacecraft design, relying on agile development, rapid iterations, and massive redundancy. However, no systematic survey of the potential advantages and unique mission concepts based on AttoSats exists. This paper explores the potential of AttoSats for future space missions. First, we present the state of the art of AttoSats. Next, we identify unique AttoSat characteristics and map them to future mission capabilities. Finally, we go beyond AttoSats and explore how smart dust and nano-scale spacecraft could allow for even smaller spacecraft in the milligram range: zepto- and yocto-spacecraft.
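The characteristics-to-capabilities mapping mentioned above can be illustrated with a small sketch; the characteristics and derived capabilities listed here are placeholders chosen for illustration, not the paper's survey results.

```python
# Illustrative sketch of the characteristics-to-capabilities step; the entries in
# this mapping are placeholders, not the paper's survey results.
CHARACTERISTIC_TO_CAPABILITIES = {
    "gram-scale mass":        ["very low launch cost per unit", "deployment in large swarms"],
    "massive redundancy":     ["graceful degradation of a swarm mission"],
    "rapid design iteration": ["short development cycles", "frequent in-orbit technology trials"],
}

def derive_capabilities(characteristics):
    """Step 2 of a characteristics-capabilities analysis: collect what can be done
    in a new or better way by exploiting the selected characteristics."""
    capabilities = set()
    for c in characteristics:
        capabilities.update(CHARACTERISTIC_TO_CAPABILITIES.get(c, []))
    return sorted(capabilities)

print(derive_capabilities(["gram-scale mass", "massive redundancy"]))
```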
... In engineering newness is associated with risk, as a new system has not been tried in practise or in a different context and the technology it is built upon is not yet validated, e.g. (Hein, 2016). The aerospace industry (Mankins 1995) has developed technology readiness levels (TRL) to assess whether a technology is sufficiently mature to be deployed in a product, where TRL 1 corresponds to a new basic principle that is discovered, TRL 5 requires a test in a realistic product and TRL 9 is required for introduction in a product. ...
Article
Full-text available
The aim of the paper is to foster a discussion in the engineering design community about its understanding of the innovation phenomenon and the unique contribution that comes from engineering design. The paper reports on the dialogue originating from a series of workshops with participants from different backgrounds in engineering design, systems engineering, industrial design, psychology, and business. Definitions of innovation are revisited as used in business, management, and engineering design contexts. The role of innovation is then discussed in relation to product development from (i) the management perspective, (ii) a systems architecture perspective, and (iii) in relation to sustainable development as one driver of innovation. It is argued that engineering design has a central role in how to realise the novelty aspect of innovation and often plays a critical role in maturing these into valuable products, and that there is a need to articulate the role of engineering design in innovation to better resonate with business and management research.
... Apart from this quantitative framework for comparing an agent's capability, we further use a qualitative maturity scale for analyzing task-specific capabilities and general capabilities with respect to AI probe missions, drawing heavily from Hernandez-Orallo [80,79] and Hein [69]. The results of this qualitative analysis are presented in Section 4. ...
Preprint
Full-text available
The large distances involved in interstellar travel require a high degree of spacecraft autonomy, realized by artificial intelligence. The breadth of tasks artificial intelligence could perform on such spacecraft includes maintenance, data collection, and designing and constructing an infrastructure using in-situ resources. Despite its importance, existing publications on artificial intelligence and interstellar travel are limited to cursory descriptions where little detail is given about the nature of the artificial intelligence. This article explores the role of artificial intelligence for interstellar travel by compiling use cases, exploring capabilities, and proposing typologies, system and mission architectures. Estimations of the required intelligence level for specific types of interstellar probes are given, along with potential system and mission architectures, covering those proposed in the literature but also presenting novel ones. Finally, a generic design for an interstellar probe with an AI payload is proposed. Given current rates of increase in computational power, a spacecraft with a computational power similar to the human brain would have a mass from dozens to hundreds of tons in a 2050-2060 timeframe. Given that the advent of the first interstellar missions and of artificial general intelligence is estimated to occur by the mid-21st century, a more in-depth exploration of the relationship between the two should be attempted, focusing on neglected areas such as protecting the artificial intelligence payload from radiation in interstellar space and the role of artificial intelligence in self-replication.
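The mass figure quoted above rests on a compute-scaling argument; the sketch below shows the general shape of such an estimate with placeholder parameters (brain-equivalent compute, flight-qualified compute density, doubling period) that are assumptions made here for illustration and are not taken from the article, so the output is only an order-of-magnitude illustration, not the article's result.

```python
# Back-of-envelope sketch of a compute-scaling mass estimate. All three parameters
# are placeholder assumptions for illustration only; they are NOT the article's values.
BRAIN_EQUIVALENT_FLOPS = 1e20   # assumed brain-equivalent compute (published estimates vary widely)
FLOPS_PER_KG_2020 = 1e12        # assumed flight-qualified compute density in 2020
DOUBLING_PERIOD_YEARS = 2.5     # assumed doubling period of compute density

def payload_mass_tons(year):
    """Mass of a compute payload matching BRAIN_EQUIVALENT_FLOPS in a given year."""
    doublings = (year - 2020) / DOUBLING_PERIOD_YEARS
    flops_per_kg = FLOPS_PER_KG_2020 * 2 ** doublings
    return BRAIN_EQUIVALENT_FLOPS / flops_per_kg / 1000.0

for year in (2050, 2060):
    print(year, round(payload_mass_tons(year), 1), "t")  # with these placeholders: ~24 t and ~1.5 t
```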
... The main inputs and outputs of the methodology are shown in Figure 2. As input, the technology under consideration is required. A technology is understood as a combination of an artifact and its underlying competencies that allow for its development, production, and operation, as defined in the technology maturity literature [33]-[35]. In addition, the field of interest is a set of domains, such as an area of economic activity (e.g. ...
Conference Paper
Full-text available
Product-service systems (PSS) can be understood as an innovation / business strategy that includes a set of products and services realized by an actor network. More recently, PSS that comprise Systems of Systems (SoS) have been of increasing interest, notably in the transportation sector (autonomous vehicle infrastructures, multi-modal transportation) and the energy sector (smart grids). Architecting such PSS-SoS goes beyond classic SoS engineering, as they are often driven by new technology, without an a priori client and actor network, and thus with a much larger number of potential architectures. However, it seems that neither the existing PSS literature nor the SoS literature provides solutions for how to architect such PSS. This paper presents a methodology for architecting PSS-SoS that are driven by technological innovation. The objective is to design PSS-SoS architectures together with their value proposition and business model, starting from an initial technology impact assessment. For this purpose, we adapt approaches from the strategic management, business modeling, PSS, and SoS architecting literature. We illustrate the methodology by applying it to the case of an automobile PSS.
... • Implementation of the heritage metric developed by Hein [132] • Expanding the design framework to include air-breathing engines such as ramjets and scramjets, as well as balloon ascents ...
Thesis
Full-text available
Since the creation of the Ansari X-Prize, significant technical and commercial interest has developed in sub-orbital space tourism. An obvious question arises: what system architecture will provide the best combination of safety and economic return? The objective of this thesis is to address this question by searching comprehensively through the architectural design space and evaluating optimized architectures for cost and safety. Generally, in the early stages of development, systems built for a specific function lie in a broad architectural space, with numerous concepts being developed, built, and tested. As the product matures, certain concepts become more dominant and the variety of concepts in use decreases [1]. Consider the wide range of “flying machines” in the decades before and after the Wrights. History teaches us that the original architectural decisions (e.g. biplane, pusher propeller, and canard) do not always survive as the dominant design [2]. This phenomenon of a wide variety of concepts can currently be observed in the suborbital tourism industry. The specific objective of this work is to explore the design space as thoroughly as possible to identify architectures that are more likely to succeed. In doing so, we identify the limits of the plausible design space and the decisions that define the space. Then, we build a parametric model for each of the viable options and optimize that architecture with respect to the objective functions. Finally, we assess the non-dominated architectures in the risk and cost dimensions, identify the small handful of designs that merit more refined design analysis, and give guidance for structuring the decision-making process to choose one architecture.
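The final filtering step described above, keeping only architectures that are not dominated in the cost and risk dimensions, can be sketched as follows; the architecture names and cost/risk numbers are invented placeholders.

```python
# Sketch of the non-dominance filter on the cost and risk dimensions (lower is
# better in both); architecture names and numbers are invented placeholders.
def non_dominated(architectures):
    """Keep architectures for which no other option is at least as good in both
    cost and risk and strictly better in at least one."""
    keep = []
    for name, cost, risk in architectures:
        dominated = any(
            c <= cost and r <= risk and (c < cost or r < risk)
            for n, c, r in architectures if n != name
        )
        if not dominated:
            keep.append((name, cost, risk))
    return keep

candidates = [
    ("winged vehicle, hybrid rocket",   55.0, 0.020),
    ("capsule, solid booster",          40.0, 0.035),
    ("capsule, liquid booster",         48.0, 0.025),
    ("winged vehicle, air-launched",    70.0, 0.015),
    ("capsule, liquid booster, heavy",  50.0, 0.030),  # dominated by the lighter liquid-booster capsule
]
for arch in non_dominated(candidates):
    print(arch)
```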
... Although it might seem that principle feasibility regarding physical effects and working principles has a clear yes/no answer, we demonstrate, using records from historical and ongoing debates, that converting physical effects into applicable engineering knowledge is not trivial and that the framing of the feasibility question plays an important role. In the following, we limit our focus to technologies as physical artifacts (hardware) along with their design (Hein, 2016; Olechowski et al., 2015). For software and algorithms, feasibility issues are more closely related to logic, proofs, and mathematics, e.g. ...
Conference Paper
Full-text available
Breakthrough technologies are technologies that introduce radically new capabilities or a performance increase of at least an order of magnitude. Examples are the turbojet, inertial navigation, and autonomous driving. However, a remarkable pattern for these technologies is that their feasibility seems to have been initially contested. Existing approaches for technology assessment such as the Technology Readiness Levels do not seem to be adequate for capturing the subtle dimension of assessing the potential of breakthrough technologies, as they rather focus on technological maturity. Important aspects such as performance, enabling systems, and contextual factors are not taken into account. This paper addresses the principle feasibility of breakthrough technologies by looking at what arguments for and against principle feasibility were/are used and how the feasibility question was resolved. For this purpose, we reconstruct past and ongoing principle feasibility debates of four exemplary breakthrough technologies using a technology conceptual model and argument maps. For the four technologies analysed, we conclude that sufficient expected performance was a key issue debated in all cases, whereas physical effects and working principles were issues for breakthrough technologies with a relatively low maturity. Principle feasibility issues for breakthrough technologies seem to be resolved by introducing new component technologies and working principles. For future work, we propose the use of case and field studies in order to explore contextual feasibility criteria for breakthrough technologies such as injection into existing system architectures, enabling systems, and market readiness.
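An argument map of the kind used to reconstruct such feasibility debates can be represented with a very simple data structure; the sketch below and its example claims are illustrative and are not taken from the paper's four case studies.

```python
# Illustrative argument-map data structure for a principle-feasibility debate;
# the example claims are invented, not taken from the paper's case studies.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    claim: str
    supports: List["Argument"] = field(default_factory=list)  # pro arguments
    attacks: List["Argument"] = field(default_factory=list)   # con arguments

def render(arg, indent=0, marker=""):
    """Print the map as an indented tree of pro [+] and con [-] claims."""
    print(" " * indent + marker + arg.claim)
    for a in arg.supports:
        render(a, indent + 2, "[+] ")
    for a in arg.attacks:
        render(a, indent + 2, "[-] ")

root = Argument("Technology X is feasible in principle")
root.supports.append(Argument("The underlying physical effect has been demonstrated at laboratory scale"))
root.attacks.append(Argument("Expected performance falls short of what a useful system requires"))
render(root)
```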
Article
Full-text available
The primary objective of this paper is to contribute to the existing literature by comprehensively reviewing the development, definitions, and concepts of technology and technology transfer, based on a literature review conducted on these wide research areas. This review covers various definitions and dimensions of both technology and technology transfer, from the early technology concept, i.e. from the development of Solow's (1957) growth model, up to Maskus's (2003) definition and concept of technology and technology transfer. While the term 'technology' itself is difficult to interpret, observe, or evaluate, as argued by many scholars, this review attempts to provide an in-depth discussion and enhance understanding of these concepts from various perspectives, research backgrounds, and disciplines. This review could offer ideas for future researchers to further identify, conceptualize, and understand the underlying theories and perspectives which strongly influence previous, current, and future concepts of technology transfer.
Article
Reviewers of research reports frequently criticize the choice of statistical methods. While some of these criticisms are well-founded, frequently the use of various parametric methods such as analysis of variance, regression, and correlation is faulted because: (a) the sample size is too small, (b) the data may not be normally distributed, or (c) the data are from Likert scales, which are ordinal, so parametric statistics cannot be used. In this paper, I dissect these arguments and show that many studies, dating back to the 1930s, consistently show that parametric statistics are robust with respect to violations of these assumptions. Hence, challenges like those above are unfounded, and parametric methods can be utilized without concern for “getting the wrong answer”.
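The robustness claim can be checked with a small simulation sketch; the 5-point distribution, group size, and trial count below are assumptions chosen for illustration. Drawing both groups from the same skewed, ordinal distribution, the t-test's false-positive rate should stay close to the nominal 5%.

```python
# Simulation sketch: empirical type I error rate of a t-test on Likert-like data
# drawn from the same non-normal, ordinal distribution for both groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
levels = np.array([1, 2, 3, 4, 5])
probs = np.array([0.10, 0.15, 0.25, 0.30, 0.20])  # skewed, clearly non-normal

n_trials, n_per_group, alpha = 5000, 25, 0.05
false_positives = 0
for _ in range(n_trials):
    a = rng.choice(levels, size=n_per_group, p=probs)
    b = rng.choice(levels, size=n_per_group, p=probs)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

print("empirical type I error rate:", false_positives / n_trials)  # expected to be close to 0.05
```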
Conference Paper
Technologies have become a critical issue within product development, since superior technologies are the foundation for successful products. Technology development has until now suffered from a fuzzy innovation process based on trial and error in a high-pressure product development environment, often leaving no time for real innovation. Technologies developed under these circumstances seldom become superior, robust, mature, and flexible, the criteria considered critical for technologies to provide competitive advantage. In this paper, the idea of separating technology development from product development into a steady technology stream is developed. This enables companies to supply their product development programs with winning technologies at the right time. A four-phase process framework to support and catalyze the technology development cycle is introduced, with the first and second phases discussed in depth. The proposed framework is based on an integration of five major development methodologies and aims at providing competitive advantage to companies by emphasizing superior, robust, mature, and flexible technologies.
Article
When the call for papers for a special issue of Icarus devoted to analysis of data from the Lunar Reconnaissance Orbiter mission was announced in March 2015, we envisioned a single issue, with only the possibility of a second. We were certainly gratified by the response from within and outside the LRO instrument teams, such that we were compelled to publish this, the third and final volume.