Figure 1 - uploaded by Ioana Bratu
Figure 1. Accountability Concepts under Core International Space Law. B. ABSOLUTE AND FAULT-BASED LIABILITY: "Under the Liability Convention, liability is by definition attributed only ..." (text truncated in source).

Source publication
Article
Full-text available
The introduction of advanced new technologies is transforming the space industry. Artificial intelligence is offering unprecedented possibilities for space-related activities because it enables space objects to gain autonomy. The increasing autonomy level of space objects does not come without legal implications. The lack of human control challenge...

Citations

... A practical legal framework governing space exploration is critical to tackling issues like space debris removal, space security, and the exploration and utilization of space resources. [19] The emergence of disruptive innovations necessitates the integration of legal technologies to navigate the legal landscape and ensure the seamless development of intelligent contract legal frameworks. Additionally, introducing blockchain technology significantly influences the legal domain, reshaping conventional legal concepts. ...
Article
Full-text available
The paper examines the complex and dynamic connection between technology and law in space exploration, providing a detailed analysis of their interactions and difficulties. It traces the historical progression of space technology, highlighting significant advancements that have propelled humanity into outer space. It examines the legal frameworks that govern space activities, including international agreements and national regulations, to demonstrate how legal systems can adapt to the complexities of space exploration. Within this framework, the research explores essential issues that arise when technology and law intersect, including the handling of space debris, ethical concerns, and the regulatory environment for private space companies. The paper emphasises the urgent need for flexible legal systems that can efficiently govern the growing capabilities of space technology. Furthermore, it emphasises the significance of global cooperation as a fundamental element for efficient governance, guaranteeing the accountable and enduring exploration of space. This research combines knowledge of technology, law, and ethics to contribute to the ongoing discussion. It supports a comprehensive approach that aligns the advancement of space technology with robust legal frameworks and ethical considerations. This approach aims to guide humanity's exploration beyond Earth.
... The space treaties do not explicitly address AI systems, but their provisions and principles can be interpreted in the context of AI systems in the space domain. Principles such as the use for the benefit of all (hu)mankind [45] ... Although this regime may require reform to tackle the nuances of the modern space industry, [46] it is still a cornerstone of the obligations of States and the way they regulate space activities within their jurisdictions, [41, Art. VI] giving it tremendous influence as a means of compelling responsible behaviour in the sector. ...
Conference Paper
Full-text available
Advances in Artificial Intelligence (AI) technologies are enabling a plethora of new applications across many industries. There are already a multitude of applications for AI systems in the space industry, but as the AI and space industries continue to grow in size and value rapidly, further use-cases will become apparent and proliferate to all corners of space operations and data analytics. Such space-based AI systems will bring many economic, scientific, and environmental benefits; however, they could also enable harm to individuals, organisations, and the environment if they are not developed and managed properly. Potential breaches of privacy through AI-assisted analysis of Earth observation imagery and collisions between objects in orbit due to malfunctioning automated maneuvering systems are examples of the harms that could eventuate if poorly designed AI systems are deployed in the space-sector. Responsible AI practices are needed to ensure such risks do not eventuate. 'Responsible' (or 'ethical') AI has emerged as a discipline designed to guide responsible AI development wherein the goal is to maximise the benefits of AI systems for individuals and society while mitigating against any potential harm that they may cause. Commonly accepted Responsible AI principles include accountability, contestability, fairness, security, privacy, transparency, explainability, and reliability. At times notions of 'do-no-harm' and generating 'net benefits' for society and the environment are also included. These principles of Responsible AI are generalizable and industry agnostic; however, they should be carefully considered in the context of the unique physical, economic, political, and technological characteristics of the space domain before being adopted wholesale by the space industry. 
While concepts such as security and reliability can be readily applied to applications of AI systems in the space domain, other ideals such as contestability, fairness, and explainability may not be as relevant to the use cases found within the space industry. This paper introduces the concept of Responsible AI and Responsible AI principles and then examines the applicability and appropriateness of widely accepted Responsible AI principles in the context of existing and emerging regulatory instruments relevant to the space industry. This serves as a first step towards creating a standardized regulatory framework for the responsible development of space-based AI systems and preventing harms associated with such systems occurring.
... One interesting point of view is that both agencies were established under well-defined legal instruments; however, these instruments were adopted only after a serious issue requiring regulation had been identified. The difference between those industries and the space industry is that current technological advancements, such as the emergence of AI technologies [30,31], could severely impact the international community if not addressed by a legal instrument. Thus, the lessons from other industries should be applied to the issues surrounding space activity, rather than waiting for a catastrophe before regulating its impact. ...
... The challenges relating to the use of AI technologies in space could be an opportunity to create a positive impact through the enhancement of the space industry by creating a space-specialized agency of the UN to govern the current activities through the adoption of new treaties. Such treaties may oversee the activities of large satellite constellations [31] through an adequately developed STM that includes the different aspects related to governments, agencies, and private stakeholders. This will result in a dramatic change in the industry towards a safer, more secure, and more sustainable environment for space actors, consequently creating a more reliable and stable situation. ...
Article
Full-text available
The space industry is one of the most technologically advanced industries and aims for scientific explorations that benefit humanity on multiple fronts. Artificial Intelligence (AI) technologies comprise game-changing tools that could be utilized to facilitate space exploration aims. The emergence of AI in the space industry would transform how both industries look. Since many challenges in the current space industry could be addressed by implementing artificial intelligence, space objects will become "Intelligent Space Objects". Different studies have explored the implementation of AI technologies in space activities and their legal implications. The scope of this paper goes beyond the existing work. It will investigate the main AI applications in space and then explore their legal challenges, including issues related to regulations, liability, and policy questions. Accordingly, it will discuss the need for developing a novel legal framework to address these challenges, creating a strategic opportunity for international collaboration between states and organizations that will contribute to advancing space law. This study will review, evaluate, and analyze the current situation and recommend ways to establish a novel international space organization. DOI: 10.28991/HIJ-2023-04-01-04
... Furthermore, AI is involved in preventing harm through avoidance of collisions by being an important part of the manufacturing process (Schmelzer 2020), thus minimizing the chance of human error in the production phase and ensuring a better functioning of the satellite once it is launched into outer space. Directly, AI is involved in satellite collision prevention by monitoring "the health" of satellites, namely, keeping a constant watch on a satellite's sensors and other equipment, alerting in case of malfunction or a threat of collision, and in some cases, even carrying out corrective action (Schmelzer 2020; Bratu et al. 2021). In other words, AI plays an important role in controlling and navigating space objects (Schmelzer 2020). ...
... ESA is currently developing a collision-preventing system, which will automatically assess the risk and probability of collisions in outer space and will, upon such an assessment, decide or partake in the decision-making process in order to establish the appropriate corrective action (a step toward strong AI). Such action may be either to conduct a responsive maneuver to avoid the forthcoming threat, or to send out orders and signals to other satellites involved in the potential collision to carry out such a maneuver on their own (ESA 2019; Bandivadekar and Berquand 2021; Bratu et al. 2021). Despite the usefulness of already existing collision-preventing mechanisms as well as the expected efficiency of those under development, concerns regarding the potential vulnerability of such systems arise. ...
Chapter
Biometrics covers a variety of technologies used for the identification and authentication of individuals based on their behavioral and biological characteristics. A number of new biometric technologies have been developed, taking advantage of our improved understanding of the human body and advanced sensing techniques. They are increasingly being automated to eliminate the need for human verification. As computational power and techniques improve and the resolution of camera images increases, it seems clear that many benefits could be derived through the application of a wider range of biometric techniques for security and surveillance purposes in Europe. Facial recognition technology (FRT) makes it possible to compare digital facial images to determine whether they are of the same person. However, there are many difficulties in using such evidence to secure convictions in criminal cases. Some are related to the technical shortcomings of facial biometric systems, which impact their utility as an undisputed identification system and as reliable evidence; others pertain to legal challenges in terms of data privacy and dignity rights. While FRT is coveted as a mechanism to address the perceived need for increased security, there are concerns that the absence of sufficiently stringent regulations endangers fundamental rights to human dignity and privacy. In fact, its use presents a unique host of legal and ethical concerns. The lack of both transparency and lawfulness in the acquisition, processing and use of personal data can lead to physical, tangible and intangible damages, such as identity theft, discrimination or identity fraud, with serious personal, economic or social consequences. Evidence obtained by unlawful means can also be subject to challenge when adduced in court. This paper looks at the technical and legal challenges of automated FRT, focusing on its use for law enforcement and forensic purposes in criminal matters. 
The combination of both technical and legal approaches is necessary to recognize and identify the main potential risks arising from the use of FRT, in order to prevent possible errors or misuses due both to technological misassumptions and threats to fundamental rights, particularly—but not only—the right to privacy and the presumption of innocence. On the one hand, a good part of the controversies and contingencies surrounding the credibility and reliability of automated FRT is intimately related to their technical shortcomings. On the other hand, data protection, database custody, transparency, accountability and trust are relevant legal issues that might raise problems when using FRT. The aim of this paper is to improve the usefulness of automated FRT in criminal investigations and as forensic evidence within the criminal procedure.
Keywords: Biometrics; Privacy; Dignity; Forensics; Facial recognition; Infallibility
... A further class of AI systems concerns the new generation of autonomous astronaut assistants, such as the Crew Interactive Mobile Companion (CIMON), that enables voice-controlled access to media and documents, navigation through operating and repair instructions, down to planetary exploration, especially in conditions too dangerous or prohibitive for humans. The autonomy of AI systems and a certain degree of unpredictability and opacity of these technologies may raise thorny issues of space law that regard damages to be covered, the liability regime to be applied, or the procedure to be followed once such damages occur in space or here down on earth (Tronchetti 2013;Bratu et al. 2021). One of the main assumptions of this paper is that pillars of the current legal framework -which revolves around the international treaties for outer space signed under the auspices of the United Nations in the 1960s and 1970s -fall increasingly short in coping with the challenges of AI (Martin and Freeland 2021). ...
Preprint
Full-text available
The paper examines the open problems that experts of space law shall increasingly address over the next years, according to four different sets of legal issues. Such differentiation sheds light on what is old and what is new in today's troubles of space law, e.g., the privatization of space, vis-à-vis the challenges that AI raises in this field. Some AI challenges depend on its unique features, e.g., autonomy and opacity, and how they affect pillars of the law, whether on earth or in space missions. The paper insists on a further class of legal issues that AI systems raise, however, only in outer space. We shall never overlook the constraints of a hazardous and hostile environment, such as in a mission between Mars and the Moon. The aim of this paper is to illustrate what is still mostly unexplored, or is in its infancy, in this kind of research, namely, the fourfold way in which the uniqueness of AI and that of outer space impact standards of the law. The claim is that a new generation of sui generis standards of space law, stricter or more flexible standards for AI in outer space, down to the 'principle of equality' between human standards and robotic standards will necessarily follow as a result of this twofold uniqueness.
... The AI Act provides a comprehensive definition for AI systems [19]. The concept ranges from basic systems, such as symbolic expert systems, to more advanced systems reaching high automation levels and operating based on sophisticated learning approaches, such as machine learning, a process inspired by the neural networks of the human brain [20]. One of the key elements in differentiating between various AI systems is the degree of human control deployed in the decision-making process. ...
Conference Paper
Full-text available
GNSS offer solutions for many sectors, from road traffic, aviation, emergency-response services, civil engineering and agriculture. Due to the latest technological developments, GNSS, including Galileo, are also being integrated as an essential component of AI systems with various automation levels, such as self-driving vehicles, drones, lane keeping systems on highways etc. Despite their numerous benefits, GNSS are not risk-free. Even though it is unlikely that a loss of signal will lead to an accident caused by an AI system, this scenario cannot be totally ignored. Recent incidents revealed a series of vulnerabilities that need to be addressed before more AI systems using GNSS signals can become active participants in our societies. In this context, it becomes clear that the most pressing issue is the one related to liability: who will be liable in case an accident is caused by an AI system due to an absent or inaccurate GNSS signal at a critical point during navigation? Taking into consideration the debates concerning Galileo's potential acceptance of liability, this paper investigates if international space law is able to prevent potential liability gaps, thus avoiding situations where incidents occur and liability cannot be attributed.
Chapter
Full-text available
The paper addresses the challenges brought forth by projects and investments of private companies in mass space exploration, space tourism, and scientific research in outer space missions. Such projects and investments are examined through the case study of the regulatory framework for the Columbus Laboratory in the International Space Station. The aim is to illustrate the limits of traditional approaches in the field of space law, and how ethics and moral arguments help fill the gaps in current legal regulations. The quest for the democratization of outer space casts light on the democratic deficit of institutions such as the European Union, vis-à-vis current trends in the privatization of outer space.
Article
Full-text available
Advances in artificial intelligence (AI) and automated robotics will profoundly influence space operations. By utilising machine learning and deep learning approaches, AI-enabled systems may accomplish tasks as well as improve their own performance. These capabilities are useful in the often-remote settings of outer space and will grow in value as automated space operations become more widespread. As AI extends throughout the space domain, automated algorithms will take on many of the roles that have historically been handled by humans. Artificial intelligence is progressing from theory to implementation in the space environment by exposing new satellites and orbital autonomous vehicles to new data. Even though all initial computational parameters are provided, such systems' outputs can be very unpredictable, putting people, property, and the environment at risk. This paper investigates the application of United Nations space treaties, selected regional AI regulations, and various 'soft-law' instruments and industry initiatives focusing on responsible AI system development to space-based AI systems. Following that, reforms are proposed to clarify the practical relationship between AI systems and the international legal regime that governs space, as well as a 'bottom-up' regulatory approach to better facilitate the future development of regulation governing the use of AI by the global space sector. While this work does not purport to provide a conclusive resolution to these multifaceted matters, its objective is to underscore significant obstacles that arise at the convergence of space law and AI, serving as a preliminary foundation for subsequent discussions on this issue.