Figure 2 - uploaded by John Carrico
The lead author conducting RAS field tests in 2015 during the Space Launch System QM-1 solid-rocket booster static firing test at the Orbital ATK plant in northern Utah.

Source publication
Conference Paper
“In space, no one can hear you scream,” as the tagline from the sci-fi film Alien goes. But what if there were a way of “hearing” in space, moving in-space video from the Silent Era to a more contemporary cinematic experience? How could this capability be applied to shape future spacecraft and mission designs? Such a capability can be effectively...

Context in source publication

Context 1
... experiments performed by the lead author (Fig. 2) over the last 10 years have demonstrated the ability to visually and aurally observe a variety of scenes, ranging from exoatmospheric rocket flight at >100 km observing distances and ground-based visual/aural observations at 45 km (28 mi) distances, to extreme close-up views of protozoa within micro-aquatic worlds [1,2,11,12,13]. The ...

Citations

... Satellite missions equipped with LIDAR systems exist, but they provide neither high resolution nor area coverage, as they focus on specific regions [34], [35]. As other space-borne, non-imagery approaches such as "semi-acoustic" sensing are still in their development phase [36], and data on gravitational fields does not seem to possess any immediate relevance for ESM, this thesis will focus on imagery sensing techniques. ...
Thesis
Due to an increase in extreme weather events, climate change is steadily moving to the center of public discourse. The consensus on the global challenges posed by a further increase in global warming is also reflected in political decisions such as the Paris Climate Agreement. However, the transition to an energy supply based on renewable energies is opposed by a globally increasing energy demand, the security of supply, and economic interests. In this area of tension, energy system modelling seeks solutions that lead to both climate-neutral and cost-effective energy systems by evaluating modeled scenarios. However, the decentralized and fluctuating character of renewable energies places high demands on the temporal and spatial resolution of these models, which cannot be met with current databases. This work therefore explores the possibilities of automated database generation through a combined approach of remote sensing data and machine learning, using coal-fired power plants and wind turbines as examples. To determine the relevant modeling parameters, state-of-the-art models are compared and current databases are analyzed. Since essential training data for the development of neural networks tailored to the chosen power generators are not available, this work develops a framework that allows training data to be generated from geo-referenced databases and thus enables different combinations of energy infrastructures and satellite data. Moreover, the framework provides a toolbox for integrating different neural network structures. On this basis, a converted classification and detection network is trained and evaluated using coal-fired power plants and wind turbines. The results show that detection of wind turbines is possible even at image resolutions of 10 meters. The trained neural networks are able to detect the majority of existing wind turbines in area-wide remote sensing images. With the presented approach it is possible to generate global datasets for power system analysis with a high degree of automation based on satellite data.
... We define the acoustic playback of high-rate photometric data as its photoacoustic signature. The light-to-sound conversion is possible because photons carry equivalent information content that an acoustic wave cannot, since there is no medium for sound to propagate through in the vacuum of space [2,3]. This technique has proven robust in terrestrial tests that demonstrated the capability to accurately recover a conversation, or a song playing on the radio, by collecting light modulated by a flexible surface near the acoustic source, such as a plant leaf [4]. ...
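The playback idea in this snippet, treating high-rate photometric samples as an audio waveform, can be sketched in a few lines. Everything below is an illustrative assumption (the 20 kHz sample rate and the synthetic 440 Hz brightness flutter are invented for the demo), not the authors' actual pipeline:

```python
import math
import struct
import wave

def photometry_to_wav(samples, path, sample_rate=20000):
    """Convert high-rate photometric samples to a mono 16-bit WAV.

    The slowly varying brightness (the strong bias term) is removed so
    only the acoustic-band modulation remains, then the residual signal
    is normalized to full scale.
    """
    mean = sum(samples) / len(samples)
    ac = [s - mean for s in samples]              # strip the bias term
    peak = max(abs(v) for v in ac) or 1.0
    pcm = [int(32767 * v / peak) for v in ac]     # scale to 16-bit range
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sample_rate)
        w.writeframes(struct.pack(f"<{len(pcm)}h", *pcm))

# Assumed example: a 440 Hz brightness flutter riding on a strong constant glint
t = [i / 20000 for i in range(20000)]
flux = [1000.0 + 0.5 * math.sin(2 * math.pi * 440 * x) for x in t]
photometry_to_wav(flux, "photoacoustic.wav")
```

Played back, one second of such data would be heard as a 440 Hz tone; real photometric streams would of course carry noise and atmospheric effects on top of any modulation.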
... Unless explicitly stated otherwise, a 0.04 m_v standard deviation and zero mean are assumed for the noise distribution, also denoted N(0, 0.04²), as an additive effect on an object's apparent visual magnitude.² The intensity relative to the Sun's apparent brightness generated from each facet and reflected in the direction of the observer is ...
¹ Data obtained from the ExoAnalytics marketing video on YouTube titled "Commercial Space Situational Awareness Solutions" at timestamp 2:53, accessed via https://www.youtube.com/watch?v=kKQLiqM42Xw&t=3s.
² Personal communication, Dr. Tamara Payne, Applied Optimization, Inc., July 2018.
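The noise model quoted in this snippet can be made concrete: Gaussian noise N(0, 0.04²) is added to the apparent visual magnitude, and the implied change in received flux follows from the standard magnitude relation F/F₀ = 10^(−0.4 Δm). A minimal sketch, where the 12.0 mag baseline is an assumed value for illustration:

```python
import random

def noisy_magnitude(m_true, sigma=0.04, rng=random):
    """Apparent visual magnitude with additive N(0, sigma^2) noise."""
    return m_true + rng.gauss(0.0, sigma)

def relative_flux(delta_m):
    """Flux ratio implied by a magnitude difference: 10^(-0.4 * dm)."""
    return 10.0 ** (-0.4 * delta_m)

rng = random.Random(0)
m_true = 12.0                                   # assumed satellite magnitude
m_obs = [noisy_magnitude(m_true, 0.04, rng) for _ in range(100_000)]
mean_m = sum(m_obs) / len(m_obs)
# A +0.04 mag error dims the implied flux by roughly 3.6 percent,
# which bounds how small a photometric modulation this noise floor hides.
print(round(mean_m, 2), round(1 - relative_flux(0.04), 3))
```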
Article
Current active satellite maneuver detection techniques can resolve maneuvers as quickly as fifteen minutes post-maneuver for large Δv when using angles-only optical tracking. Medium to small magnitude burn detection times range from 6 to 24 h or more. Small magnitude burns may be indistinguishable from natural perturbative effects if passive techniques are employed. Utilizing a photoacoustic signature detection scheme can allow for near-real-time maneuver detection and spacecraft parameter estimation. We define the acquisition of hypertemporal photometric data as photoacoustic sensing because the data can be played back as an acoustic signal. Studying the operational frequency spectra, profile, and aural perception of an active satellite event such as a thruster ignition or any subsystem operation can provide unique signature identifiers that support resident space object characterization efforts. A thruster ignition induces vibrations in a satellite body which can modulate reflected sunlight. If the reflected photon flux is sampled at a sufficient rate, the change in light intensity due to the propulsive event can be detected. Sensing vibrational mode changes allows for a direct timestamp of thruster ignition and shut-off events and thus makes possible the near-real-time estimation of spacecraft Δv and maneuver type if coupled with active observations immediately post-maneuver. This research also investigates the estimation of other impulse-related spacecraft parameters such as mass, specific impulse, exhaust velocity, and mass flow rate using impulse-momentum and work-energy methods. Experimental results to date have not yet demonstrated an operator-correlated detection of a propulsive event; however, the application of photoacoustic sensing has exhibited characteristics unique to hypertemporal photometry that are discussed alongside potential improvements to increase the probability of active satellite event detection.
Simulations herein suggest that large, potentially destructive modal displacements are required for optical sensor detection and thus more comprehensive vibration modeling and signal-to-noise ratio improvements should be explored.
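Once photoacoustic timestamps give the ignition and shut-off times, the impulse-momentum estimate the abstract mentions reduces in its simplest form to Δv = F·Δt/m, and the thrust equation F = ṁ·Isp·g₀ recovers the mass flow rate. A hedged sketch with all numbers assumed for illustration (the paper's actual estimation methods are more involved):

```python
def delta_v_impulse(thrust_n, t_on, t_off, mass_kg):
    """Impulse-momentum estimate dv = F * dt / m, assuming constant
    thrust and a propellant mass change small relative to mass_kg."""
    burn_s = t_off - t_on          # burn duration from the two timestamps
    return thrust_n * burn_s / mass_kg

def mass_flow_rate(thrust_n, isp_s, g0=9.80665):
    """mdot = F / (Isp * g0), rearranged from the thrust equation."""
    return thrust_n / (isp_s * g0)

# Assumed example: 10 N thruster, burn timestamped from t = 100 s to 220 s,
# 500 kg spacecraft, Isp = 220 s
dv = delta_v_impulse(10.0, 100.0, 220.0, 500.0)   # 2.4 m/s
mdot = mass_flow_rate(10.0, 220.0)                # ~4.6 g/s
print(round(dv, 2), round(mdot * 1000, 2))
```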
Conference Paper
Long-range telescopic video imagery of distant terrestrial scenes, aircraft, rockets, and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene, and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel, and acoustic beamforming configurations are all possible using RAS techniques, and when combined with high-definition video imagery it can help provide a more cinema-like, immersive viewing experience. A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak and often accompanied by a strong bias term. The optical signal is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low-pixel-count, photodiode-based RAS photoreceiver, and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity, but there are potential advantages to extracting audio directly from a video stream, including the straightforward ability to work with multiple AOMs (useful for acoustic beamforming), simpler optical configurations, and the potential to use certain preexisting video recordings. However, doing so requires overcoming significant limitations, typically including much lower sample rates, reduced sensitivity and dynamic range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real-time image processing software environment provides many of the capabilities needed for researching video-acoustic signal extraction. ATCOM is currently a powerful tool for the visual enhancement of telescopic views distorted by atmospheric turbulence. To explore the potential of acoustic signal recovery from video imagery, we modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we demonstrate and compare both readout techniques for several aerospace test scenarios to better show where each has advantages.
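The video-readout path described in this abstract, one brightness value per frame with the strong bias term stripped off, can be sketched as follows. This is a minimal illustration under assumed numbers (240 fps video, a 30 Hz acoustic modulation, a tiny 4-pixel region of interest), not the ATCOM implementation:

```python
import math

def frames_to_audio(frames, alpha=0.95):
    """One audio sample per video frame: take the spatial mean of each
    frame's pixels, then apply a first-order high-pass filter to strip
    the strong DC bias from the ambient illumination."""
    means = [sum(f) / len(f) for f in frames]     # per-frame brightness
    audio, prev_x, prev_y = [], means[0], 0.0
    for x in means[1:]:
        y = alpha * (prev_y + x - prev_x)         # y[n] = a*(y[n-1] + x[n] - x[n-1])
        audio.append(y)
        prev_x, prev_y = x, y
    return audio

# Assumed example: 240 fps video of a region whose brightness carries a
# 30 Hz acoustic modulation on top of a large constant illumination level.
fps = 240
frames = [[500.0 + 2.0 * math.sin(2 * math.pi * 30 * n / fps)] * 4
          for n in range(fps)]
audio = frames_to_audio(frames)
print(len(audio))   # one sample per frame transition
```

The sketch also makes the sample-rate limitation concrete: a 240 fps camera can only recover modulation below 120 Hz (the Nyquist limit), which is one reason the photodiode readout has been preferred in practice.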