Figure 3 - uploaded by Zoran Stamenkovic
Functional block diagram of the video codec

Figure 3 shows a functional block diagram of the video codec. The first coding stage is a prediction that uses previously coded parts of the image; the difference to the original image yields the residual. The residual is transformed by a 4x4 DCT-like matrix prior to quantization. The quantization step regulates the compression strength, with a direct impact on image quality. In the second stage, a context-adaptive variable-length coding reduces the remaining redundancy of the residual information. The last step packetizes the bit stream as an MPEG-2 transport stream with 188 bytes per packet. In the proposed system, the encoder core receives the incoming image data directly from the digital camera via a parallel interface. The compressed data is delivered to the ARC processor through an AMBA AHB master module. An AHB slave module in the decoder core collects the transmitted data. After decompression, the data is sent to the video output core. The image size is 640x480 pixels at a rate of 30 frames per second. A data rate of 5 to 10 Mbit/s is targeted.

G. High-speed USB Controller

The two high-speed USB interfaces have been implemented with a ULPI PHY chip (USB3300) on the board and a digital USB 2.0 On-The-Go Single Device Controller [7] within the FPGA. Each interface is a dual-role device that meets revision 2.0 of the USB specification and the On-The-Go supplement. It handles byte transfers autonomously and bridges the USB interface to a PVCI interface. The USBHS-OTG-SD can be customized and optimized for a specific application. The design is strictly
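The "4x4 DCT-like matrix" transform of the residual can be sketched as follows. The paper does not specify the exact matrix, so this sketch assumes the well-known H.264-style 4x4 integer core transform, which matches the description: Y = H * X * H^T over a 4x4 residual block, using integer arithmetic only.

```python
# Sketch of a 4x4 DCT-like integer transform on a residual block.
# Assumption: the standard H.264-style core transform matrix, used here
# for illustration; the codec in the paper may use a different matrix.

H = [
    [1,  1,  1,  1],
    [2,  1, -1, -2],
    [1, -1, -1,  1],
    [1, -2,  2, -1],
]

def matmul(a, b):
    """Multiply two 4x4 integer matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(m):
    """Transpose a 4x4 matrix."""
    return [list(row) for row in zip(*m)]

def forward_transform(residual):
    """Apply Y = H * X * H^T to a 4x4 residual block."""
    return matmul(matmul(H, residual), transpose(H))

# A flat residual block concentrates all energy in the DC coefficient,
# which is what the subsequent quantization and CAVLC stages exploit.
flat_block = [[5] * 4 for _ in range(4)]
coeffs = forward_transform(flat_block)
# coeffs[0][0] is 80 (= 16 * 5); all other coefficients are 0.
```

Because the matrix contains only small integers, the transform needs no multipliers beyond shifts and adds, which is why this class of transform is attractive for a low-latency hardware codec.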


Source publication
Conference Paper
Full-text available
The paper presents a system-on-chip (SOC) aimed to provide the fast video stream processing and wireless transfer for automotive applications, e.g. from a truck's trailer to the driver cabin. This SOC is based on the ARC processor and a custom very-low-latency video codec. It is verified and implemented in FPGA on a custom printed-circuit-board. Th...

Similar publications

Conference Paper
Full-text available
The growing amount of security cameras increases the chance that there is important video footage available for the reconstruction of an incident. Using the right methods for analysis of large amounts of videos, it is possible to use this relevant information. In collaboration with the Netherlands Forensic Institute (NFI), research is do...
Chapter
Full-text available
Detecting humans in films and videos is a challenging problem owing to the motion of the subjects, the camera and the background and to variations in pose, appearance, clothing, illumination and background clutter. We develop a detector for standing and moving people in videos, testing several different motion coding schemes and showing empiricall...
Article
Full-text available
The widespread use of smartphones which are equipped with cameras and an Internet connection allows development of applications that might be used as parking assistance, or to help humans coordinate their actions with something that they cannot see directly. We propose an application that uses a distributed architecture with carefully designed techniques t...

Citations

... All three sensors use different methods to detect objects, and, therefore, they have different strengths and weaknesses: Camera: The camera is one of the most versatile sensors in an autonomous car. It can provide a human driver with additional images (Stamenkovic Z. et al. 2012) and can be used by an autonomous car to make better decisions. For example, traffic sign detection (Huang S.C. et al. 2017) and vehicle detection (Caraffi C. et al. 2012) are necessary to drive safely on the road. ...
Article
Full-text available
Having systems that can adapt themselves in case of faults or changing environmental conditions is of growing interest for industry and especially for the automotive industry considering autonomous driving. In autonomous driving, it is vital to have a system that is able to cope with faults in order to enable the system to reach a safe state. In this paper, we present an adaptive control method that can be used for this purpose. The method selects alternative actions so that given goal states can be reached, providing the availability of a certain degree of redundancy. The action selection is based on weight models that are adapted over time, capturing the success rate of certain actions. Besides the method, we present a Java implementation and its validation based on two case studies motivated by the requirements of the autonomous driving domain. We show that the presented approach is applicable both in case of environmental changes and in case of faults occurring during operation. In the latter case, the method provides an adaptive behavior very close to the optimal selection.
Conference Paper
The availability of advanced driver assistance systems (ADAS), for safety and well-being, is becoming increasingly important for avoiding traffic accidents caused by fatigue, stress, or distractions. For this reason, automatic identification of a driver from among a group of various drivers (i.e. real-time driver identification) is a key factor in the development of ADAS, mainly when the driver's comfort and security are also to be taken into account. The main focus of this work is the development of embedded electronic systems for in-vehicle deployment of driver identification models. We developed a hybrid model based on artificial neural networks (ANN) and cepstral feature extraction techniques, able to recognize the driving style of different drivers. Results obtained show that the system is able to perform real-time driver identification using non-intrusive driving behavior signals such as brake pedal signals and gas pedal signals. The identification of a driver from within groups with a reduced number of drivers yields promising identification rates (e.g. a 3-driver group yields 84.6 %). However, real-time development of ADAS requires very fast electronic systems. To this end, an FPGA-based hardware coprocessor for acceleration of the neural classifier has been developed. The coprocessor core is able to compute the whole ANN in less than 4 μs.
Article
Full-text available
Driving vehicles with one or more passive trailers has difficulties in both forward and backward motion due to inter-unit collisions, jackknife, and lack of visibility. Consequently, advanced driver assistance systems (ADAS) for multi-trailer combinations can be beneficial to accident avoidance as well as to driver comfort. The ADAS proposed in this paper aims to prevent unsafe steering commands by means of a haptic handwheel. Furthermore, when driving in reverse, the steering-wheel and pedals can be used as if the vehicle was driven from the back of the last trailer with visual aid from a rear-view camera. This solution, which can be implemented in drive-by-wire vehicles with hitch angle sensors, profits from two methods previously developed by the authors: safe steering by applying a curvature limitation to the leading unit, and a virtual tractor concept for backward motion that includes the complex case of set-point propagation through on-axle hitches. The paper addresses system requirements and provides implementation details to tele-operate two different off- and on-axle combinations of a tracked mobile robot pulling and pushing two dissimilar trailers.