Figure 3 - uploaded by Michael K. Birbas
Baseband Digital Receiver Diagram 

Source publication
Conference Paper
Full-text available
This paper presents the algorithms and corresponding hardware architectures developed in the context of the Nexgen Miliwave project, which together compose the digital baseband processor of a 60 GHz point-to-point link. The Nexgen baseband processor provides all basic functionality required of a digital transmitter and receiver, including filtering, synchro...

Context in source publication

Context 1
... will focus on the inner receiver, as in single-carrier systems most of the required signal processing takes place after signal reception. A functional block diagram of the entire digital baseband receiver (inner receiver and LDPC decoder) is depicted in Figure 3. ...

Similar publications

Conference Paper
Full-text available
This paper clarifies the class of second-order digital filters with two equal second-order modes. We consider three cases for second-order digital filters: complex-conjugate poles, distinct real poles, and multiple real poles. We derive a general expression for the transfer function of second-order digital filters with two equal second-order modes....
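The three pole configurations named in this abstract (complex-conjugate, distinct real, multiple real) can be distinguished directly from the denominator coefficients of the transfer function H(z) = B(z) / (1 + a1*z^-1 + a2*z^-2). A minimal sketch, with our own function name and not taken from the paper:

```python
import numpy as np

def classify_poles(a1, a2):
    """Classify the poles of a second-order digital filter whose
    denominator is 1 + a1*z^-1 + a2*z^-2 (illustrative helper)."""
    disc = a1 ** 2 - 4 * a2           # discriminant of z^2 + a1*z + a2
    poles = np.roots([1.0, a1, a2])
    if disc < 0:
        kind = "complex conjugate"
    elif disc > 0:
        kind = "distinct real"
    else:
        kind = "repeated real"        # the "multiple real poles" case
    return kind, poles
```

For example, `classify_poles(-1.0, 0.25)` reports a repeated real pole at z = 0.5, since the denominator factors as (1 - 0.5*z^-1)^2.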
Article
Full-text available
In this paper we propose a generalized method of structural identification of biomedical signals with locally concentrated properties using a digital non-linear filter. The experimental verification of the detecting function was performed by using different ways to describe the model of the desired class of structural elements.
Conference Paper
Full-text available
This paper presents an analytic design technique for 2D IIR filters with elliptical symmetry, which have useful applications in image processing. The design is based on efficient elliptic digital filters, regarded as 1D prototypes, to which specific complex frequency transformations are applied; this allows us to obtain directly a factored form of the...
Article
Full-text available
Regression verification is the problem of deciding whether two similar programs are equivalent under an arbitrary yet equal context, given some definition of equivalence. So far this problem has only been studied for the case of single-threaded deterministic programs. We present a method for regression verification of multi-threaded programs. Speci...

Citations

... Their task graph often exhibits a high level of parallelism, which can be exploited by individual cores to achieve higher throughput. One such application is the digital baseband processor of a 60 GHz, 1.6 Gbps point-to-point link, developed under the Nexgen Miliwave research project [5]. This application requires processing capable of compensating for impairments due to the millimeter-wave front-end while operating at a throughput rate in excess of one gigabit per second, with modest hardware cost. ...
Conference Paper
Contemporary and next-generation wireless, wired, and optical telecommunication systems rely on sophisticated forward error-correction (FEC) schemes to facilitate operation at particularly low Bit Error Rate (BER). The ever-increasing demand for high information throughput, combined with requirements for moderate cost and low-power operation, renders the design of FEC systems a challenging task. The definition of the parity-check matrix of an LDPC code is crucial, as it determines both the computational complexity of the decoder and its error-correction capabilities. However, the characterization of the corresponding code at low BER is a computationally intensive task that cannot be carried out with software simulation. We demonstrate procedures that involve hardware acceleration to facilitate code design. In addition to code design, verification of operation at low BER requires strategies to prove correct operation of the hardware, rendering FPGA prototyping a necessity. This paper demonstrates design techniques and verification strategies that allow proof of operation of a gigabit-rate FEC system at low BER, exploiting state-of-the-art Virtex-7 technology. It is shown that, by occupying 70%-80% of the slices on a Virtex-7 XC7V485T device, iterative decoding at gigabit rate can be verified.
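The claim that low-BER characterization "cannot be carried out with software simulation" comes down to a bit budget: a Monte Carlo estimate needs on the order of 100 errors to be trustworthy, so resolving a BER near 1e-10 requires roughly 1e12 simulated bits. A software baseline for the uncoded BPSK/AWGN case (our own sketch, not the paper's flow) makes the contrast concrete:

```python
import numpy as np

def ber_bpsk_awgn(ebno_db, nbits=200_000, seed=0):
    """Monte Carlo BER estimate for uncoded BPSK over an AWGN channel
    (illustrative software baseline; function name is ours)."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, nbits)
    symbols = 1.0 - 2.0 * bits                   # map bit 0 -> +1, bit 1 -> -1
    ebno = 10.0 ** (ebno_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * ebno)), nbits)
    decisions = (symbols + noise) < 0.0          # hard decision: negative -> bit 1
    return float(np.mean(decisions != bits.astype(bool)))
```

At 0 dB Eb/N0 this returns an estimate near the theoretical Q(sqrt(2)) ≈ 7.9e-2 within seconds, but extending the same loop down to the error-floor region of a coded system is exactly what forces the FPGA-accelerated approach described above.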
... FOR A 60GHZ GIGABIT LINK The presented design, verification, and optimization methodology has been successfully applied to the development of a digital baseband processor of a 60 GHz point-to-point link under the Nexgen Miliwave research project [35]. Techniques were needed that could compensate for impairments due to the millimeter-wave front-end and yet support a throughput rate of more than one gigabit per second, with modest hardware cost. ...
Conference Paper
Full-text available
This paper introduces a methodology for prototyping forward error-correction (FEC) architectures, oriented toward system verification and characterization. A complete design flow is described that satisfies the requirements for error-free hardware design and acceleration of FEC simulations. FPGA devices give the designer the ability to observe rare events, owing to the tremendous speed-up of FEC operations. A Matlab-based system assists in investigating the impact of very rare decoding-failure events on FEC system performance and in finding solutions aimed at parameter optimization and BER-performance improvement of LDPC codes in the error-floor region. Furthermore, the development of an embedded system, which offers remote access to the system under test and automation of the verification process, is explored. The prototyping approach presented here exploits the high processing speed of FPGA-based emulators and the observability and usability of software-based models.
Chapter
This chapter demonstrates the high energy efficiency of the proposed Domain Specific Instruction set Processor (DSIP) architecture concept on a challenging, very-high-throughput Finite Impulse Response (FIR) filter for 60 GHz applications. To this end, HardSIMD and SoftSIMD datapath implementations are proposed. Section 5.1 motivates this case study and summarizes related work on digital 60 GHz baseband implementations. The targeted matched filter and the flexibility requirements of this functional block are explained in Sect. 5.2. In Sect. 5.3, the applied algorithm optimizations and the characteristics of the considered algorithm are shown. The proposed HardSIMD and SoftSIMD DSIP architecture instances are presented in Sect. 5.4. Software mapping and hardware implementation results are given in Sect. 5.5. Section 5.6 compares the results to Application Specific Integrated Circuit (ASIC) references and to other programmable implementations. Finally, Sect. 5.7 concludes the chapter. The chapter also includes an appendix presenting preliminary experimental results obtained by applying the proposed back-end semi-custom design approach (see the appendix of Chap. 3).
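The throughput problem the chapter's SIMD datapaths address can be pictured with a toy model of an unrolled FIR that produces P outputs per iteration, mimicking P parallel lanes. This is only a behavioral sketch under our own naming; the actual HardSIMD/SoftSIMD datapaths are far more elaborate:

```python
import numpy as np

def fir_block(x, h, P=4):
    """Behavioral model of an unrolled FIR datapath: each outer
    iteration computes P output samples, as P hardware lanes would.
    Computes y[n] = sum_k h[k] * x[n-k] (zero initial state)."""
    N = len(h)
    xp = np.concatenate([np.zeros(N - 1), np.asarray(x, dtype=float)])
    y = np.empty(len(x))
    for base in range(0, len(x), P):              # one "cycle" of the datapath
        for lane in range(min(P, len(x) - base)): # P parallel MAC lanes
            n = base + lane
            y[n] = np.dot(h[::-1], xp[n:n + N])   # dot over the delay line
    return y
```

The output matches a direct convolution truncated to the input length; the point of the model is that doubling P halves the number of outer iterations, which is the throughput lever the chapter's datapaths pull in hardware.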
Article
This paper introduces hardware architectures for encoding Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) codes. The proposed encoders are based on appropriate factorization and subsequent compression of the involved matrices by means of a novel technique that exploits features of recursively constructed QC-LDPC codes. The approach yields linear encoding-time complexity and requires a constant number of clock cycles to compute the parity bits for all constructed codes of various lengths that stem from a common base matrix. The proposed architectures are flexible: they are parameterized and can support multiple code rates and codes of different lengths simply by appropriate initialization of memories and selection of data-bus widths. Implementation results show that the proposed encoding technique is more efficient for some LDPC codes than previously proposed solutions. Both serial and parallel architectures are proposed. Hardware instantiations of the proposed serial encoders demonstrate high throughput with low area complexity for codewords of many thousands of bits, achieving area reduction compared to prior art. Furthermore, parallelization is shown to efficiently support multi-Gbps solutions at the cost of a moderate area increase. The proposed encoders outperform the current state of the art in throughput-area ratio and area-time complexity by factors of 10 up to 80 for codes of comparable error-correction strength.
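The quasi-cyclic structure such encoders exploit is that every Z x Z block of the matrix is either all-zero or a circulant, so a block is fully described by a single shift value and a matrix-vector product over GF(2) reduces to cyclic shifts and XORs. A toy illustration of that reduction (helper name and shift-sign convention are ours, not the paper's):

```python
import numpy as np

def qc_mat_vec(shifts, x, Z):
    """Multiply a GF(2) vector x by a quasi-cyclic matrix described by a
    grid of circulant shift values; -1 denotes an all-zero Z x Z block.
    Each nonzero block contributes a cyclic shift of one input block,
    accumulated by XOR -- the operation QC-LDPC encoders build on."""
    rows = len(shifts)
    y = np.zeros(rows * Z, dtype=int)
    for r, row in enumerate(shifts):
        for c, s in enumerate(row):
            if s >= 0:
                block = x[c * Z:(c + 1) * Z]
                y[r * Z:(r + 1) * Z] ^= np.roll(block, -s)  # shift-by-s circulant
    return y
```

Because only the shift values need storing, the matrix description is compressed by a factor of about Z^2 relative to a dense binary matrix, which is what makes the compact, parameterized encoder memories described above possible.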