Figure 4 - available via license: CC BY
CFP Optical Transceiver Block Diagram.

Source publication
Article
Full-text available
The demand for integrated telecommunication network infrastructure has increased, and 100 Gbps optical transceivers are a critical part of this infrastructure. In this paper, an efficient firmware design scheme is proposed for a 100 Gbps C form-factor pluggable (CFP) optical transceiver based on the multi-source agreement standard for optical trans...

Context in source publication

Context 1
... this section describes the hardware structure of the CFP used in the firmware design. Figure 4 is a block diagram of the entire hardware architecture. In the proposed firmware design technique, an FPGA is used for the MDIO interface of the CFP and a standard memory configuration is employed. ...
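The excerpt above notes that the FPGA implements the CFP's MDIO management interface with a standard register (memory) configuration. As a purely illustrative sketch of how host firmware might access such a register map over an IEEE 802.3 Clause 45 style MDIO interface, the C fragment below performs an address-then-read sequence; the mdio_c45_* primitives, the device address, and the register offset are hypothetical placeholders, not the API of the cited design.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch only: reading a CFP management register over an
 * FPGA-implemented MDIO (IEEE 802.3 Clause 45) interface. The primitives
 * below are stubs standing in for a hardware-specific MDIO controller;
 * names, device address and register offset are hypothetical. */

/* Stub: issue a Clause 45 ADDRESS frame (latches the 16-bit register
 * address in the addressed device). */
static void mdio_c45_address(uint8_t port_addr, uint8_t devad, uint16_t reg)
{
    (void)port_addr; (void)devad; (void)reg;   /* hardware access omitted */
}

/* Stub: issue a Clause 45 READ frame and return the 16-bit data word. */
static uint16_t mdio_c45_read(uint8_t port_addr, uint8_t devad)
{
    (void)port_addr; (void)devad;
    return 0;                                   /* hardware access omitted */
}

/* Read one register from the CFP module's MDIO register space. */
static uint16_t cfp_read_register(uint8_t port_addr, uint16_t reg_addr)
{
    const uint8_t devad = 1;   /* assumed MMD for the CFP register map */
    mdio_c45_address(port_addr, devad, reg_addr);
    return mdio_c45_read(port_addr, devad);
}

int main(void)
{
    /* Hypothetical example: read a register at offset 0x8000. */
    uint16_t value = cfp_read_register(0x00, 0x8000);
    printf("register 0x8000 = 0x%04x\n", value);
    return 0;
}
```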

Citations

... CPUs and ML-specialized GPUs, application-specific integrated circuits (ASICs) (Capra et al., 2019), and field-programmable gate arrays (FPGAs) (Kim et al., 2020), which target training neural networks/ML models on massive training data or inferring the attributes of new samples (Freund, 2022) (article from www.forbes.com/), are the solutions to the demands of ML. ...
Article
Purpose: This paper aims to review in-memory computing (IMC) for machine learning (ML) applications from the history, architecture and options aspects. In this review, the authors investigate different architectural aspects and collect and provide comparative evaluations.
Design/methodology/approach: The authors collect over 40 recent IMC papers related to hardware design and optimization techniques and classify them into three optimization option categories: optimization through graphics processing units (GPUs), optimization through reduced precision and optimization through hardware accelerators. They then summarize each technique in terms of the data set it was applied to, how it is designed and what the design contributes.
Findings: ML algorithms are potent tools accommodated on IMC architectures. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, its energy efficiency is limited by the overhead of its excessive flexibility. Hardware accelerators (field-programmable gate arrays and application-specific integrated circuits), on the other hand, win on energy efficiency, but an individual accelerator often adapts exclusively to a single ML approach (family). From a long hardware-evolution perspective, hardware/software collaborative heterogeneous design on hybrid platforms is an option for researchers.
Originality/value: IMC optimization enables high-speed processing, increases performance and allows massive volumes of data to be analyzed in real time. This work reviews IMC and its evolution, then categorizes three optimization paths for the IMC architecture to improve performance metrics.
... The monitored values are converted to the units defined in the standard document when they are updated in that register. In this design, the DDM-related specifications for the SFP optical transceiver defined in standard document SFF-8472 were implemented, and the DDM values are calculated using higher-order polynomials to increase DDM accuracy [20]. The DDM formula representing the received optical power is shown in Equation (1): ...
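The excerpt references a higher-order polynomial calibration for DDM quantities such as received optical power. As a minimal sketch, assuming the SFF-8472 external-calibration form (a fourth-order polynomial over the raw 16-bit ADC reading, yielding Rx power in units of 0.1 µW), the calculation could look like the C fragment below; the function name and coefficient values are illustrative, and the cited design's Equation (1) may use a different order or scaling.

```c
#include <stdint.h>
#include <stdio.h>

/* Hedged sketch of a polynomial DDM calculation for received optical
 * power, in the SFF-8472 external-calibration style: a fourth-order
 * polynomial over the raw 16-bit ADC reading, producing Rx power in
 * units of 0.1 uW. The coefficients below are made-up placeholders;
 * real values come from the module's calibration area. */

/* Horner evaluation of c4*x^4 + c3*x^3 + c2*x^2 + c1*x + c0. */
static float ddm_rx_power_0p1uW(uint16_t adc_raw, const float c[5])
{
    float x = (float)adc_raw;
    return (((c[4] * x + c[3]) * x + c[2]) * x + c[1]) * x + c[0];
}

int main(void)
{
    /* Placeholder coefficients: identity mapping (power = raw count). */
    const float coeff[5] = { 0.0f, 1.0f, 0.0f, 0.0f, 0.0f };
    uint16_t adc_raw = 1234;                    /* example raw ADC value */
    float rx_0p1uW = ddm_rx_power_0p1uW(adc_raw, coeff);
    printf("Rx power: %.1f uW\n", rx_0p1uW * 0.1f);
    return 0;
}
```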
Article
Full-text available
A quad small form-factor pluggable 28 Gbps (QSFP28) optical transceiver design scheme is proposed. It is capable of transmitting data at 50 Gbps over distances of up to 40 km using four-level pulse-amplitude modulation (PAM4) signals. The proposed scheme is designed using a combination of electro-absorption-modulated lasers in the transmitter optical sub-assembly and low-cost positive-intrinsic-negative (PIN) photodiodes in the receiver optical sub-assembly to achieve standard performance at low cost. Moreover, the hardware and firmware design schemes used to implement the optical transceiver are presented. The results confirm the effectiveness of the proposed scheme and the performance of the manufactured optical transceiver, thereby confirming its applicability to real industrial sites.
Chapter
Full-text available
Traditional computing hardware is working to meet the extensive computational load presented by the rapidly growing Machine Learning (ML) and Artificial Intelligence algorithms such as Deep Neural Networks and Big Data. In order to obtain hardware solutions that meet the low-latency and high-throughput computational needs of these algorithms, Non-Von Neumann computing architectures such as In-memory Computing (IMC) have been extensively researched and experimented with over the last five years. This study analyses and reviews works designed to accelerate Machine Learning tasks. We investigate different architectural aspects and directions and provide our comparative evaluations. We further discuss the challenges and limitations of IMC research and present possible directions.
Keywords: In-memory computing; Machine learning; Deep neural network; In-memory accelerator