Decreasing mean transistor size since 2000. https://doi.org/10.1371/journal.pone.0256245.g007

Source publication
Article
Full-text available
Gordon Moore famously observed that the number of transistors in state-of-the-art integrated circuits (units per chip) increases exponentially, doubling every 12–24 months. Analysts have debated whether simple exponential growth describes the dynamics of computer processor evolution. We note that the increase encompasses two related phenomena, inte...
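The doubling-time claim lends itself to a quick numerical check: fitting log2(transistor count) against year by least squares gives the doubling period as the reciprocal of the slope. The sketch below is illustrative only; the counts are rounded figures for a few well-known Intel processors, not the paper's dataset.

```python
import numpy as np

# Rounded transistor counts for a few well-known Intel CPUs
# (illustrative stand-ins, not the paper's dataset).
years = np.array([1971, 1982, 1993, 2000, 2010])
counts = np.array([2.3e3, 1.34e5, 3.1e6, 4.2e7, 1.17e9])

# Fit log2(count) = a*year + b; the doubling time is 1/a years.
a, b = np.polyfit(years, np.log2(counts), 1)
print(f"doubling time ≈ {1.0 / a:.1f} years")  # ≈ 2 years for these points
```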

Context in source publication

Context 1
... An added advantage of the methodology presented here is the ability to track changes in mean transistor size, which is the reciprocal of the density function; this differs from the conventional technology-node designation defined by the "minimum feature size" [72]. Data since 2000 exhibit a significant deceleration in miniaturization trends (Fig 7), and even relatively important advances, such as Intel's 3D tri-gate technology, have only kept transistor miniaturization on this decelerating trajectory. Over the last two decades, advances in transistor miniaturization have slowed substantially and appear to signify a departure from the earlier trend; see Table 3 for fitted model parameter values. ...
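As a concrete reading of the excerpt, mean transistor size is die area divided by transistor count, i.e. the reciprocal of areal density. A minimal sketch with hypothetical numbers (not the paper's data):

```python
# Mean transistor size as the reciprocal of areal density
# (hypothetical numbers, not the paper's data).
die_area_mm2 = 100.0       # assumed die area
transistor_count = 2.0e9   # assumed transistor count

density_per_mm2 = transistor_count / die_area_mm2
mean_size_um2 = 1e6 / density_per_mm2  # 1 mm^2 = 1e6 um^2

print(f"density   = {density_per_mm2:.3g} transistors/mm^2")
print(f"mean size = {mean_size_um2:.3g} um^2 per transistor")
```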

Citations

... The emergence of Artificial Intelligence (AI) in this era and the impact of Gordon Moore's law [15] indicate that computing power will double every 18 to 24 months; however, not everyone can fully understand the implications of the speed of this progress [16]. Semiconductor technology has surpassed the rules of Moore's Law [17] in response to the rapid advancement and growth of new technologies since 1979 and Alphabet's breakthrough in quantum computing, which has a computational state space of approximately 10¹⁶ dimensions [18,19]. These new innovative technologies and advancements are now linked with smart energy networks [20] and have enhanced computerized management, network communication, and intelligent analysis through perception and interconnection. ...
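For scale, the quoted figure follows from an n-qubit register spanning a 2^n-dimensional Hilbert space, so a state space of roughly 10¹⁶ dimensions corresponds to just over 50 qubits:

```python
import math

# An n-qubit register spans a 2**n-dimensional Hilbert space, so a
# ~1e16-dimensional state space needs about log2(1e16) qubits.
print(math.log2(1e16))  # ≈ 53.2, i.e. roughly 53 qubits
```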
... SPSS (v. 19.0) and AMOS (v. 19.0) were used to analyse the reliability and validity of the data. Examination of the similarities and differences between the populations using CB-SEM to test hypotheses, interpret results, and provide conclusions is an industry/research standard [99–101]. ...
Article
Full-text available
Background The literature for assessing online and offline shopping behaviours linked to intelligent robotic goods and services is inadequate. In this study, we applied the Theory of Planned Behaviour (TPB) model for guidance regarding how consumer behaviour affects purchase intentions for intelligent robotic goods and services. Methods Data from 408 respondents were gathered through an online questionnaire, binned into Online and Overall Shoppers, and analysed using SPSS, AMOS, and Covariance-Based Structural Equation Modelling (CB-SEM) software to evaluate the appropriateness of the measurements and to confirm data reliability, convergence, divergence, and validity. These tools were also used to track and test hypothesized relationships between the variables and model constructs used in this study. Results and conclusions The overarching outcomes of the data analyses indicated that the Ease of Usage, Brand Perception, and Product Pricing variables causally impacted the TPB model constructs, namely Attitude, Subjective Norms, and Perceived Behaviour Control, for the two populations tested with respect to their intention to purchase intelligent robotic goods and services. The reliability measurements for Ease of Usage, Brand Perception, and Product Pricing are discussed. The results are important for companies and future investors because they provide opportunities to study the complex relationships that ultimately drive consumer behaviour and the intention to purchase intelligent robotic goods and services.
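The reliability checks the abstract mentions are conventionally reported as Cronbach's alpha over each construct's items. A rough sketch of that computation on placeholder data (random numbers standing in for the 408 respondents; not the study's dataset):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Placeholder data: 408 simulated respondents answering a 4-item scale
# whose items share a common latent factor.
rng = np.random.default_rng(1)
latent = rng.normal(size=(408, 1))
items = latent + 0.5 * rng.normal(size=(408, 4))
print(f"alpha = {cronbach_alpha(items):.2f}")  # values above ~0.7 signal reliability
```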
... A tradeoff between node/feature size and maximum operating temperature is evident across the various technologies [8–11,13,31,44,48–50,54,57,266–274]. Making devices smaller with higher performance is costly and requires high-volume production. Such production was easy to achieve for silicon due to its prolific and widespread use. ...
Preprint
Full-text available
Silicon microelectronics, built on complementary metal oxide semiconductor (CMOS) technology, have changed nearly all aspects of human life, from communication to transportation, entertainment, and healthcare. Despite their widespread and mainstream use, current silicon-based devices suffer significant reliability issues at temperatures exceeding 125 °C. The emergent technological frontiers of space exploration, geothermal energy harvesting, nuclear energy, unmanned avionic systems, and autonomous driving will rely on control systems, sensors, and communication devices that operate at temperatures as high as 500 °C and beyond. At these extreme temperatures, active (heat exchangers, phase-change cooling) or passive (fins and thermal interface materials) cooling strategies add significant mass and complexity, which is often infeasible. Thus, new material solutions beyond conventional silicon CMOS devices are necessary for high-temperature, resilient electronic systems. Accomplishing this will require a united effort to explore the development, integration, and ultimately manufacturing of non-silicon-based logic and memory technologies, non-traditional metals for interconnects, and ceramic packaging technology.
... In the 1960s, the engineer Gordon Moore predicted that the number of transistors and microprocessors on an integrated circuit surface would double every two years at the same manufacturing cost [15]. For more than 50 years, this prediction proved to be accurate, creating an endless stream of technological innovations based on ever faster, more powerful, higher-performance "High Tech". ...

... However, ML models are subject to the curse of dimensionality, and training often involves solving nonlinear optimization problems, which makes training times a significant concern. Nevertheless, with processing units such as CPUs and GPUs becoming more performant each year [14], the impact of ML will most likely increase even further. ...

Article
Full-text available
The RASSCAS Laboratory (Applied Research in Social Sciences to Design a Sustainable Anthropocene) is a human sciences laboratory based in a digital engineering school. The idea of this laboratory is to guide engineering students towards models of design and innovation that respond both to the ethical desires of this young generation and to the need for responsible innovation imposed by the context of access to development that is fair for mankind and sustainable for life and the planet. This article is first and foremost a profession of faith, that of researchers who have decided to commit themselves to the path of development that respects life. It is also a call to become aware of the need to modify traditional design models. It is a call for a paradigm shift from anthropocentric models to ones that give equal place to human and non-human living beings.
... However, the increasing demands to resolve wider ranges of spatial and temporal scales with great precision make these simulations expensive and energy-consuming. Furthermore, feature sizes of classical Central Processing Units (CPUs) are expected to converge over the next decade [1] because transistors cannot be shrunk much further, putting an end to Moore's Law [2]. With regard to increasing computing power, Quantum Computers (QCs) promise to address some future hardware challenges. ...
Preprint
Full-text available
The paper presents a variational quantum algorithm to solve initial-boundary value problems described by second-order partial differential equations. The approach uses hybrid classical/quantum hardware that is well suited for quantum computers of the current noisy intermediate-scale quantum era. The partial differential equation is initially translated into an optimal control problem with a modular control-to-state operator (ansatz). The objective function and its derivatives required by the optimizer can efficiently be evaluated on a quantum computer by measuring an ancilla qubit, while the optimization procedure employs classical hardware. The focal aspect of the study is the treatment of boundary conditions, which is tailored to the properties of the quantum hardware using a correction technique. For this purpose, the boundary conditions and the discretized terms of the partial differential equation are decomposed into a sequence of unitary operations and subsequently compiled into quantum gates. The accuracy and gate complexity of the approach are assessed for second-order partial differential equations by classically emulating the quantum hardware. The examples include steady and unsteady diffusive transport equations for a scalar property in combination with various Dirichlet, Neumann, or Robin conditions. The results of this flexible approach display a robust behavior and a strong predictive accuracy in combination with a remarkable polylog complexity scaling in the number of qubits of the involved quantum circuits. Remaining challenges refer to adaptive ansatz strategies that speed up the optimization procedure.
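A heavily simplified, classically emulated sketch of the hybrid loop described above: a parameterized trial state is prepared, a quadratic cost built from a discretized operator is evaluated, and a classical optimizer updates the parameters. The 1D Dirichlet Laplacian, the entanglement-free rotation ansatz, and the cost functional here are stand-ins, not the paper's construction.

```python
import numpy as np
from scipy.optimize import minimize

n_qubits = 3
dim = 2 ** n_qubits

# Toy 1D Laplacian with Dirichlet boundaries as the "discretized PDE operator".
A = 2 * np.eye(dim) - np.eye(dim, k=1) - np.eye(dim, k=-1)
b = np.ones(dim) / np.sqrt(dim)  # normalized right-hand side

def ansatz_state(theta):
    """Stand-in ansatz: single-qubit Y rotations (no entanglers, for brevity)
    applied to |0...0>, emulated with dense linear algebra."""
    state = np.zeros(dim)
    state[0] = 1.0
    for k, t in enumerate(theta):
        q = k % n_qubits  # qubit this rotation acts on
        ry = np.array([[np.cos(t / 2), -np.sin(t / 2)],
                       [np.sin(t / 2),  np.cos(t / 2)]])
        op = np.array([[1.0]])
        for j in range(n_qubits):
            op = np.kron(op, ry if j == q else np.eye(2))
        state = op @ state
    return state

def objective(theta):
    """Quadratic cost <psi|A|psi> - 2<b|psi>, the style of functional a
    variational PDE solver minimizes; on hardware it would be estimated
    from ancilla-qubit measurements."""
    psi = ansatz_state(theta)
    return float(psi @ A @ psi - 2.0 * (b @ psi))

theta0 = np.random.default_rng(0).uniform(0, 2 * np.pi, size=2 * n_qubits)
res = minimize(objective, theta0, method="COBYLA")  # gradient-free classical outer loop
print("optimized cost:", res.fun)
```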
... The advent of the transistor has revolutionized the progression of human civilization over the last century. The consistent miniaturization of the transistor over the past few decades has made it possible to store and process vast amounts of data [1]. However, the quantity of data to be processed has been growing exponentially in parallel [2]. ...
Article
Full-text available
The precession of a ferromagnet leads to the injection of spin current and heat into an adjacent non-magnetic material. In addition, spin-orbit entanglement causes an additional charge current injection. Such a device has recently been proposed, in which a quantum spin Hall insulator (QSHI) in proximity to a ferromagnetic insulator (FI) and a superconductor (SC) leads to the pumping of charge, spin, and heat. Here we build a circuit-compatible, Verilog-A-based compact model for the QSHI-FI-SC device capable of generating the two topologically robust modes enabling the device operation. Our model also captures the dependence on the ferromagnetic precession, drain voltage, and temperature with excellent (>99%) accuracy.
... The performance improvement of processor, memory, logic, analog, and power integrated circuits is being driven by three-dimensional (3D) integration technology, which involves vertically stacking multiple complementary metal-oxide-semiconductor (CMOS) large-scale integration chips. This approach has become instrumental in extending Moore's Law [1–3]. In the context of 3D electronic integrated circuits (EICs), chip-to-chip connections can be easily achieved using through-silicon vias (TSVs) [4]. ...
Article
Full-text available
We present a high-efficiency silicon grating coupler design based on a left–right mirror-symmetric grating and a metal mirror. The coupler achieves nearly perfect 90-degree vertical coupling. When two SOI chips are placed face to face with a vertical working distance of 50 μm, the chip-to-chip interlayer coupling efficiency reaches as high as 96%. When the vertical working distance ranges from 45 μm to 55 μm, the coupling loss remains below 1 dB. We also verified the effectiveness of our designed vertical coupler through 3D FDTD full-model simulation. The results demonstrate that our proposed vertical coupling structure represents a high-efficiency solution for 3D optical interconnects. The integration of multiple photonic chips in a 3D package with vertical optical and electrical interconnects is also feasible in the foreseeable future.
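As a quick consistency check on the quoted figures, power coupling efficiency converts to insertion loss via loss_dB = -10·log10(η), so 96% efficiency is about 0.18 dB, comfortably below the 1 dB bound:

```python
import math

def coupling_loss_db(efficiency: float) -> float:
    """Insertion loss in dB for a given power coupling efficiency."""
    return -10.0 * math.log10(efficiency)

print(f"{coupling_loss_db(0.96):.2f} dB")   # ≈ 0.18 dB at 96% efficiency
print(f"{coupling_loss_db(0.794):.2f} dB")  # 1 dB corresponds to ~79.4% efficiency
```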
... This exponential growth requires progressive miniaturization of transistors, which are now commonly around 7–10 nm in size, with a 2 nm transistor announced by IBM [41]. If the empirical prediction of Moore's law continues to hold, in the coming decades the size of a single transistor will approach the smallest possible size, as shown in Figure 1.6, reaching dimensions comparable to a few atoms; fabrication at that physical scale is not feasible because the commonly used materials exhibit completely different properties there [38,42–44]. In fact, in such an atomic-dimension device quantum mechanics plays a dominant role, and quantum tunneling makes it difficult to prevent electrons from moving freely between the drain and the source of the transistor. ...
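The excerpt's projection can be made concrete with back-of-the-envelope arithmetic: if areal density doubles every two years, the linear feature size shrinks by a factor of √2 per period, which bounds how long a 7 nm feature takes to reach roughly atomic dimensions. The starting size, atomic scale, and doubling period below are illustrative assumptions.

```python
import math

feature_nm = 7.0      # assumed current feature size
atomic_nm = 0.5       # roughly the silicon lattice scale
doubling_years = 2.0  # assumed density-doubling period

# Density doubling halves the area per transistor, so the linear dimension
# shrinks by sqrt(2) each period: L(t) = L0 * 2**(-t / (2 * T)).
years = 2.0 * doubling_years * math.log2(feature_nm / atomic_nm)
print(f"≈ {years:.0f} years to reach atomic dimensions")  # ≈ 15 years here
```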
Preprint
Full-text available
Quantum computers have the potential to expand the utility of lattice gauge theory for investigating non-perturbative particle physics phenomena that cannot be accessed with standard Monte Carlo methods due to the sign problem. Thanks to the qubit, quantum computers can store the Hilbert space more efficiently than classical computers. This makes the Hamiltonian approach computationally feasible, freeing it entirely from the sign problem. What current noisy intermediate-scale quantum hardware can achieve, however, is still under investigation, and we therefore study the energy spectrum and the time evolution of an SU(2) theory using two kinds of quantum hardware: the D-Wave quantum annealer and the IBM gate-based quantum hardware.
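The time-evolution part of such studies amounts to approximating exp(-iHt) by products of short evolutions (Trotterization) before compiling to gates. A minimal classical emulation with a toy two-term Hamiltonian (not the paper's SU(2) Hamiltonian) illustrates the idea and the step-size error:

```python
import numpy as np
from scipy.linalg import expm

# Toy two-qubit Hamiltonian H = A + B with non-commuting parts, standing in
# for the electric and magnetic terms of a lattice gauge Hamiltonian.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A = np.kron(X, X)                                  # coupling ("magnetic"-like)
B = np.kron(Z, np.eye(2)) + np.kron(np.eye(2), Z)  # on-site ("electric"-like)
H = A + B

t, n_steps = 1.0, 20
exact = expm(-1j * H * t)

# First-order Trotter: (e^{-iA t/n} e^{-iB t/n})^n
step = expm(-1j * A * t / n_steps) @ expm(-1j * B * t / n_steps)
trotter = np.linalg.matrix_power(step, n_steps)

print("Trotter error:", np.linalg.norm(trotter - exact))  # shrinks as n_steps grows
```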
... Advanced logic technology has evolved from planar structures to 3D structures, similar to memory technology. 41,42 The core components of any logic technology are the front-end-of-line (FEOL), which defines the device structure, and the back-end-of-line (BEOL), which includes the wiring levels that connect numerous devices together to form circuits. The middle-of-line (MOL), which focuses on contacts between the FEOL and BEOL, is equally important, but we do not discuss the MOL in this paper. ...