Table 3 - uploaded by José Nelson Amaral
Number of Prefixes after Table Expansion

Source publication
Conference Paper
Full-text available
Caching recently referenced IP addresses and their forwarding information is an effective strategy to increase routing lookup speed. This paper proposes a multizone non-blocking pipelined cache for IP routing lookup that achieves lower miss rates compared to previously reported IP caches. The two-stage pipeline design provides a half-prefix half...

Context in source publication

Context 1
... the prefix cache requires full LUT expansion, while MPC requires only a partial expansion (as described in Section 4). Table 3 compares the total number of prefixes in the LUT after expansion for a prefix cache and for MPC. MPC uses a small buffer (OMB) to hide the miss penalty. ...
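The expansion the table counts can be illustrated with a small sketch. This is a hypothetical Python illustration, not the paper's algorithm: it splits a parent prefix into the disjoint sibling prefixes left over once a more specific child is carved out, which is the operation that inflates a fully expanded LUT.

```python
# Hypothetical sketch of prefix expansion for a prefix cache.
# A parent prefix that covers a more specific child must be split into
# disjoint sub-prefixes so that a cached parent entry cannot mask the child.

def expand(parent, child):
    """Split `parent` into prefixes covering parent's space minus child's.

    Prefixes are bit strings, e.g. '10' means the prefix 10*.
    `child` must be a more specific prefix under `parent`.
    """
    assert child.startswith(parent) and len(child) > len(parent)
    pieces = []
    # Walk from parent's length down to child's length; at each step emit
    # the sibling prefix that diverges from the child's next bit.
    for i in range(len(parent), len(child)):
        sibling = child[:i] + ('1' if child[i] == '0' else '0')
        pieces.append(sibling)
    return pieces

# Example: parent 1* with child 101* expands into 11* and 100*.
print(expand('1', '101'))  # ['11', '100']
```

A full expansion applies this to every parent/child pair in the table, which is why the prefix-cache column of Table 3 grows so much faster than MPC's partial expansion.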

Similar publications

Article
Full-text available
Connectivity with external networks is an essential feature of many ad hoc networks, and such connectivity is enabled by gateway nodes. This paper investigates how the gateways affect the throughput in the ad hoc network. The throughput's dependency on gateway positions, the number of gateways and handover properties is investigated through simulations....
Article
Full-text available
The paper presents a new approach to the improvement of pipeline analog-to-digital (A/D) converter characteristics. The approach is based on the application of adaptive estimation algorithms to control and calculate output codes of pipeline A/D converters. Implementation of adaptive algorithms is possible in so-called intelligent pipeline A/D converters (...
Chapter
Full-text available
The core of our paper is the development of the HypeRSimRIP application, which can be used for various networking purposes, such as designing a network or setting up routing processes. Likewise, the application implements didactic functions useful for teaching networking-related concepts and experimental capabilities that enable its use...
Conference Paper
Full-text available
This paper addresses the evolution of Optical Transport Networks considering the possibilities offered by the Automatically Switched Optical Network to network operators. The architectural model and main features of an ASON are presented in some detail and compared with those of a traditional OTN. Transport service connection offerings are listed....

Citations

... We focus our attention on two key components: the cache and the lookup table. The cache design uses a non-blocking Multizone-Pipelined Cache (MPC) that we first presented in [15]. MPC uses prefix caching in multiple zone caches to improve the cache miss ratio. ...
... Improving the cache miss rate through the appropriate usage of prefixes can thus compensate for a large cache miss penalty, and may additionally allow a simple LUT with fast table updates. Alternately, a non-blocking cache could be used, and would hide memory latency by overlapping LUT operations with cache accesses [6, 15]. ...
... The multizone pipelined cache (MPC) Fig. 9 presents a structural description of our design for a Multizone-Pipelined Cache (MPC) [15]. In the MPC the DAA is divided into two independently sized horizontal parts that form the two zones of the cache. ...
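The zone idea in these snippets can be sketched as follows. This is a hypothetical Python illustration, not the MPC hardware: the `split_len` of 17 and the direct-mapped placement are assumptions chosen only to show how a prefix length selects one of two independently sized zones.

```python
# Hypothetical sketch of a multizone cache: the storage array is split
# into independently sized zones, and an entry's prefix length decides
# which zone it lives in (short prefixes in one, long in the other).

class MultizoneCache:
    def __init__(self, short_slots, long_slots, split_len=17):
        self.split_len = split_len  # assumed boundary, not from the paper
        self.zones = {'short': [None] * short_slots,
                      'long':  [None] * long_slots}

    def zone_for(self, prefix_len):
        return 'short' if prefix_len < self.split_len else 'long'

    def install(self, prefix, prefix_len, next_hop):
        zone = self.zones[self.zone_for(prefix_len)]
        slot = hash(prefix) % len(zone)        # direct-mapped placement
        zone[slot] = (prefix, prefix_len, next_hop)

    def probe(self, prefix, prefix_len):
        zone = self.zones[self.zone_for(prefix_len)]
        entry = zone[hash(prefix) % len(zone)]
        if entry is not None and entry[0] == prefix:
            return entry[2]
        return None                            # miss: go to the full LUT

cache = MultizoneCache(short_slots=4, long_slots=8)
cache.install('10.0.0.0', 8, 'eth0')
print(cache.probe('10.0.0.0', 8))  # 'eth0'
```

Sizing the two zones independently lets the cache match the observed mix of short and long prefixes in the traffic.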
Article
This paper proposes a novel Internet Protocol (IP) packet forwarding architecture for IP routers. This architecture is comprised of a non-blocking Multizone Pipelined Cache (MPC) and of a hardware-supported IP routing lookup method. The paper also describes a method for expansion-free software lookups. The MPC achieves lower miss rates than those reported in the literature. The MPC uses a two-stage pipeline for a half-prefix/half-full address IP cache that results in lower activity than conventional caches. MPC’s updating technique allows the IP routing lookup mechanism to freely decide when and how to issue update requests. The effective miss penalty of the MPC is reduced by using a small non-blocking buffer. This design caches prefixes but requires significantly less expansion of the routing table than conventional prefix caches. The hardware-based IP lookup mechanism uses a Ternary Content Addressable Memory (TCAM) with a novel Hardware-based Longest Prefix Matching (HLPM) method. HLPM has lower signaling activity in order to process short matching prefixes as compared to alternative designs. HLPM has a simple solution to determine the longest matching prefix and requires a single write for table updates.
... Since 2000, most of the research on caching for network routing has been focused on prefix caches [21,3], as well as 'multizone' caches, in which the cache is partitioned into different sections based on the length of the prefixes they store [8,30,25,18]. Here the focus will be solely on methods developed to handle routing prefix dependencies, as these methods are applicable to prefix caching in both the NP model, as well as the distributed model described in Section 1.1.2. ...
... The predominant strategies for dealing with the problem involve either refusing to cache prefixes with dependencies, or alternatively, modifying the routing table to remove or reduce the number of dependencies [21, 3, 18]. Before describing the strategies, first consider the example set of 4-bit routing prefixes in Figure 1.1. In this figure, the routing prefixes are arranged so that child prefixes are dependencies of their parents. ...
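The dependency notion in these snippets can be made concrete with a short sketch (hypothetical Python, illustration only): a prefix P is a parent of Q when Q extends P, so caching P alone could return the wrong next hop for addresses that should match the longer prefix Q.

```python
# Hypothetical sketch: find parent/child dependencies in a routing table.
# A prefix P is a parent of Q when Q extends P; a cached entry for P
# would then shadow the more specific (and correct) match Q.

def find_dependencies(prefixes):
    """Return (parent, child) pairs among bit-string prefixes."""
    deps = []
    for p in prefixes:
        for q in prefixes:
            if p != q and q.startswith(p):
                deps.append((p, q))
    return deps

table = ['0', '01', '0110', '111']
print(find_dependencies(table))
# [('0', '01'), ('0', '0110'), ('01', '0110')]
```

A table with no such pairs is dependency-free, and any of its prefixes can be cached safely without expansion.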
Article
Full-text available
Internet routers forward a data packet based on the packet's destination IP address. To do this, the routers must find the longest prefix matching the packet's destination address in the routing tables. The growth of the Internet increases the size of the routing tables, which complicates the longest matching prefix problem. Several new approaches have been proposed to accelerate this process. Most proposed algorithms are based on binary search structures such as a binary trie or a binary search tree; the binary trie, however, contains many empty internal nodes. In this paper, a new IP address lookup algorithm using a dynamic content binary trie (DCB-Trie) is proposed. The proposed algorithm is based on the principle of exploiting the empty nodes in the binary trie to store copies of the most recently used prefixes. The results of the performance evaluation show that the algorithm is efficient in terms of lookup speed, since a search can finish immediately when the input IP address matches a copy of a leaf prefix in an internal node, without reaching a leaf node.
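The binary-trie baseline this abstract refers to can be sketched briefly. This is a generic Python illustration, not the DCB-Trie itself; the empty internal nodes created along the way are the slots the DCB-Trie reuses for cached prefix copies.

```python
# Hypothetical sketch of longest-prefix matching in a binary trie, the
# baseline structure the DCB-Trie builds on. Internal nodes that carry
# no next hop are the "empty nodes" the paper exploits.

class Node:
    def __init__(self):
        self.child = {'0': None, '1': None}
        self.next_hop = None  # set only where a real prefix ends

def insert(root, prefix, next_hop):
    node = root
    for bit in prefix:
        if node.child[bit] is None:
            node.child[bit] = Node()   # empty internal node until filled
        node = node.child[bit]
    node.next_hop = next_hop

def lookup(root, addr_bits):
    """Walk the address bits, remembering the deepest prefix seen."""
    node, best = root, None
    for bit in addr_bits:
        node = node.child[bit]
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = Node()
insert(root, '0', 'A')
insert(root, '0110', 'B')
print(lookup(root, '01101111'))  # 'B' (longest match wins over '0')
print(lookup(root, '0100'))      # 'A'
```

Note that the walk for '0110' passes through two next-hop-less nodes ('01' and '011'); storing a cached copy of 'B' in one of them is what lets the DCB-Trie terminate a search early.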
Article
We propose algorithms for distributing the classifier rules to two ternary content addressable memories (TCAMs) and for incrementally updating the TCAMs. The performance of our scheme is compared against the prevalent scheme of storing classifier rules in a single TCAM in priority order. Our scheme results in an improvement in average lookup speed by up to 49% and an improvement in update performance by up to 3.84 times in terms of the number of TCAM writes.
Article
The Ethernet switch is a primary building block for today's enterprise networks and data centers. As network technologies converge upon a single Ethernet fabric, there is ongoing pressure to improve the performance and efficiency of the switch while maintaining flexibility and a rich set of packet processing features. The OpenFlow architecture aims to provide flexibility and programmable packet processing to meet these converging needs. Of the many ways to create an OpenFlow switch, a popular choice is to make heavy use of ternary content addressable memories (TCAMs). Unfortunately, TCAMs can consume a considerable amount of power and, when used to match flows in an OpenFlow switch, put a bound on switch latency. In this paper, we propose enhancing an OpenFlow Ethernet switch with per-port packet prediction circuitry in order to simultaneously reduce latency and power consumption without sacrificing rich policy-based forwarding enabled by the OpenFlow architecture. Packet prediction exploits the temporal locality in network communications to predict the flow classification of incoming packets. When predictions are correct, latency can be reduced, and significant power savings can be achieved from bypassing the full lookup process. Simulation studies using actual network traces indicate that correct prediction rates of 97% are achievable using only a small amount of prediction circuitry per port. These studies also show that prediction circuitry can help reduce the power consumed by a lookup process that includes a TCAM by 92% and simultaneously reduce the latency of a cut-through switch by 66%.
Article
Full-text available
To forward an IP packet toward its destination, routers make a forwarding decision on each incoming packet to determine the packet's next-hop router. This is achieved by looking up the longest prefix matching the packet's destination address in the routing tables. One major factor in the overall performance of a router is therefore the speed of the IP address lookup operation, which suffers as routing table sizes grow. In this paper, a new IP address lookup algorithm based on a cache routing table is proposed; the cache contains recently used IP addresses and their forwarding information to speed up the IP address lookup operation in routers. We evaluated the performance of the proposed algorithm in terms of lookup time for several sets of IP addresses. The results of the performance evaluation show that the algorithm is efficient in terms of lookup speed, since a search can finish immediately when the input IP address is found in the cache routing table.
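The cache-routing-table idea can be sketched with a standard LRU cache (hypothetical Python, not the paper's design): a hit on an exact destination address returns the forwarding information immediately, and a miss falls back to the full longest-prefix-match lookup.

```python
# Hypothetical sketch of a cache routing table: recently used destination
# addresses map directly to their forwarding information, so a hit skips
# the longest-prefix-match search entirely.

from collections import OrderedDict

class RouteCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # addr -> next hop, in LRU order

    def get(self, addr):
        if addr in self.entries:
            self.entries.move_to_end(addr)   # refresh recency on a hit
            return self.entries[addr]
        return None                          # miss: do the full lookup

    def put(self, addr, next_hop):
        self.entries[addr] = next_hop
        self.entries.move_to_end(addr)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = RouteCache(capacity=2)
cache.put('10.0.0.1', 'eth0')
cache.put('10.0.0.2', 'eth1')
cache.get('10.0.0.1')          # hit refreshes 10.0.0.1
cache.put('10.0.0.3', 'eth2')  # evicts 10.0.0.2 (least recently used)
print(cache.get('10.0.0.2'))   # None
print(cache.get('10.0.0.1'))   # 'eth0'
```

Because network traffic shows strong temporal locality, even a small cache like this can absorb most lookups before they reach the routing table.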
Article
Full-text available
The growth of the World Wide Web (WWW) has seen it evolve into a rich information resource. It is constantly traversed with the aid of crawlers so as to harvest web content. When collecting data, crawlers have the potential of causing service denial to web servers. This paper proposes the application of sampling as a selection strategy in the design of structural analysis web crawlers. This has the benefit of alleviating the problems of bandwidth costs to web servers whilst retaining the quality of the data that is mined by crawlers. The initial results of this study are promising and are presented in this paper.
Article
Full-text available
Web server logs store clickstream data which can be useful for mining purposes. The data is stored as a result of users' accesses to a website. Web usage mining, an application of data mining, can be used to discover user access patterns from weblog data. The obtained results are used in different applications such as site modification, business intelligence, system improvement and personalization. In this study, we have analyzed the log files of the smart sync software web server to get information about visitors and top errors, which can be utilized by system administrators and web designers to increase the effectiveness of the web site.
Article
Full-text available
The major challenge to the design and deployment of mobile ad hoc networks (MANETs) is their dynamic nature, which brings with it a set of security issues to be resolved. In this paper, we compare the behavior of three routing protocols, DSDV, DSR and AODV, under security attack, where the investigation is carried out with respect to two types of node misbehavior. The parameters taken into consideration for evaluation of network performance are normalized throughput, routing overhead, normalized routing load and average packet delay, when a certain percentage of nodes misbehave. It is established through simulation results that DSDV is the most robust routing protocol under security attacks as compared to the other two. In addition, the study reveals that a proactive routing protocol reduces the impact of a security attack by excluding the misbehaving nodes in advance.
Article
Full-text available
Routers are one of the most important entities in computer networks, especially the Internet. Forwarding IP packets is a valuable and vital function in Internet routers. Routers extract the destination IP address from packets and look up those addresses in their own routing table. This task is called IP lookup. Internet address lookup is a challenging problem due to increasing routing table sizes. Ternary Content-Addressable Memories (TCAMs) are becoming very popular for designing high-throughput address lookup engines on routers: they are fast, cost-effective and simple to manage. Despite the TCAMs' speed, their high power consumption is their major drawback. In this paper, the Multilevel Enabling Technique (MLET), a power-efficient TCAM-based hardware architecture, is proposed. This scheme is employed after an Espresso-II minimization algorithm to achieve lower power consumption. The performance evaluation of the proposed approach shows that it can save a considerable amount of the routing table's power consumption.
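How a TCAM resolves a lookup can be sketched in software (hypothetical Python, illustration only): every entry stores a value and a mask, the hardware compares all entries against the address in parallel, and the longest matching prefix wins. The loop below merely emulates that parallel compare.

```python
# Hypothetical sketch of TCAM-style longest-prefix matching. Each entry
# is a (value, mask, next_hop) triple of 32-bit ints; an address matches
# when it agrees with the value on every unmasked bit.

def tcam_lookup(entries, addr):
    """Emulate the parallel compare; return the longest match's next hop."""
    best_len, best_hop = -1, None
    for value, mask, next_hop in entries:
        if (addr & mask) == (value & mask):      # masked compare
            length = bin(mask).count('1')        # prefix length
            if length > best_len:                # longest prefix wins
                best_len, best_hop = length, next_hop
    return best_hop

def prefix(ip, length):
    """Build a (value, mask) pair from the top `length` bits of `ip`."""
    mask = ((1 << length) - 1) << (32 - length)
    return ip & mask, mask

table = [
    (*prefix(0x0A000000, 8), 'A'),   # 10.0.0.0/8
    (*prefix(0x0A010000, 16), 'B'),  # 10.1.0.0/16
]
print(tcam_lookup(table, 0x0A010203))  # 'B' (10.1.2.3 hits the /16)
print(tcam_lookup(table, 0x0A020203))  # 'A' (10.2.2.3 hits only the /8)
```

Every entry's comparator fires on every lookup in a real TCAM, which is the power cost that schemes such as MLET try to reduce by enabling only a subset of the table.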