
How in the Heck do you lose a layer!?

Abstract

Recently, Alex McKenzie published an anecdote in the IEEE Annals of the History of Computing on the creation of INWG 96, a proposal by IFIP WG6.1 for an international transport protocol. McKenzie concentrates on the differences between the proposals that led to INWG 96. However, it is the similarities that are much more interesting. This has led to some rather surprising insights into not only the subsequent course of events, but also the origins of many current problems, and where the solutions must be found. The results are more than a little surprising.
© John Day, 2011
Rights Reserved
How in the Heck
Do You Lose a Layer!?
Future Network Architectures Workshop
University of Kaiserslautern, Germany
John Day
2012
OSI only had a Network Layer, but
the Internet has an Internet Layer!
- Noel Chiappa, 1999
Preamble
A Couple of Remarks on the Nature of Layering
The advent of packet switching required much more
complex software than heretofore, and so the concept of
layers was brought in from operating systems.
In operating systems, layers are a convenience, one design
choice.
In networks, they are a necessity.
The (really) Important Thing about Layers
(From first lecture of my intro to networks course)
[Figure: the five-layer stack (Application, Transport, Network, Data Link, Physical) shown across a host, a router, and a host; lower layers have less scope, upper layers more scope.]
Networks have loci of distributed shared state with different scopes
At the very least, differences of scope require different layers.
It is this property that makes the earlier telephony or datacomm
“beads-on-a-string” model untenable.
Or any other proposal that does not accommodate scope.
If there are multiple layers of the same scope, their functionality must
be independent.
This has always been understood.
1972 Was an Interesting Year
Tinker AFB joined the ’Net, exposing the multihoming problem.
The ARPANET had its coming out at ICCC ‘72.
As fallout from ICCC ’72, the research networks decided it would be
good to form an International Network Working Group.
ARPANET, NPL, CYCLADES, and other researchers
Chartered as IFIP WG6.1 very soon after
Major project was an International Transport Protocol.
Also a virtual terminal protocol
And work on formal description techniques
But a major 3-sided war was just breaking out
War on All Sides
The Phone Companies don’t like the new model because an end-to-end
transport relegates them to a commodity business, thus not giving them
exclusive claim to “value-added services” or “services in the network.”
IBM (with 80% of the computer market) doesn’t like the new model
because it has a hierarchical network architecture, SNA.
The other computer companies (especially the minicomputer) love the
new model because it plays to their strength and because it nails down
the other two!
This War (and it was) dominates everything for the next 2 decades and
really is still going on.
Meanwhile Back at INWG
There Were Two Proposals
INWG 37 based on the early TCP and
INWG 61 based on CYCLADES TS.
And a healthy debate, see Alex McKenzie, “INWG and the Conception of the
Internet: An Eyewitness Account” IEEE Annals of the History of Computing, 2011.
Two sticking points
How fragmentation should work
Whether the data flow was an undifferentiated stream or maintained the integrity of
the units sent (letter).
These were not major differences.
After a Hot Debate
A Synthesis was proposed: INWG 96
There was a vote in 1976, which approved INWG 96.
As Alex says, NPL and CYCLADES immediately said they would
convert to INWG 96; EIN said it would deploy only INWG 96.
“But we were all shocked and amazed when Bob Kahn announced that DARPA researchers were too
close to completing implementation of the updated INWG 39 protocol to incur the expense of
switching to another design. As events proved, Kahn was wrong (or had other motives); the final
TCP/IP specification was written in 1980 after at least four revisions.”
Neither was right. The real breakthrough came two years later.
But the differences weren’t the most interesting thing about this work.
The Similarity Among all 3
Is Much More Interesting
This is before IP was separated from TCP.
All 3 Transport Protocols carried addresses.
This means that the Architecture that INWG was working to was:
Three Layers of Different Scope each with Addresses.
If this does not hit you like a ton of bricks, you haven’t been paying
attention.
This is NOT the architecture we have.
[Figure: three layers of different scope, each with addresses: an Internetwork Transport Layer (TCP), a Network Layer (IP, SNDC, SNAC), and a Data Link Layer (LLC, MAC).]
INWG’s Internet Model
An Internet Layer addressed Hosts and Internet Gateways.
Several Network Layers of different scope, possibly different
technology, addressing hosts on that network and that network’s routers
and its gateways.
Inter-domain routing at the Internet Layer; Intra-Domain routing at the
Network Layer.
Data Link Layer has the smallest scope, with addresses for the devices (hosts
or routers) on the segment it connects.
[Figure: two hosts, each with an Application, Internet, Network, and Data Link stack, connected across Net 1, Net 2, and Net 3 by Internet Gateways.]
Separating IP from TCP
Shouldn’t have changed things much. But it did.
They thought they needed a flag day to transition from NCP to TCP/IP,
and when they were done, it looked like this:
[Figure: before the split, a four-layer stack: Transport, Internetwork, Network, Data Link. After it, only three layers remain: Transport, Network, Data Link.]
How Did They Lose A Layer?
What Appears to Have Happened:
By 1980 or so: A Large Central ARPANET surrounded by LANs
ARPANET was a black box run by BBN with its own routing and congestion control.
Routers start appearing with Ethernet on one side and ARPANET on the other
Called Internet Gateways at the time.
Some are directly connected, with Static Routing or Simple Routing schemes
As the ‘Net expands and the ARPANET shrinks . . .
More routers are directly connected but still all under DoD, so not seen as separate
networks with distinct routing domains. And not big enough that it is a problem.
The Non-ARPANET network is routing on IP as a Network Layer protocol!
Little overlap between the INWG and the Internet, by now IP is the Network Layer!
No one notices that there needs to be a network layer and an internetwork layer.
And anyway functions don’t repeat in layers.
But Two Layers of Different Scope is Not a Repetition
Scope? What is that?
The Same Protocol can be used to Provide Functions of Different Scope.
To refine Dijkstra: Functions shouldn’t repeat in layers of the same scope.
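The refined rule can be sketched in code (a toy model, not from the slides; the class and attribute names are illustrative assumptions):

```python
# Sketch: the same protocol machinery may be instantiated at several layers,
# as long as layers of the SAME scope do not repeat functions.

class Layer:
    """A layer: some protocol functions operating at some scope."""
    def __init__(self, name, scope, functions):
        self.name = name          # e.g. "data link", "network", "internet"
        self.scope = scope        # how far this layer's shared state extends
        self.functions = set(functions)

def violates_rule(a, b):
    """Two layers of the same scope must have independent (disjoint)
    functions; layers of different scope may repeat functions freely."""
    return a.scope == b.scope and not a.functions.isdisjoint(b.functions)

link     = Layer("data link", scope=1,   functions={"relaying", "error control"})
network  = Layer("network",   scope=10,  functions={"relaying", "error control"})
internet = Layer("internet",  scope=100, functions={"relaying", "error control"})

# Repeating relaying and error control at different scopes is NOT a repetition:
assert not violates_rule(link, network)
assert not violates_rule(network, internet)

# But TCP and IP have the same scope, and IP's fragmentation overlaps TCP's
# sequencing/retransmission -- the rule is broken:
tcp = Layer("TCP", scope=100, functions={"sequencing", "retransmission", "fragmentation"})
ip  = Layer("IP",  scope=100, functions={"relaying", "fragmentation"})
assert violates_rule(tcp, ip)
```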
Okay, What Did OSI Do?
The other major effort at the time.
The well-known 7-layer model was adopted at the first meeting in
March 1978 and frozen. After that, they had to work within that.
One of the concerns in the Network Layer deliberations was
interworking over a less capable network:
Would need to enhance the less capable network with an additional
protocol
[Figure: interworking a high-quality network across a low-quality network: the low-quality network in the middle must be enhanced with an additional protocol to match the high-quality networks on either side.]
OSI Sub-Divided the Network Layer
This concern and the recognition that there would be different
networks interworking led OSI to posit three sublayers, which might
be optional depending on configuration:
Subnet Independent Convergence (SNIC)
Subnet Dependent Convergence (SNDC)
Subnet Access (SNAC)
OSI Also Came to the INWG Model
With a Transport Layer, this is the same as the INWG model.
OSI stayed the course with a similar split to TCP/IP.
So OSI had an Internet Architecture and the Internet has a Network
Architecture.
And signs of a repeating structure.
[Figure: the resulting stack: Transport Layer; Subnet Independent, Subnet Dependent, and Subnet Access sublayers; Data Link LLC; Data Link MAC.]
Was Splitting TCP and IP Right?
The Rules say if there are two layers of the same scope, the functions
must be independent for it to work well.
Error and Flow Control separated from Relaying and Multiplexing are
independent. But IP also handles fragmentation across networks.
Remember, Don’t repeat functions in different layers of the same scope?
Problem: IP fragmentation doesn’t work.
IP has to receive all of the fragments of the same packet to reassemble.
Retransmissions by TCP are distinct and not recognized by IP.
Fragments must be held for MPL (5 secs).
There can be considerable buffer space occupied.
There is a fix:
The equivalent of “doc, it hurts when I do this!” “Then don’t do it.”
MTU Discovery.
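The buffer problem can be sketched (a minimal model with assumed numbers): a receiver must hold fragments of a packet until all arrive or MPL expires, and a TCP retransmission arrives as a new IP packet with a new ID, so its fragments cannot complete the old packet's reassembly.

```python
MPL = 5.0  # maximum packet lifetime in seconds, the value cited above

class Reassembler:
    def __init__(self):
        self.pending = {}  # ip_id -> (arrival_time, set of fragments seen)

    def receive(self, ip_id, frag, total_frags, now):
        t, frags = self.pending.setdefault(ip_id, (now, set()))
        frags.add(frag)
        if len(frags) == total_frags:   # complete: deliver and free buffer
            del self.pending[ip_id]
            return "delivered"
        return "held"

    def expire(self, now):
        """Incomplete packets can be discarded only after MPL; until then
        their fragments sit in the buffer."""
        for ip_id in [i for i, (t, _) in self.pending.items() if now - t >= MPL]:
            del self.pending[ip_id]

r = Reassembler()
r.receive(ip_id=1, frag=0, total_frags=2, now=0.0)  # fragment 1 of 2 arrives
# Fragment 2 is lost. TCP retransmits -- but as a NEW IP packet (id=2),
# which IP does not recognize as the same data:
r.receive(ip_id=2, frag=0, total_frags=2, now=1.0)
r.receive(ip_id=2, frag=1, total_frags=2, now=1.1)  # new packet delivered
r.expire(now=2.0)
assert 1 in r.pending       # the orphan fragment still occupies buffer space
r.expire(now=5.0)
assert 1 not in r.pending   # only after MPL can it be discarded
```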
But it is the Nature of the Problem
that is Interesting
The problem arises because there is a dependency between IP and
TCP. The rule is broken.
It tries to make it a beads-on-a-string solution.
A Careful Analysis of this Class of Protocols shows that the Functions
naturally cleave along lines of Control and Data.
TCP was split in the Wrong Direction!
It is one layer, not two.
IP was a bad idea.
[Figure: the functions cleave into Relaying/Muxing, Data Transfer (delimiting, sequencing, fragmentation/reassembly, SDU protection), and Data Transfer Control (retransmission and flow control).]
A Chance to Get Things on Track
We knew in 1972, that we needed Application Names and some kind
of Directory.
Downloading the Host file from the NIC was clearly temporary.
And eventually we would have many more applications than the basic 3.
When the time came to automate it, it would be a good time to
introduce Application Names!
Nope, Just Automate the Host file. Big step backwards with DNS.
Now we have domain names
Macros for IP addresses
And URLs
Macros for jump points in low memory
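The "macros" point can be sketched (hypothetical names and addresses; this is an illustration of the claim, not any real resolver): each expansion still bottoms out in an IP address, i.e. an interface, because there is no separate application name space.

```python
host_file = {"example.org": "192.0.2.7"}   # DNS: the automated host file

def resolve_domain(name):
    return host_file[name]                 # domain name -> IP address

def resolve_url(url):
    # A URL only adds a path to the same expansion: still host -> IP.
    parts = url.split("/")
    return resolve_domain(parts[2]), "/" + "/".join(parts[3:])

addr, path = resolve_url("http://example.org/index.html")
assert addr == "192.0.2.7"   # everything reduces to an interface address

# What is missing is the other direction:
#   resolve_application("weather-service") -> a node address,
# independent of which interface (or host) the application sits behind.
```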
Then in ‘86: Congestion Collapse
Caught Flat-footed. Why? Everyone knew about this.
Had been investigated for 15 years at that point
With a Network Architecture they put it in Transport.
Worst place.
Most important property of any congestion control scheme is
minimizing time to notify. Internet maximizes it and its variance.
And implicit detection makes it predatory.
Virtually impossible to fix
Whereas . . .
Congestion Control in an Internet is
Clearly a Network Problem
With an Internet Architecture, it clearly goes in the Network Layer
Which was what everyone else had done.
Time to Notify can be bounded and with less variance.
Explicit Congestion Detection confines its effects to a specific
network and to a specific layer.
[Figure: as before, two hosts with Application, Internet, Network, and Data Link stacks connected across Net 1, Net 2, and Net 3 by Internet Gateways; explicit congestion detection stays within a single network and layer.]
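The time-to-notify argument can be made with back-of-the-envelope arithmetic (all delays are assumed values for illustration):

```python
hop_delay_ms  = 10   # per-hop delay in milliseconds, an assumption
path_hops     = 10   # hops from source to destination
congested_hop = 5    # where the queue builds up

# Implicit, end-to-end detection (TCP-style): the congestion event must
# propagate the remaining hops to the destination, then the ack (or its
# absence) must traverse the whole path back before the source reacts.
implicit_ms = (path_hops - congested_hop) * hop_delay_ms + path_hops * hop_delay_ms

# Explicit notification confined to the congested network: the message only
# travels back to that network's entry point.
explicit_ms = congested_hop * hop_delay_ms

assert implicit_ms == 150 and explicit_ms == 50
# The implicit delay also grows with path length and ack timing, so both the
# delay and its variance are maximized -- the point made above.
```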
Would be Nice to Manage the Network
With a choice between a modern object-oriented protocol (HEMS) and
a traditional approach (SNMP), err sorry . . . “a simple approach.”
IETF goes with “simple.”
The "must be simple" choice has the largest implementation of the 3:
SNMP, HEMS, CMIP.
So "simple" it is too complex to use.
IEEE had tried the SNMP approach several years earlier so the shortcomings
were well-known.
Everything connectionless making it impossible to snapshot tables
No attempt at commonality across MIBs.
Router vendors played them for suckers and they fell for it.
Not secure, can’t use for configuration.
IPv6 Still Names the Interface?
Why on Earth?
Known about this problem since 1972
No Multihoming, kludged mobility
Notice in an Internet Architecture this is straightforward.
Signs the Internet crowd didn’t understand the Tinker AFB lesson.
DARPA never made them fix it.
By the Time of IPng, Tradition trumps Everything.
IPv7 would have fixed it.
But that fight was too intense. This is no longer science, let alone
engineering.
When they can’t ignore it, and given the post-IPng trauma, they look for a
workaround.
Voilà!
Loc/Id Split!
Loc/ID Split
(these are people who
lost a layer to begin with, right?)
You’ve got to be kidding?! Right!
First off:
Saltzer [1977] defines “resolve” as in “resolving a name” as “to locate an
object in a particular context, given its name.”
All names in computing locate something.
So either nothing can be identified without locating it or located without
identifying it, OR
anything that doesn’t locate something is being used outside its context.
Hence it is either a false distinction or it is meaningless.
Second, one must route to the end of the path.
The locator is on the path to the end, it isn’t the end.
The “identifier” locates the end of the path but they aren’t routing on it.
No wonder it doesn’t scale
There is no workaround. IP is fundamentally flawed.
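Saltzer's definition can be rendered directly as code (the contexts and names here are hypothetical, purely to illustrate the definition): every name, used in its context, locates something.

```python
contexts = {
    "applications": {"weather-service": "node-7"},           # app name -> node
    "nodes":        {"node-7": "attach-point-3"},            # node -> attachment
    "attachments":  {"attach-point-3": "route via hop-2"},   # attachment -> next hop
}

def resolve(context, name):
    """To resolve a name: locate an object in a particular context, given
    its name (Saltzer's definition)."""
    return contexts[context][name]

# Each so-called "identifier" is a locator in its own context -- the loc/id
# distinction dissolves under Saltzer's definition:
node = resolve("applications", "weather-service")
assert resolve("nodes", node) == "attach-point-3"

# And routing must go to the end of the path (the node), not merely to a
# point of attachment along the way.
```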
Never Get a Busy Signal on the Internet!
Golly Gee Whiz! What a Surprise!!
With plenty of memory in NICs, huge amounts of buffer space back up
behind flow control.
If there is peer flow control in the protocol, it is pretty obvious one needs
interface flow control as well.
Well, Duh! What did you think was going to happen?
If you push back, it has to go somewhere!
Now you can have local congestion collapse!
[Figure: peer flow control pushes back, but with no interface flow control the data piles up locally. In 2010 they discovered Buffer Bloat!]
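The point can be sketched with a toy sender (all numbers assumed): if the peer's flow-control window closes but the local interface keeps accepting writes, the data "has to go somewhere" and piles up in local buffers.

```python
peer_window = 10     # packets the receiver will currently accept
local_queue = []     # NIC/driver buffer with NO interface flow control

def send(packet):
    global peer_window
    if peer_window > 0:
        peer_window -= 1              # actually transmitted
    else:
        local_queue.append(packet)    # nowhere to push back: just buffer it

for i in range(100):                  # the application keeps writing
    send(i)

assert len(local_queue) == 90         # 90 packets bloating local buffers

# With interface flow control, send() would instead block (or fail) the
# application once the window closed, bounding local queue growth.
```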
Taking Stock
The Internet has:
Botched the protocol design
Botched the architecture
Botched the naming and addressing
When they had an opportunity to move in the right direction with application
names, they didn’t. They did DNS.
When they had an opportunity to move in the right direction with node
addresses, they didn’t. They did IPv6.
More than Botched Network Management
Botched the Congestion Control twice
Once so bad it probably cannot be fixed.
By my count this makes them 0 for 8!
It defies reason! Do these guys have an anti-Midas touch or what!?
But It is a Triumph!
But It Works!
So did DOS. Still does.
With enough Thrust even Pigs Can Fly!
As long as fiber and Moore’s Law stayed ahead of Internet
Growth, there was no need to confront the mistakes.
Now it is catching up to us and can’t be fixed.
Fundamentally flawed, a dead end.
Any further effort based on IP is a waste of time and effort.
Throwing good money after bad
Every patch (and that is all we see) is taking us further from where we
need to be.
(By that argument, so was DOS)
INWG Was on The Right Track!!
They were Close to Seeing it was a Repeating Structure
A Structure we arrived at independently by a similar approach.
Is RINA the answer? Who knows? We are doing the science and
letting the chips fall where they may.
Can you propose something that is simpler and answers more
questions, makes predictions about things we haven’t seen?
We can and have. . . .
[Figure: as before, three layers of different scope, each with addresses: an Internetwork Transport Layer (TCP), a Network Layer (IP, SNDC, SNAC), and a Data Link Layer (LLC, MAC).]
How Lucky Can You Get!?
Now that We are Back on Track, There is so much to explore!
It is Interesting How Different the Fundamentals Are
By Maximizing Invariances and Minimizing Discontinuities:
To scale, resource allocation problems require a repeating (recursive)
structure. (Confirmed by Herb Simon’s The Sciences of the Artificial.)
A Layer is a Distributed Application for Managing Interprocess
Communication.
Watson’s results are fundamental: bounding 3 timers is the Necessary
and Sufficient condition for synchronization.
Besides a simpler protocol
Implies decoupling port allocation and synchronization
Implies that for it to be networking (IPC) MPL must be bounded.
Has important implications for security
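Watson's result can be sketched (the timer values are assumptions; the bound shown is the order-of-magnitude idea, not a precise restatement of delta-t): if MPL, the maximum time to acknowledge (A), and the maximum time spent retransmitting (R) are all bounded, connection state need only be kept for a bounded interval, with no handshake to establish or tear down synchronization.

```python
MPL = 5.0   # max seconds a packet can live in the network
A   = 1.0   # max delay before the receiver acknowledges
R   = 2.0   # max time the sender keeps retransmitting

# After an idle interval on the order of MPL + A + R, all packets,
# retransmissions, and acks of the old conversation are provably gone,
# so state can safely be discarded:
delta_t = MPL + A + R

def can_discard_state(idle_time):
    return idle_time >= delta_t

assert not can_discard_state(6.0)
assert can_discard_state(8.0)

# Note the implication on the slide: if MPL is NOT bounded, the conditions
# cannot be met -- bounding MPL is part of what makes it networking (IPC).
```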
Really Lucky!
So Much to Explore!
Multiple layers of the same rank implies a means to choose which one to use.
Neither a global address space nor a global name space is an absolute requirement.
The nature of security is much clearer, simpler, and more robust.
Many new avenues to explore in congestion control and quality of service.
And the adoption can be seamless.
And there are implications beyond IPC to distributed systems in general that
we are just beginning to understand:
Hint: Distributed Applications are Local Computations.
Is there more to Discover?
Undoubtedly!
This talk is largely based on having been a member of the ARPANET NWG and INWG, and Rapporteur of the OSI Reference Model.