How do we get packets from one end point to another? Here's what would be nice for a routing algorithm: correctness, simplicity, robustness, stability, fairness, optimality.


Algorithms may be static, i.e. the routing decisions are made ahead of time with information about the network topology and capacity, then loaded into the routers; or dynamic, i.e. the routers make decisions based on information they gather, and the routes change over time, adaptively.

Optimality principle and sink trees

Shortest Path

Links between routers have a cost associated with them. In general it could be a function of distance, bandwidth, average traffic, communication cost, mean queue length, measured delay, router processing speed, etc.

The shortest path algorithm just finds the least expensive path through the network, based on the cost function.

Dijkstra's algorithm is an example. You can start from either the source or the destination (it doesn't matter in an undirected graph).

  1. Set probe node to starting node.
  2. Probe neighboring nodes and tentatively label them with (probe node, cumulative distance from start).
  3. Search all tentatively labeled nodes (and not just the nodes labeled from the current probe) for the minimum label, make this minimum node's label permanent, and make it the new probe node.
  4. If the probe node is the destination/source, stop, else goto 2.
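The steps above can be sketched in Python (a minimal sketch; the function and variable names are my own, and a heap stands in for the search over tentatively labeled nodes in step 3):

```python
import heapq

def dijkstra(graph, start, goal):
    """Find the least-cost path from start to goal.

    graph: dict mapping node -> {neighbor: link_cost}.
    Returns (total_cost, path_as_list_of_nodes).
    """
    # Priority queue of (cumulative cost, node, path so far);
    # popping the minimum mirrors step 3 of the outline.
    frontier = [(0, start, [start])]
    permanent = set()  # nodes whose labels are final
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node in permanent:
            continue
        permanent.add(node)        # make the minimum label permanent
        if node == goal:           # step 4: stop at the destination
            return cost, path
        for neighbor, weight in graph[node].items():  # step 2: probe neighbors
            if neighbor not in permanent:
                heapq.heappush(frontier, (cost + weight, neighbor, path + [neighbor]))
    return float('inf'), []

# A three-router example (costs are made up):
net = {'A': {'B': 2, 'C': 5}, 'B': {'A': 2, 'C': 1}, 'C': {'A': 5, 'B': 1}}
print(dijkstra(net, 'A', 'C'))   # → (3, ['A', 'B', 'C'])
```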


Flooding

Every incoming packet is sent out on every other link by every router.

Super simple to implement, but generates lots of redundant packets. It is interesting to note that all routes are discovered, including the optimal one, so flooding is robust and high performance in the sense that the best path is found without being known ahead of time. Good when topology changes frequently (the USENET example).

Some means of controlling the expansion of packets is needed. Could try to ensure that each router only floods any given packet once.

Could try to be a little more selective about what is forwarded and where.
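One common way to ensure each router floods a given packet only once is to tag every packet with a (source, sequence number) pair and have routers remember what they have already seen. A minimal sketch, with a hypothetical packet format:

```python
class FloodingRouter:
    """Sketch of a router that floods each packet at most once.

    Packets are assumed to carry 'source' and 'seq' fields; the exact
    wire format here is illustrative, not a real protocol.
    """
    def __init__(self, name, links):
        self.name = name
        self.links = links      # list of neighboring links/routers
        self.seen = set()       # (source, seq) pairs already flooded

    def receive(self, packet, arrived_from):
        key = (packet['source'], packet['seq'])
        if key in self.seen:
            return []           # duplicate: drop, do not re-flood
        self.seen.add(key)
        # forward on every link except the one the packet arrived on
        return [link for link in self.links if link != arrived_from]

r = FloodingRouter('A', ['B', 'C', 'D'])
pkt = {'source': 'X', 'seq': 1, 'data': 'hello'}
print(r.receive(pkt, 'B'))   # → ['C', 'D']
print(r.receive(pkt, 'C'))   # → []  (already seen, dropped)
```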


Flow-Based Routing

Similar in spirit to minimum-distance routing, but takes traffic flow into consideration.

The key here is to be able to characterize the nature of the traffic flows over time. You might be able to do this if you know a lot about how the network is used (traffic arrival rates and packet lengths). From the known average amount of traffic and the average length of a packet you can compute the mean packet delays using queuing theory. Flow-based routing then seeks to find a routing table to minimize the average packet delay through the subnet.
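As a sketch of the queuing-theory step: modelling one line as an M/M/1 queue, the mean delay is T = 1/(uC - L), where C is the line capacity in bits/sec, 1/u is the mean packet length in bits, and L is the arrival rate in packets/sec. The numbers in the example are made up:

```python
def mean_line_delay(capacity_bps, mean_packet_bits, arrivals_per_sec):
    """Mean delay in seconds on one line, modelled as an M/M/1 queue:
    T = 1 / (u*C - L), where 1/u is the mean packet length."""
    u = 1.0 / mean_packet_bits            # packets per bit
    service_rate = u * capacity_bps       # packets/sec the line can carry
    if arrivals_per_sec >= service_rate:
        return float('inf')               # line is saturated
    return 1.0 / (service_rate - arrivals_per_sec)

# Hypothetical 20 kbps line, 800-bit packets, 20 packets/sec offered:
# the line can serve 25 packets/sec, so T = 1/(25 - 20) = 0.2 sec.
print(mean_line_delay(20_000, 800, 20))   # → 0.2
```

Flow-based routing would evaluate this delay for every line under a candidate routing table, average the delays weighted by traffic, and pick the table that minimizes the average.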

Distance Vector

Also known as Bellman-Ford or Ford-Fulkerson. Used in the original ARPANET, and in the Internet as RIP.

The heart of this algorithm is the routing table maintained by each host. The table has an entry for every other router in the subnet, with two pieces of information: the link to take to get to that router, and the estimated distance to it. For a router A with two outgoing links L1 and L2, and a total of four routers in the network, the routing table might look like this:
router  distance  link
B       5         L1
C       7         L1
D       2         L2

Neighboring nodes in the subnet exchange their tables periodically to update each other on the state of the subnet (which makes this a dynamic algorithm). If a neighbor claims to have a path to a node which is shorter than your path, you start using that neighbor as the route to that node. Notice that you don't actually know the route the neighbor thinks is shorter - you trust its estimate and start sending packets that way.

When a neighbor sends you its routing table, you examine each entry in it: if the neighbor's distance to some destination, plus the cost of the link to that neighbor, is less than your current estimate for that destination, you update your own entry to route through that neighbor.
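This update step can be sketched as follows (a simplified model: entries record the next-hop neighbor rather than a named link, and all names are my own):

```python
def update_routing_table(table, neighbor, neighbor_table, cost_to_neighbor):
    """Merge a neighbor's distance vector into our routing table.

    table: dict dest -> (distance, next_hop)
    neighbor_table: dict dest -> distance, as advertised by the neighbor.
    Any destination reached more cheaply via the neighbor replaces
    our current entry.
    """
    for dest, dist in neighbor_table.items():
        candidate = cost_to_neighbor + dist
        if dest not in table or candidate < table[dest][0]:
            # trust the neighbor's estimate and route via it
            table[dest] = (candidate, neighbor)
    return table

# Router A, whose link to neighbor B costs 2, hears B's vector:
mine = {'B': (2, 'B'), 'C': (7, 'C')}
from_b = {'C': 3, 'D': 4}
print(update_routing_table(mine, 'B', from_b, 2))
# → {'B': (2, 'B'), 'C': (5, 'B'), 'D': (6, 'B')}
```

Note that C's entry improved (2 + 3 = 5 beats 7) and a previously unknown router D was learned about entirely from the neighbor's claim.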

You can think of this as forming an approximation of the global state of the subnet from local information only (exchange with neighbors). Unfortunately it has problems (it's only an approximation, after all). Good news (a link comes up, a new router is available, a router or link is made faster) propagates very quickly through the whole subnet (in the worst case it takes a number of exchanges equal to the longest path for everyone to know the good news).

Bad news is not spread reliably. Neighbors only slowly increase their path length to a dead node, and the condition of being dead (infinite distance) is reached by counting up one exchange at a time (the count-to-infinity problem). Various means of fixing this have been tried, but none are foolproof.

Link State

Widely used today, replaced Distance Vector in the ARPANET. Link State improves the convergence of Distance Vector by having everybody share their idea of the state of the net with everybody else (more information is available to nodes, so better routing tables can be constructed).

The basic outline is

  1. discover your neighbors
  2. measure delay to your neighbors
  3. bundle all the information about your neighbors together
  4. send this information to all other routers in the subnet
  5. compute the shortest path to every router with the information you receive
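Steps 3 and 4 center on the link state packet each router builds and floods. A minimal sketch of step 3; the field names (seq, age) are illustrative, not a real protocol format:

```python
import itertools

_seq = itertools.count(1)   # this router's per-packet sequence counter

def build_link_state_packet(router, measured_delays):
    """Bundle one router's measured neighbor delays (step 3) into a packet.

    measured_delays: dict neighbor -> delay.  The seq field lets
    receivers discard duplicates and out-of-date packets during the
    flood in step 4; age lets stale packets expire.
    """
    return {
        'source': router,
        'seq': next(_seq),     # increases with every new packet
        'age': 60,             # discard the packet when its age runs out
        'links': dict(measured_delays),
    }

lsp = build_link_state_packet('A', {'B': 4, 'C': 2})
print(lsp['links'])   # → {'B': 4, 'C': 2}
```

Once a router has collected such packets from everyone, it knows the full graph and can run a shortest path computation (step 5) locally, e.g. Dijkstra's algorithm from the Shortest Path section.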


Hierarchical Routing

When your subnet is large, the routing tables become unwieldy: too much memory to store them, too much time to search them, too much time to compute them. When something is too large, people form a hierarchy to deal with it.

The idea is to replace N different routing table entries for N different individual routers with a single entry for a cluster of N routers. You can apply many different levels of hierarchy.

<figure 5.17 example>

The price you pay is that you don't have the optimal route for each router anymore (makes sense, since you lump all routers in a single cluster together with one optimal path).
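To see the size of the saving, a rough sketch (the subnet size and cluster count in the example are hypothetical, and equal-sized clusters are assumed):

```python
import math

def table_entries(n_routers, clusters=None):
    """Routing table size without and with one level of hierarchy.

    Flat: one entry per other router.  Two-level: one entry per other
    router in your own cluster, plus one entry per other cluster.
    """
    flat = n_routers - 1
    if clusters is None:
        clusters = round(math.sqrt(n_routers))   # near-optimal split
    per_cluster = math.ceil(n_routers / clusters)
    two_level = (per_cluster - 1) + (clusters - 1)
    return flat, two_level

# Hypothetical subnet of 720 routers split into 24 clusters of 30:
print(table_entries(720, clusters=24))
# → (719, 52): 29 local entries plus 23 cluster entries
```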


Broadcast Routing

Unicast routing is the most general, but broadcast routing is a good match for many applications (e.g. distributing routing data, or sending common information like weather reports).

One way of doing this is to send a packet which carries the desired data along with a list of destinations to be visited. At each router the list is examined. For each outgoing line which is the best path to at least one destination in the list, the packet is duplicated, and the new packet's list contains only the destinations best reached over that line. Eventually each packet contains a single destination, at which point the algorithm has done the distribution.
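The per-router splitting step can be sketched as follows (names and the example routing table are hypothetical):

```python
def split_packet(destinations, best_line):
    """Partition a multi-destination packet's address list by outgoing line.

    best_line: dict dest -> outgoing line, taken from the router's
    ordinary routing table.  Returns dict line -> destination list;
    one duplicate packet is sent per line used.
    """
    copies = {}
    for dest in destinations:
        copies.setdefault(best_line[dest], []).append(dest)
    return copies

# A router whose table routes B and C via line L1 and D via line L2:
table = {'B': 'L1', 'C': 'L1', 'D': 'L2'}
print(split_packet(['B', 'C', 'D'], table))
# → {'L1': ['B', 'C'], 'L2': ['D']}
```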


Multicast Routing

The goal is to be more efficient than unicast, without the expense (or lack of privacy) of broadcast.