You start off with an Endpoint Name you wish to talk to. You give that Endpoint Name to a name server, which gives you back a route or routes from the name server to the destination. You already had a route to the name server (that's how you talked to it; see the Site Beacon Protocol and Endpoints), so you put the routes together to get a route from you to your destination.
You then use that route as a starting point to retrieve network maps between you and your destination, you pick the route you want, set up the flow, and communicate away. Obviously some judicious caching will be a good idea.
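A toy sketch of that sequence, just to make the moving pieces concrete. Everything in it is invented for illustration (the routes, the name server's answers, and the stand-ins for map retrieval and flow setup); it only shows the lookup, the naive concatenation, the cache, and where flow setup would happen.

```python
# Toy sketch: look up a route via the name server, concatenate it with our
# route to the name server, cache it, and "set up the flow".
# All names, routes, and data structures here are invented for illustration.

ROUTE_TO_NAME_SERVER = ["me", "R1", "name-server"]

# The name server's answer: route(s) from itself to the named Endpoint.
NAME_SERVER_ANSWERS = {
    "far-host.example": [["name-server", "R2", "R3", "far-host.example"]],
}

route_cache = {}

def first_cut_route(endpoint_name):
    answer = NAME_SERVER_ANSWERS[endpoint_name][0]   # the lookup
    return ROUTE_TO_NAME_SERVER + answer[1:]         # naive concatenation

def connect(endpoint_name):
    route = route_cache.get(endpoint_name) or first_cut_route(endpoint_name)
    route_cache[endpoint_name] = route               # judicious caching
    # Here you would pull maps from the Sites along `route`, pick the route
    # you actually want, and set up the flow; this sketch just reports it.
    print("setting up flow along", " -> ".join(route))

connect("far-host.example")
```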
The first step of route optimization is to remove the backtracking. It's fairly likely that your route to the name server and the route from the name server to your destination traverse some of the same routers and networks. Obviously that overlap should be removed from the concatenated route as efficiently as possible. I suggest that the route syntax be defined so it's possible to figure this out by inspection. This means that the names of entities in routes shouldn't depend on the direction you go through them (that is, not MAC layer addresses). A simple answer is just to use the Endpoint Name of each entity. There may be a more size-efficient answer, however.
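Here's a minimal sketch of that backtracking removal, assuming route elements are direction-independent Endpoint Names so overlap can be found by simple comparison; all the hop names are made up.

```python
# Sketch of removing backtracking from two concatenated routes, assuming each
# element is the Endpoint Name of a router or network (direction-independent).
# The routes and names below are invented for illustration.

def splice_routes(to_name_server, name_server_to_dest):
    """Concatenate two routes, cutting out any shared backtracked portion."""
    # Find the earliest point on our route to the name server that also
    # appears on the route from the name server to the destination; joining
    # there removes the round trip out to the name server and back.
    for i, hop in enumerate(to_name_server):
        if hop in name_server_to_dest:
            j = name_server_to_dest.index(hop)
            return to_name_server[:i] + name_server_to_dest[j:]
    # No overlap at all: plain concatenation.
    return to_name_server + name_server_to_dest

# Example: we reach the name server through R1 and R2; the name server's
# route to the destination goes back out through R2.
to_ns   = ["me", "R1", "R2", "name-server"]
ns_to_d = ["name-server", "R2", "R3", "destination"]
print(splice_routes(to_ns, ns_to_d))
# -> ['me', 'R1', 'R2', 'R3', 'destination']
```

Note that in the example the name server itself drops out of the spliced route, which is exactly the stub removal the next paragraph leans on.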
A problem here has to do with multi-homed name servers. If the route in comes through one interface and the route out goes through another, we have to arrange that the concatenation of the routes doesn't result in a route through the name server, which is unlikely to be a router. A simpler form of this problem is when the in and out routes go through the same interface but through different first-hop routers. We could make the name server a little smarter and have it send back a route that produces the right answer, assuming the backtracking optimization will remove the name server as a stub. Another option is that the requester sends its route to the name server along with its request, and the name server does the concatenation, fixing these sorts of problems if necessary.
Perhaps a better idea than either of those two options is that the resulting route is not intended to be used to actually send a packet but only to give enough information to contact map servers along the way and then you'll figure out the route from that. The "route" might even be just a sequence of Sites rather than a sequence of actual forwarders.
In Nimrod, as a first approximation you knew who to ask for maps by inspecting the addresses. In this approach, you do the same thing with the route; you ask everyone along the route for their maps and then look over the maps for optimizations (or policy choices). Just using the route without this step is the equivalent of the "brain dead" routing described in the original Nimrod paper.
One suggestion that this gives for the syntax of routes is that they should indicate somehow where you traverse from one mapping domain to the next so you know when to ask for the next map. There also has to be a way to go from a router listed in a route to a map server that covers that router.
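One way the route syntax could carry that information is to tag each hop with the Site (mapping domain) it belongs to, and to keep some way of getting from a Site to a map server covering it. A small sketch under those assumptions; the hop names, Site names, and map-server table are all invented.

```python
# Sketch of a route syntax that marks mapping-domain boundaries by tagging
# each hop with the Site it belongs to.  Everything below is invented.

# Each hop is (router-or-endpoint name, Site it belongs to).
route = [
    ("me",       "site-A"),
    ("R1",       "site-A"),
    ("R7",       "site-B"),
    ("R9",       "site-B"),
    ("far-host", "site-C"),
]

# The "way to go from a router listed in a route to a map server that covers
# that router": here, just a lookup from Site to its map server.
MAP_SERVERS = {"site-A": "maps.site-A", "site-B": "maps.site-B",
               "site-C": "maps.site-C"}

def map_servers_to_ask(route):
    """Yield one map server per mapping domain the route passes through."""
    previous_site = None
    for hop, site in route:
        if site != previous_site:          # crossed into a new mapping domain
            yield MAP_SERVERS[site]
            previous_site = site

print(list(map_servers_to_ask(route)))
# -> ['maps.site-A', 'maps.site-B', 'maps.site-C']
```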
This lets you cut the corners off your route but doesn't help you to find new routes in the other direction. For now, I'm going to punt this one the same way Nimrod did; if you can think of an algorithm to look elsewhere for maps, go for it. The nice thing about map distribution systems is that everyone doesn't have to run the same routing algorithm.
Note: A good optimization would be if you could tell if a Name you were looking at named an Endpoint or a Site or Coalition. If it's an Endpoint, there's not much point to asking it for a map. If it's a Site or Coalition, you may want to ask.
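If the name system (or the names themselves) can reveal what kind of thing a Name names, the filter is trivial. A tiny sketch under that assumption; the kind table and names are made up.

```python
# Sketch of the optimization in the note above: only bother asking for maps
# from names that identify a Site or Coalition.  The kind table is invented;
# in a real system the name or the name system would have to reveal this.

NAME_KINDS = {"host-17": "endpoint", "site-B": "site", "net-coop": "coalition"}

def worth_asking_for_map(name):
    return NAME_KINDS.get(name) in ("site", "coalition")

print([n for n in ["host-17", "site-B", "net-coop"] if worth_asking_for_map(n)])
# -> ['site-B', 'net-coop']
```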
For this to work, you've got to somehow start with a route to a name server. One way to look at this is that it's the same as you having to know the address of your name server now. But that's lame. Finding your local name server should be a basic part of the system as well as finding your first hop routers and your local map server.
From there, each name server keeps a route to its parent name server and probably to any secondary name servers serving the same zones. Iterative lookups should work fine with the DNS as it is now; recursive ones require the DNS server that's recursing to concatenate the answer with its route to the server it queried.
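A sketch of that recursive case, assuming routes meet end-to-start so the recursing server can simply splice them; the names and routes are invented.

```python
# Sketch: a recursing name server splices its own route to the server it
# queried onto the route in that server's answer before returning it.
# All names and routes here are invented for illustration.

def concatenate(route_a, route_b):
    """Join two routes that meet at route_a[-1] == route_b[0]."""
    assert route_a[-1] == route_b[0]
    return route_a + route_b[1:]

# The recursing server's route to the server it asked...
to_queried_server = ["local-ns", "R4", "auth-ns"]
# ...and that server's answer: a route from itself to the target Endpoint.
answer = ["auth-ns", "R8", "target-endpoint"]

# What the recursing server should hand back to the original querier:
print(concatenate(to_queried_server, answer))
# -> ['local-ns', 'R4', 'auth-ns', 'R8', 'target-endpoint']
```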
Each Endpoint has an entry in the name system with at least one route to that Endpoint. Also, each name server is keeping routes to its parent and siblings. As the network changes, those routes must also change. So how are these routes maintained?
Route Fragments
A good name for some of these things I call "routes" might be "route fragments". Unfortunately, Dave Clark coined that term for a routing system I never quite understood, so I don't want to lift his term and apply it to something that's not even close.
Communication starts by talking to a name server to find a first-cut route to some destination. Therefore you have to be able to find the name server; that is, each Endpoint needs a route to its local name server, from which it can find all the others.
I propose that this is a basic service of a Site: routes to all name servers in the Site (or to specified servers outside the Site, if configured as such) are passed around by the routers, and this information is then made available to hosts either through a request/response protocol or a beaconing protocol with the routers.
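A sketch of what a router might hand back to hosts, whether beaconed or sent in a response; the message layout and field names are purely illustrative.

```python
# Sketch of the message a router could beacon (or return on request) so that
# hosts can find the Site's name servers.  The layout and names are invented.

import json

def build_name_server_beacon(site_name, name_server_routes):
    """Routers pass around routes to the Site's name servers; each router
    then advertises them to the hosts behind it."""
    return json.dumps({
        "site": site_name,
        # One route per name server, from this router to that server.
        "name_servers": name_server_routes,
    })

beacon = build_name_server_beacon(
    "site-A",
    {"ns1.site-A": ["this-router", "R2", "ns1.site-A"],
     "ns2.site-A": ["this-router", "R5", "ns2.site-A"]},
)
print(beacon)
```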
I have two answers to how routes to Endpoints are maintained in the name system. The first is the architecturally pure answer, the second is an answer that has some possible difficulties but may be sufficiently more efficient to be a better engineering choice.
The first answer is that Endpoints maintain their own entries in the name system. Each Endpoint knows its own name, it knows a route to at least one name server, so it can query the Name System to find the servers that maintain its own entry (usually the local name server anyway). Having done so, it has also discovered a route to that name server and can reverse that or examine maps to determine an appropriate route back. Obviously the Name System needs to allow dynamic update.
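A sketch of that self-registration, assuming route elements are direction-independent names (so reversing the discovered route is meaningful) and that the name system accepts some form of dynamic update; every name and function below is invented.

```python
# Sketch of an Endpoint maintaining its own entry: find a route to the
# authoritative name server, derive a route back to itself, and send a
# dynamic update.  All names and the update call are invented.

def reverse_route(route):
    # Only legal because route elements are direction-independent names.
    return list(reversed(route))

def register_self(my_name, route_to_authoritative_ns, send_update):
    # A route from the name server back to us, derived by reversing the
    # route we used to reach it (examining maps would work too).
    route_to_me = reverse_route(route_to_authoritative_ns)
    send_update(owner=my_name, routes=[route_to_me])

def print_update(owner, routes):
    print("dynamic update for", owner, "->", routes)

register_self("toaster.site-A",
              ["toaster.site-A", "R1", "ns1.site-A"],
              send_update=print_update)
```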
An Endpoint may add multiple routes to its entry if it's a multiply-homed host, is served by multiple ISPs, or knows something about the network topology or policies and thinks it gains from having multiple routes listed.
Note: If entries in the name space cost money and extra routes cost extra, each Endpoint gets to decide its own tradeoff between cost and reliability.
The second approach is just a variation on the first. Instead of each Endpoint taking care of this itself, someone else does it for them. This proxy registration server would watch the Site's routing information (or just use the map, since it's probably in fact the same host, maybe even the same application, as the map server), figure out all the routes (probably all the Endpoints are served off the same name server), and keep the name server up to date. This has a big advantage for smaller hosts that don't want this overhead; think of embedded stacks in devices that are there only to support SNMP, for instance.
Let the Site map protocol include a bit that an Endpoint uses to tell this proxy registration server whether it wants to handle route maintenance itself or wants the proxy to handle it.
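A sketch of the proxy's side, assuming the Site map can tell it both a route to each Endpoint and whether that Endpoint set the bit; the map format and the update call are invented.

```python
# Sketch of a proxy registration server: watch the Site map and register
# routes for every Endpoint that has not set the "I'll maintain my own
# entry" bit.  The map format, the bit, and the update call are invented.

SITE_MAP = {
    # Endpoint name -> (route from the Site's name server to it, self-maintain bit)
    "toaster.site-A": (["ns1.site-A", "R1", "toaster.site-A"], False),
    "laptop.site-A":  (["ns1.site-A", "R2", "laptop.site-A"],  True),
}

def proxy_register(site_map, send_update):
    for endpoint, (route, maintains_itself) in site_map.items():
        if maintains_itself:
            continue                      # the Endpoint asked to do it itself
        send_update(owner=endpoint, routes=[route])

proxy_register(SITE_MAP, lambda owner, routes: print("update", owner, routes))
```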
Note: I think there's probably some thinking needed here about name server load with respect to scaling. Also, what happens to the server load when there's a large shift in the network topology?
Name servers keep a route to their parents and keep their parents updated on a route back to them, to keep the naming tree intact. When this route is lost, though, a name server has a harder time reconnecting. The way out is that each Site keeps in communication with all its neighboring Sites. If a name server loses its parent, it talks to a neighboring name server and looks up its parent through that.
If the neighbor is similarly lost, that neighbor will be talking to its neighbors. Unfortunately, I'm not sure it's true that somebody eventually has to know. If the whole net were power cycled, no one would know who their parent was. I need to work out the naming system in a little more detail. I guess whoever is root should know it, and the tree somehow builds itself down from there. But roots can get lost too, by losing track of each other.
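A sketch of the neighbor-query recovery step, ignoring for the moment how the returned route gets spliced onto our route to the neighbor; the neighbor list and lookup are invented.

```python
# Sketch: a name server that has lost its route to its parent asks the
# name servers of neighboring Sites to look the parent up for it.
# Neighbor names, lookups, and routes are all invented.

def reconnect_to_parent(parent_name, neighbor_name_servers, lookup_via):
    """Try each neighboring Site's name server until one can resolve a
    route to our parent; give up (for now) if none of them can."""
    for neighbor in neighbor_name_servers:
        route = lookup_via(neighbor, parent_name)
        if route is not None:
            return route       # still needs our route to `neighbor` spliced on
    return None                # every neighbor is equally lost; retry later

# Toy lookup: only the second neighbor still knows how to reach the parent.
def toy_lookup(neighbor, parent_name):
    known = {"ns.site-C": ["ns.site-C", "R9", parent_name]}
    return known.get(neighbor)

print(reconnect_to_parent("parent-ns", ["ns.site-B", "ns.site-C"], toy_lookup))
```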
This isn't the right place for these comments, but this is where I thought of them. I picture most Sites as having a name server of their own which serves the Endpoints in that Site. The Endpoints in the Site then only have to register locally (maybe they don't even have to register at all, because the name server is co-located with the map server and it just figures it all out). You generally wouldn't need a backup name server outside the Site (though a backup in the Site may still make sense), because if you can't get to the Site you can't talk to those Endpoints anyway.
Note: MAP pointed out to me that this isn't so. If your site is unreachable you still want your name system record to be visible. MX records for instance certainly want to be.
The local name server then maintains its entry with its parent name server, and that's where issues of paths through multiple ISPs and multiple outside name servers come in. The name system above the Site level should be automatically generated and maintained. The names picked for those intermediate levels will probably be computer-generated sequences of letters and numbers.
If the name system grows out of the current DNS, it'll probably be that internal points in the naming tree are not all automatic. But it may be that there'll be competing registries that give a customer a choice of whom to pay. The issue that haunts us now with the DNS remains, though: what if you want or need to change your registry? Names from the name system should not change, so how do we accomplish this?
Nimrod used the addressing hierarchy to aggregate and abstract map information as you moved up the tree. Since I don't have addresses, where does this aggregation and abstraction take place? I don't know. I'm very uncomfortable with that answer, but I have a couple of thoughts on the matter, if no concrete answers yet.
Suppose Sites were able to form Coalitions and release an aggregated, abstract map to the world. If the hosts maintaining their routes inside this Coalition know about it, they could install routes that reference the abstracted map instead of the physical maps. As long as the routers know about that map too, they can handle forwarding based on its numbers too. This is sort of like bringing addresses back in, but the whole network does not have to resolve to a single root of the addressing tree.
Another possibility is that third parties make their own abstract maps of the net. Hosts that like a certain map vendor might make use of their maps when constructing their routes. Or perhaps the map vendor is used by people desiring to look up a route but not do all the work themselves (picture the Altavista mapping service). Since the routers in question wouldn't necessarily know about these maps, the route would somehow have to get processed down to elements that the routers did understand before trying to set up the flow.
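A sketch of the expansion step both of these ideas imply: a route may name an element of an abstract map, and a forwarder that doesn't know that map has to expand the element into pieces it does understand. The maps and names are invented.

```python
# Sketch: a route may name an element of a Coalition's (or third-party's)
# abstract map instead of the physical routers it stands for.  A forwarder
# that knows the abstract map can use the element directly; one that doesn't
# must expand it first.  The maps and names below are invented.

# The abstract element -> the physical hops it abstracts.
ABSTRACT_MAP = {"coalition-X:trunk-1": ["R20", "R21", "R22"]}

def expand_route(route, known_abstract_elements):
    """Expand any abstract elements this forwarder does not understand."""
    expanded = []
    for element in route:
        if element in known_abstract_elements or element not in ABSTRACT_MAP:
            expanded.append(element)                # usable as-is
        else:
            expanded.extend(ABSTRACT_MAP[element])  # fall back to physical hops
    return expanded

# A router inside the Coalition keeps the abstract element; an outside
# router (knowing none of them) gets the physical hops instead.
print(expand_route(["R1", "coalition-X:trunk-1", "R30"], {"coalition-X:trunk-1"}))
print(expand_route(["R1", "coalition-X:trunk-1", "R30"], set()))
```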
last updated: Sat Apr 27 12:22:30 2002 by David Bridgham