Here are some disconnected thoughts about naming that came to me in late 2010. Several years have passed since I wrote most of these Nimrod-related webpages and the Internet has changed a lot. Sadly, the problems surrounding the Domain Name System remain, and have even grown worse.
As with the routing, the fundamental organizing and administrative unit will be the Site. The purpose of the naming system is to find and identify a nameserver of the Site you're interested in. Once you've done that, you may ask the Site's nameserver directly for information about entities administered by that site.
A Site starts by generating a PGP key pair. The public key becomes the fixed name of the Site. Not a human-friendly name, certainly, but a name that is globally unique. The public key, along with other identifying information, is published on the Site's nameservers. The other identifying information is the human-readable part. It consists of things like names, addresses, phone numbers, and anything else that would help a human decide they'd found the right place. This other information may not be globally unique; that's why a human needs to get involved in the decision.
The protocol for publishing this information might be as simple as tagged text files, much like HTML over HTTP, or it might be done with a custom protocol.
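For concreteness, one of those tagged text files might look something like the following. Every tag and value here is made up for illustration; nothing about the format is fixed, and the key is just the example key used later on this page.

    <site>
      <public-key>iBDgXXMoRBAC910cya9xTeh0ea26WgjeW6SSoVUB</public-key>
      <name>Example Widget Company</name>
      <address>123 Any Street, Sometown</address>
      <phone>+1 555 0100</phone>
      <contact>hostmaster</contact>
    </site>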
There would be a protocol to allow Endpoints within a Site to update their entries in the Site's nameservers and a protocol between nameservers to keep them in sync.
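A minimal sketch, in Python, of what that update protocol might carry; the field names, and the idea that updates are signed with a key the nameserver already trusts, are assumptions on my part:

    from dataclasses import dataclass

    @dataclass
    class EndpointUpdate:
        endpoint: str       # name within the Site, e.g. "www"
        locator: str        # where that endpoint can currently be reached
        signature: bytes    # lets the nameserver check who sent the update

    def apply_update(entries, update, verify):
        # Install the entry only if the signature checks out; how the
        # nameserver verifies it is deliberately left open here.
        if verify(update):
            entries[update.endpoint] = update.locator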
The next step is Search Engines. They crawl the net in much the same manner as web search engines crawl the web, collecting all the information from Sites' nameservers and building search tables. This net crawling is done using the routing maps rather than by following HTML links. Nameservers are identified on maps, and the edges of a Site's map connect to other Sites, allowing a Search Engine to traverse the entire net.
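A rough sketch of that crawl, again in Python. The fetch_map and fetch_records calls stand in for protocols that don't exist yet; I'm only assuming a map lists a Site's nameservers and its neighboring Sites:

    from collections import deque

    def crawl(seed_site, fetch_map, fetch_records):
        # Breadth-first walk over the routing maps, collecting each Site's
        # published information as we go.
        seen, queue, index = set(), deque([seed_site]), {}
        while queue:
            site = queue.popleft()
            if site in seen:
                continue
            seen.add(site)
            site_map = fetch_map(site)
            index[site] = fetch_records(site_map.nameservers)
            queue.extend(site_map.neighbors)   # follow the map's edges onward
        return index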
The Search Engines make the results of their work available in two ways. The first is the PGP public key lookup, which is used to translate a network-unique name into the set of nameservers for that Site. This function would, in part, replace the DNS lookup protocol of today.
The second lookup function from the Search Engines is keyed off the human-readable information. These lookups may well not return unique answers. I picture them looking very much like web searches of today, and they may in fact work just like that. Once the user has found the Site they desire, they'd store the unique name, the PGP key, in a local address book under a locally meaningful name for future use.
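Put together, a Search Engine's two lookups might look roughly like this toy index; the class and method names are mine, and a real engine would obviously shard and rank all of this:

    class NameSearchEngine:
        def __init__(self):
            self.by_key = {}    # Site public key -> set of its nameservers
            self.by_text = []   # (human-readable text, Site public key) pairs

        def lookup(self, site_key):
            # Exact lookup: translate the globally unique name into nameservers.
            return self.by_key.get(site_key, set())

        def search(self, query):
            # Fuzzy lookup: may return many candidate Sites for one query.
            return [key for text, key in self.by_text if query.lower() in text.lower()]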
How the Search Engines distribute these lookup functions across multiple servers for load-balancing is an internal issue.
Note here that nothing precludes multiple, competing Search Engines. Indeed, that is the whole point. This avoids the control point of a single Search Engine like the DNS today.
Once a user has looked up a Site's nameservers with their chosen Search Engine, they talk directly to one of the Site's nameservers to complete their name lookup. This is where they would specify an endpoint within that site or a service they were looking for.
A protocol optimization would allow the desired endpoint or service to be named in the request to the Search Engine; if the Search Engine had also collected that information, it could be returned in the same transaction.
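In code, the whole resolution with that optimization might go something like this sketch; it assumes a Search Engine lookup that accepts the endpoint or service name along with the Site key, which is exactly the optimization just described:

    def resolve(search_engine, site_key, wanted, ask_nameserver):
        # Step one: ask the Search Engine, passing the endpoint or service
        # name along in case the engine already holds the answer.
        nameservers, cached = search_engine.lookup(site_key, wanted)
        if cached is not None:
            return cached                        # saved the second round trip
        # Step two: otherwise ask one of the Site's own nameservers directly.
        for ns in nameservers:
            answer = ask_nameserver(ns, wanted)
            if answer is not None:
                return answer
        return None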
With DNS gone, what do names look like? Not like DNS host names today, that's for sure. With no hierarchy anymore, there is nothing like .com.
Instead, the public PGP key becomes the name of a site. Names of entities within the site are added to that PGP key to become the full name.
So a hostname might look something like iBDgXXMoRBAC910cya9xTeh0ea26WgjeW6SSoVUB:www, where the first part encodes the PGP key of the Site while the www names the host or service within that Site.
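Pulling a name like that apart is trivial; everything before the first colon is the Site's key and the rest names something within the Site:

    def split_name(full_name):
        site_key, _, local = full_name.partition(':')
        return site_key, local

    split_name("iBDgXXMoRBAC910cya9xTeh0ea26WgjeW6SSoVUB:www")
    # -> ('iBDgXXMoRBAC910cya9xTeh0ea26WgjeW6SSoVUB', 'www')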
Certainly these are not names that people want to type very often. Ideally, never. Tools would be needed to deal with this. For mail readers, the obvious tool would be the address book or contacts. Users would see and use their own names for people they communicated with, not the PGP name. For other applications, the same idea. There would need to be a mapping between local names usable by people and the globally unique name containing that crypto naming gunk.
One of the things we do with DNS names that wasn't really intended in the original design is name services. For instance, www.froghouse.org names the web server at Froghouse and dns.froghouse.org names my DNS server. In fact, those two names resolve to the same machine, the same IP address, but I wanted to name the service.
DNS does include the idea of a mail service with its MX records. It seems like it would be worth the time to expand this idea of naming services and explicitly support it in a new design. How about the idea that a service lookup returns the TCP port to connect to as well as the host?
I don't have any specifics for this right now; I just wanted to insert the idea as a placeholder so it's not forgotten.
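Still, just to give the placeholder a shape, here's a toy of what a service lookup at a Site's nameserver might hand back; the table and names are entirely made up:

    # One Site, one machine, two services on different ports.
    services = {
        "www": ("the-one-machine", 80),
        "dns": ("the-one-machine", 53),
    }

    def lookup_service(name):
        # Returns (host, tcp-port) so the caller needs no well-known port list.
        return services.get(name)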
Nothing in the name system as described here guarantees that the Site you've reached is the one you want. Use of a cryptographic key as the name of a Site lets you believe with high confidence (with whatever confidence the crypto system is good for) that this is the same Site you talked with the last time you used the same key.
Thus, if you can find a way to verify you got the right Site, from then on you can believe. How to perform that out-of-band verification is outside the scope of this discussion. This is, in my opinion, how DNS security issues should have been approached too. Anyone who believes the answer they get from DNS with no additional verification is badly confused.
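The check itself is nothing more than remembering the key you verified and comparing it on later contact. A small sketch, assuming the address book mentioned earlier holds the verified keys:

    address_book = {}   # locally meaningful name -> Site key verified out of band

    def same_site_as_before(local_name, presented_key):
        # True only if this key matches the one recorded after that first,
        # out-of-band verification; an unknown name gets no benefit of the doubt.
        return address_book.get(local_name) == presented_key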
The question of economics may be the weakest part of this proposal. Sites run their own nameservers. That's good. And they make agreements with others to run secondary nameservers. Perhaps they trade being secondaries for each other, or they pay someone for the service. That's also straightforward.
What about the Search Engines though? Who runs those and why?
People do run web search engines of course. In the process they sell ads and collect information. Could that apply here as well?
Or perhaps people sell the service of running secondary nameservers for Sites and run Name Search Engines as well, simply as part of being in that business.
So what about running this name system with the current Internet Protocols (either IPv4 or IPv6, they're the same) rather than Nimrod? Everything described above could work the same with the exception of the net crawling function of the Search Engines. There are no Sites or site maps to allow a Search Engine to traverse the entire 'net, find Sites' nameservers, and gather the data it needs.
Instead, you might replace the net crawling with a web crawl. Build a web of HTML-like pages containing all the same information held by the Sites' nameservers. I see two issues with this. One is that you'd have to be careful to make sure the web was fully connected, with no islands. Search Engines need to be able to find everything, somehow. This comes along "for free" with Nimrod as part of how the underlying routing works. It would have to be maintained elsewise for IP.
The other issue is that you obviously can't use DNS names in the URLs (or equivalent). You'd have to use raw IP addresses and then you'd have an update issue. Oh well. When that gets to be too much of a pain, it's time to upgrade to a better Internet Protocol. I recommend Nimrod.
last updated: Fri Mar 11 11:33:08 2011 by David Bridgham