Desired Response
----------------

When a resolver is priming, it sends a query to one of the root server
addresses that it knows about. This is described in this IETF draft:

    https://tools.ietf.org/html/draft-ietf-dnsop-resolver-priming

You can create a query that looks like this with the "dig" command:

    $ dig @a.root-servers.net -t ns +norecurse +edns +nodnssec .

The "@a.root-servers.net" can be replaced with any name or address that
serves the root zone, for example "@bii.dns-lab.net".

The response contains the _names_ of the root servers in the answer
section:

    . 518400 IN NS a.root-servers.net.
    . 518400 IN NS b.root-servers.net.
    ...
    . 518400 IN NS m.root-servers.net.

This is the list of name servers that carry the root zone.

The response carries the _addresses_ of the root servers in the
additional section:

    a.root-servers.net. 518400 IN A    198.41.0.4
    b.root-servers.net. 518400 IN A    192.228.79.201
    ...
    m.root-servers.net. 518400 IN AAAA 2001:dc3::35

The additional section data is what the resolvers need to actually
perform recursion.

Serving the Additional Section
------------------------------

The root zone file has:

    net.                172800 IN NS   a.gtld-servers.net.
    ...
    a.gtld-servers.net. 172800 IN AAAA 2001:503:a83e::2:30
    ...

Because there are glue records for the "net" domain, BIND 9 will not
return the "a.root-servers.net" addresses in the additional section. In
the general case, being able to get to the "net" authority servers is
enough to eventually get to "a.root-servers.net", so the IP addresses
configured in the root zone are not strictly necessary glue.

In the case of root priming, however, we *do* want to see the addresses
for the "root-servers.net" servers. It is not technically necessary,
but resolvers expect this information.

The solution to this is to make the root servers answer for the
"root-servers.net" zone, as a sort of special case. When BIND 9 is
configured this way, it will add the glue to responses to the priming
query.

Note that NSD happily returns glue in the additional section whether or
not it is configured for the "root-servers.net" zone, so it needs no
special configuration for this.
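For concreteness, a minimal sketch of what this special-case BIND 9
configuration might look like, as a named.conf fragment (the zone file
names here are hypothetical):

    zone "." {
        type master;
        file "root.zone";              // the root zone itself
    };

    // Also answering for root-servers.net convinces BIND 9 to include
    // the server addresses as glue in the priming response.
    zone "root-servers.net" {
        type master;
        file "root-servers.net.zone";
    };

For the IANA root the second zone is "root-servers.net"; the Yeti
equivalents are discussed below.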
Solutions for Yeti
------------------

In the case of the Yeti testbed, we want the Yeti root servers to
respond to priming queries with the addresses of all Yeti root servers
in the additional section. This will make them operate as similarly to
the IANA root servers as possible.

Some possible approaches:

0. Do nothing. Here we assume that resolvers are smart enough to figure
   out the addresses of the Yeti root servers using normal resolution
   from the Yeti root servers that they know about already.

1. Use NSD, or other software that includes the glue addresses in the
   additional section. While using a name server that does what we want
   easily is attractive, name server diversity is important. A system
   that doesn't work with BIND 9 is probably not desirable.

2. Patch BIND 9 so that it includes the glue addresses in the
   additional section, or so that it can be configured to respond this
   way. Requiring a special, patched version of BIND 9 is probably even
   worse than not supporting it at all. (It might be possible to get
   this sort of change included in upstream BIND 9, but given that the
   use case is so narrow I suspect that it would be rejected.)

3. Add a zone file for each root server and answer for all of them at
   each Yeti server. This idea comes from Akira Kato, so I may not
   present it completely correctly. The approach here is that each name
   server would have a small zone file for "bii.dns-lab.net",
   "yeti-ns.wide.ad.jp", "yeti-ns.tisf.net", and so on. The Yeti root
   servers would then be configured to answer for these, which would
   convince BIND 9 to include their addresses in the additional
   section. This can be scripted, so while it is a bit messy with lots
   of little files, automation would hide the details. (A sketch of one
   such zone appears at the end of this note.)

4. Make a domain similar to "root-servers.net" and put all of the Yeti
   servers in that (like "root-servers.yeti.org" or similar). The idea
   here is to work very similarly to the IANA root servers, and answer
   for the Yeti root name servers.

Discussion
----------

Perhaps the best solution is the "do nothing" approach. I worry that
this will not work on the wider Internet, but that is not based on
science. Maybe this is a reasonable experiment to run... after the
basic Yeti setup has been operational for a while.

If stock BIND 9 is not a requirement, then I think the cleanest
solution is to simply include the IP addresses in the zone file served
by each Yeti root. Since the glue is not signed, it should be possible
for each Yeti root to act totally independently, and return whatever
response to the priming query it wants without any coordination.

If we assume stock BIND 9 is a requirement, then using separate zone
files gives us the most flexibility. It means that each Yeti root could
potentially return a different response to the priming query. This can
be considered a bug or a feature, depending on your goals.

If we move the Yeti root servers under a single domain, then we have to
collect the name servers into a zone file. It also makes signing the
names different: in the case of separate names, all trust flows from
the root; in the case of a single domain, the management of the keys
for the Yeti domain also needs to be considered. It is still possible
for different Yeti root servers to return different responses to the
priming query (indeed, they can even use different signing keys), but
this violates one of the architectural principles of DNS. (This
architectural principle has been thrown away by many, many DNS
operators and the DNS continues to function, so perhaps this is not a
real problem.)
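As an appendix of sorts, here is a sketch of the per-server zone idea
from option 3, using "bii.dns-lab.net" as the example. The SOA contents
and the AAAA address are placeholders (2001:db8::/32 is the
documentation prefix), not the server's real data:

    ; db.bii.dns-lab.net -- minimal zone whose only purpose is to let
    ; BIND 9 serve this server's address in the additional section.
    $TTL 172800
    @       IN  SOA  bii.dns-lab.net. hostmaster.dns-lab.net. (
                         1          ; serial
                         3600       ; refresh
                         900        ; retry
                         604800     ; expire
                         86400 )    ; negative-caching TTL
            IN  NS    bii.dns-lab.net.
            IN  AAAA  2001:db8::1   ; placeholder, not the real address

Each such zone would then be added to named.conf on every Yeti root:

    zone "bii.dns-lab.net" {
        type master;
        file "db.bii.dns-lab.net";
    };

Each Yeti root would carry one such zone per server name; generating
the set of zone files and the matching named.conf entries is the part
that can be scripted.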