Yeti DNS Project
--A Live IPv6-only Root DNS Server System Testbed

Preserving IANA NSEC Chain and ZSK RRSIGs

Posted on


The DNS Root Server system is an early form of the cloud model: it replicates the root zone file to enhance availability. To expand the capacity of the root server system, root anycast instances were deployed widely, given that the list of root servers serves as the proof of integrity of the root zone content. People trust the data by trusting the servers from which the data is pulled.

With DNSSEC, data can be authenticated by cryptographic signature, and people gain confidence in data integrity by validating it themselves. The place where the root zone file is replicated and stored becomes irrelevant. RFC 7706 is one example: it provides a mechanism for a “local” root server to give back the same answer as a “real” root server, with the same level of authentication. Although this approach brings some operational concerns and risks, it gains in performance by reducing latency, so it has become popular among some public DNS operators.

By contrast, Yeti provides a different zone file distribution model, introducing a new set of authoritative servers and instances for the root zone. Compared to RFC 7706, it is an efficient, large-scale way to replicate and distribute the root zone file in a straightforward manner. Imagine that you have more than 1000 resolvers in your network: you may prefer one root instance located in your network to managing 1000 “local” root servers.

However, compared to RFC 7706, the Yeti model cannot provide an unmolested DNSKEY RRset, apex RRset, or RRSIG. It has been questioned as an alternative root, since it has the ability to change TLD metadata and sign with alternative keys, and is regarded as violating the principle of keeping a unique identifier system. This is not a technical issue but an ethical risk that advocates of the Yeti model undertake.

To reduce this risk as much as possible, a proposal was formed to build a new root zone containing both Yeti and IANA keys and RRSIGs. The new root zone is divided into two parts. One part is the apex records: SOA, NS, and DNSKEY, with their RRSIGs signed by the Yeti keys (ZSK and KSK). The other part is the TLD metadata: DS, NSEC, NS, and A/AAAA glue, with their RRSIGs signed by the IANA ZSK (glue and NS records have no RRSIG). Note that the DNSKEY RRset includes the Yeti ZSK(s) and KSK(s) as well as the IANA ZSK(s).

Lab test and result

To validate this proposal, a lab test was done. In the normal DM operation loop, the IANA DNSKEY RRset and DNSSEC signatures are removed after downloading the root zone. In this test, we kept the IANA ZSK, the non-apex RRSIGs (signatures for DS and NSEC), and the NSEC records in the Yeti root zone.

Authority server setup

The routine for generation of a new Yeti root zone is as follows:

  • Download the latest IANA root zone (using AXFR from F.ROOT-SERVERS.NET, for instance);
  • Strip the apex SOA, NS, and DNSKEY records as well as their RRSIGs; keep the NSEC records, TLD NS records and glue, and the RRSIGs of non-apex RRs unchanged;
  • Put a new DNSKEY RRset into the zone containing the Yeti ZSK and KSK as well as the IANA ZSK;
  • Put in a new SOA RR, copying the IANA SOA SERIAL, setting the MNAME, and setting the RNAME to <dm-operator>, where <dm-operator> is currently one of “wide”, “bii” or “tisf”;
  • Put in a new NS RRset listing the existing 25 servers in the Yeti testbed;
  • Sign the apex SOA and NS RRsets with the Yeti ZSK, and sign the DNSKEY RRset with the Yeti KSK;
  • Publish the root zone to testing server A.
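The stripping step of this routine can be sketched in a few lines of Python over a presentation-format zone file (a simplified illustration with one record per line, not the DM's actual tooling):

```python
def strip_apex(zone_lines):
    """Drop apex SOA/NS/DNSKEY records and their RRSIGs; keep NSEC
    records, TLD data, glue, and the IANA-signed RRSIGs over
    non-apex RRs unchanged."""
    kept = []
    for line in zone_lines:
        fields = line.split()
        if len(fields) < 4:
            continue  # skip blanks in this simplified format
        owner, rtype = fields[0], fields[3]
        if owner == ".":
            if rtype in ("SOA", "NS", "DNSKEY"):
                continue
            if rtype == "RRSIG" and fields[4] in ("SOA", "NS", "DNSKEY"):
                continue
        kept.append(line)
    return kept

# toy zone: apex records plus one delegation ("..." stands for RDATA)
zone = [
    ".     86400 IN SOA   a.root-servers.net. nstld.verisign-grs.com. ...",
    ".     86400 IN RRSIG SOA 8 0 86400 ...",
    ".     86400 IN NS    a.root-servers.net.",
    ".     86400 IN NSEC  aaa. NS SOA RRSIG NSEC DNSKEY",
    ".     86400 IN RRSIG NSEC 8 0 86400 ...",
    "net.  86400 IN DS    35886 8 2 ...",
    "net.  86400 IN RRSIG DS 8 1 86400 ...",
    "net.  86400 IN NS    a.gtld-servers.net.",
]
for rr in strip_apex(zone):
    print(rr)
```

After this filter, the new DNSKEY RRset, SOA, and NS records are added and signed with the Yeti keys as described above.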

Below is a diagram of the constructed Yeti root zone. On the left is the IANA root zone; on the right is Yeti’s. Only one delegation is shown, to illustrate the mechanism. The (I) suffix means “IANA” and (Y) means “Yeti”:

SOA(I)                          SOA(Y)
RRSIG(. SOA)(ZSK(I))            RRSIG(. SOA)(ZSK(Y))

DNSKEY(I)                       DNSKEY(Y)
    KSK(I)                          KSK(Y)
    ZSK(I)                          ZSK(Y)
                                    ZSK(I)

. NS IANA_NS1                   . NS Yeti_NS1
. NS IANA_NS2                   . NS Yeti_NS2
. NS IANA_NS3                   . NS Yeti_NS3
RRSIG(. NS)(ZSK(I))             RRSIG(. NS)(ZSK(Y))

NSEC1(.->net.)                  NSEC1(.->net.)
RRSIG(NSEC1)(ZSK(I))            RRSIG(NSEC1)(ZSK(I))

net. NS NS7                     net. NS NS7
net. NS NS8                     net. NS NS8
net. DS                         net. DS
RRSIG(net. DS)(ZSK(I))          RRSIG(net. DS)(ZSK(I))

NSEC2(net.->.)                  NSEC2(net.->.)
RRSIG(NSEC2)(ZSK(I))            RRSIG(NSEC2)(ZSK(I))

Resolver setup

A DNSSEC-validating resolver B (BIND) was set up, using the Yeti KSK as its trust anchor. The glue records of Yeti_NS1, Yeti_NS2, and Yeti_NS3 were configured with the address of server A, so that all queries would be sent to server A. The rollover of the KSK and ZSK of the new Yeti root zone is expected to work, because the Yeti Multiple-ZSK experiment had already shown that many keys in the DNSKEY RRset are workable. The only impact is the size of the DNSKEY response.
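Resolver B’s configuration can be sketched as the following BIND fragment; the key string and file name here are placeholders for illustration, not the actual test values:

```
// named.conf fragment for resolver B (sketch; key data is a placeholder)
options {
    dnssec-validation yes;
};

trusted-keys {
    "." 257 3 8 "<base64 of the Yeti KSK public key>";
};

zone "." {
    type hint;
    file "yeti-hints";  // maps Yeti_NS1..Yeti_NS3 to server A's address
};
```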

Result analyses

By analyzing the log, we found that resolver B validated the response using the RRSIG made by the IANA ZSK (keyid=14796).

Resolver B’s DNSSEC log:

31-May-2017 15:46:49.278 dnssec: debug 3: validating net/DS: starting 
31-May-2017 15:46:49.278 dnssec: debug 3: validating net/DS: attempting positive response validation 
31-May-2017 15:46:49.278 dnssec: debug 3: validating net/DS: keyset with trust secure 
31-May-2017 15:46:49.278 dnssec: debug 3: validating net/DS: verify rdataset (keyid=14796): success 
31-May-2017 15:46:49.278 dnssec: debug 3: validating net/DS: marking as secure, noqname proof not needed

The current DNSKEY RRset of the IANA root zone is:

dig . dnskey +multi

.                       172800 IN DNSKEY 257 3 8 (
                                ) ; KSK; alg = RSASHA256; key id = 19036
.                       172800 IN DNSKEY 256 3 8 (
                                ) ; ZSK; alg = RSASHA256; key id = 14796

Conclusion and future work

There is an effort to generate and display a diff file between the IANA root zone and the Yeti root zone on the monitoring website. By reusing the IANA DNSSEC material, this proposal generates a Yeti root zone that includes the IANA ZSK and signatures, consequently reducing the size of the diff file (a smaller diff means less change). If the IANA ZSK and RRSIGs in the Yeti root zone can be viewed as a proof of integrity, then the proposal provides:

1) A guarantee that no edits or changes are made to existing TLD metadata (DS RRs);

2) A guarantee that no existing TLD is removed and no new TLD is added to the root zone;

3) Since the Yeti keys sign only the apex, multiple RRSIGs are avoided and the response size stays small.

Observation on Large response issue during Yeti KSK rollover

Posted on


Yeti successfully performed a KSK roll in 2016. During that first KSK rollover, it was discovered that a view added to a BIND 9 resolver did not use the same RFC 5011 timings as the other views, including the default view. An administrator might expect a BIND 9 resolver to handle the RFC 5011 KSK rollover identically for all views; instead, newly added views fail to resolve when the KSK roll happens. This was documented in a post on the Yeti blog.

In order to add more information to the earlier roll, we performed another KSK roll before the ICANN KSK roll begins (2017-10-11). We used phases similar to the IANA roll, although with different timings so that we could finish the roll faster. Our “slots” are 10 days long, and look like this:

         slot 1    slot 2    slot 3    slot 4    slot 5    slot 6    slot 7    slot 8    slot 9
  19444  pub+sign  pub+sign  pub+sign  pub+sign  pub+sign  pub       pub       revoke
  new              pub       pub       pub       pub       pub+sign  pub+sign  pub+sign  pub+sign

As with the first KSK roll, the Yeti root KSK uses the RSA/SHA-256 algorithm with a 2048-bit key, the same as the IANA root KSK. Note that the Yeti root ZSKs use 2048-bit keys (RSA/SHA-256) as well.

In the Multi-ZSK design, each Yeti DM uses its own ZSK to sign the zone. That means at least 3 ZSKs are in the DNSKEY RRset, or up to 6 when ZSK rollovers overlap. Taking the KSK rollover into consideration, there may be 8 keys (6 ZSKs and 2 KSKs) in the DNSKEY RRset, which produces a very large response to a DNSKEY query with the DO bit set. This suits the Yeti experiment, which looks for the impact of large DNS responses.

Monitoring setup

As in the first KSK roll, monitoring was set up on Yeti validating resolvers to capture validation failures. Besides DNSSEC validation, fragmentation and packet loss due to large DNS responses are the questions Yeti was started to investigate. We set up RIPE Atlas measurements, which are able to spot failures such as timeouts for DNSKEY queries over UDP.

For each Yeti server, a measurement was set up asking for 100 IPv6-enabled probes from 5 regions, each probe sending a DNS query for the DNSKEY RRset over UDP with the DO bit set every 2 hours.

The measurement specification, in the format of the RIPE Atlas API, is as follows:

curl --dump-header - -H "Content-Type: application/json" -H "Accept: application/json" -X POST -d '{
 "definitions": [
  {
   "target": "240c:f:1:22::6",
   "af": 6,
   "query_class": "IN",
   "query_type": "DNSKEY",
   "query_argument": ".",
   "use_macros": false,
   "description": "DNS measurement to 240c:f:1:22::6",
   "interval": 7200,
   "use_probe_resolver": false,
   "resolve_on_probe": false,
   "set_nsid_bit": false,
   "protocol": "UDP",
   "udp_payload_size": 4096,
   "retry": 3,
   "skip_dns_check": false,
   "include_qbuf": false,
   "include_abuf": true,
   "prepend_probe_id": false,
   "set_rd_bit": false,
   "set_do_bit": true,
   "set_cd_bit": false,
   "timeout": 5000,
   "type": "dns"
  }
 ],
 "probes": [
  {"tags": {"include": [], "exclude": []}, "type": "area", "value": "West", "requested": 20},
  {"tags": {"include": [], "exclude": []}, "type": "area", "value": "North-Central", "requested": 20},
  {"tags": {"include": [], "exclude": []}, "type": "area", "value": "South-Central", "requested": 20},
  {"tags": {"include": [], "exclude": []}, "type": "area", "value": "North-East", "requested": 20},
  {"tags": {"include": [], "exclude": []}, "type": "area", "value": "South-East", "requested": 20}
 ],
 "is_oneoff": false,
 "stop_time": 1501422358
}'

Given that there are 25 servers, 25 RIPE Atlas measurements were set up during the rollover (from 2017-05-08 to 2017-05-25). Result files in JSON format were collected.

Note that although each measurement asks for 100 IPv6-enabled probes, qualified probes may be lacking in some regions. In most cases around 66 probes were allocated to each measurement, which is acceptable for this experiment.

Data analysis

The basic plan is simply to count the number of failures at each DNS response size during the experiment. The failure rate per response size is the metric used to measure whether there is a correlation between failure rate and DNS response size.
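The metric can be sketched as the following computation (assuming a simplified result record format for illustration, not the exact Atlas JSON schema):

```python
from collections import defaultdict

def failure_rates(results):
    """Compute the timeout rate for each observed response size.

    `results` is a list of simplified records such as
    {"size": 1414, "timeout": False}.  The real RIPE Atlas JSON uses
    different field names; only the shape of the computation matters.
    """
    total = defaultdict(int)
    failed = defaultdict(int)
    for r in results:
        total[r["size"]] += 1
        if r["timeout"]:
            failed[r["size"]] += 1
    return {size: failed[size] / total[size] for size in total}

# hypothetical sample: 60 large answers with 4 timeouts, 60 small with none
sample = ([{"size": 1975, "timeout": i % 15 == 0} for i in range(60)]
          + [{"size": 1139, "timeout": False} for _ in range(60)])
print(failure_rates(sample))
```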

Simple Data pre-processing

Before proceeding with the analysis, some observations were made in order to filter out bad samples and results.

  • In each measurement there are always a few probes that failed almost all of the time (>80% failure rate). They are labeled as weak probes and excluded from that measurement.
  • Two probes (prb_id: 17901, 12786) behaved oddly: they reported timeouts for the small response (1139 octets) but worked for the large response (1975 octets). They are excluded from all measurements.
  • Two measurements (msm_id: 8552432, 8552458) reported far more failures and weak probes than the others; perhaps there are routing problems on the path to those Yeti servers. They are excluded from further analysis.

DNS response size

Although it is possible for 8 keys (6 ZSKs and 2 KSKs) to be in the DNSKEY RRset, with many possible combinations, during the KSK rollover experiment we actually observed three cases:

  • 3 ZSK and 2 KSK which produces DNSKEY response of 1975 octets;
  • 3 ZSK and 1 KSK which produces DNSKEY response of 1414 octets;
  • 2 ZSK and 1 KSK which produces DNSKEY response of 1139 octets;
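A quick sanity check on these numbers (plain arithmetic on the observed sizes above): adding one ZSK should cost roughly one 2048-bit DNSKEY record, while adding a KSK during the roll also adds a second RRSIG over the RRset:

```python
# (number of ZSKs, number of KSKs) -> observed DNSKEY response size (octets)
sizes = {(2, 1): 1139, (3, 1): 1414, (3, 2): 1975}

per_zsk = sizes[(3, 1)] - sizes[(2, 1)]  # cost of one extra ZSK
per_ksk = sizes[(3, 2)] - sizes[(3, 1)]  # cost of one extra KSK

# ~275 octets matches one 2048-bit RSA DNSKEY RR; the larger KSK step
# suggests a key plus an additional RRSIG (~286 octets) over the RRset.
print(per_zsk, per_ksk)  # 275 561
```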

WIDE rolls its ZSK every month; BII rolls its ZSK every two weeks; and TISF does not use its own key but reuses the WIDE ZSK instead. During the measurements, only BII rolled its key, and that rollover overlapped with the KSK rollover.

Note that other DNS response sizes, such as 1419, 1430, 1143, and 1982 octets, were also observed on other servers.

Counting Failures (timeout)

In every time slot (2 hours), around 66 probes sent queries to each Yeti root server. We counted the number of failures and the total number of queries, categorized by packet size. The statistics are shown in the following table:

Packet size   Failures   Total queries   Failure rate
1139          274        64252           0.0042
1143          22         6949            0.0032
1414          3141       126951          0.0247
1419          113        13535           0.0083
1430          33         2255            0.0146
1975          2920       42529           0.0687

The following figure vividly shows the failure rate changing during 1 KSK rollover and 2 ZSK rollovers (note that the monitoring was set up when the two KSKs were already published). It covers four periods: 2 KSK + 3 ZSK, 1 KSK + 3 ZSK, 1 KSK + 2 ZSK, and 1 KSK + 3 ZSK.

Figure 1 Failure rate change during Yeti KSK rollover

In ICANN’s KSK rollover plan, the packet size will exceed the 1280-octet limit, reaching 1414 octets on 2017-09-19 and 1424 octets on 2018-01-11. This means around 2% of IPv6 resolvers (or IPv6 DNSKEY queries with the DO bit set) will experience timeouts. One study shows that 17% of resolvers cannot ask a query over TCP. So in an extreme case, around 0.34% of all resolvers in the world would fail to validate the answers. 0.34% of millions is not a trivial number.

Figure 2 Impact on the KSK Rollover Process (ICANN)

It should also be noted that during the KSK rollover and other experience on the Yeti testbed, no error report was received that was directly due to the packet size problem. So the impact of the large-packet issue should be further observed and evaluated.


The monitoring results show that, statistically, larger packets trigger a higher failure rate (up to 7%) due to IPv6 fragmentation issues, which accordingly increases the probability of retries and TCP fallback. Even within 1500 bytes, when the response size reaches 1414 bytes the failure rate reaches around 2%. Note that the ICANN KSK rollover will produce packets exceeding 1414 bytes.

To keep the anomaly rate inside DNS transactions below 10%, asking for TCP support seems like a lightweight solution for now. To avoid TCP fallback entirely, one can hold a certain size as a boundary for responses, as 512 octets was held for a long time in the DNS specification. In the IPv6 era, a new boundary of 1220 octets (the IPv6 TCP MSS value) was proposed in APNIC blog posts. I would like to ask: is it worthwhile to take heavier measures on this issue? Does it sound like a plan to use a stateful connection, such as TCP or HTTP, in the first place for queries causing large responses, or to fragment the packets at the DNS layer?

Monitoring on Yeti Root Zone Update

Posted on


Regarding the timing of root zone fetches and SOA updates, each Yeti DM checks the root zone serial hourly to see if the IANA root zone has changed, on the following schedule:

DM Time
BII hour + 00
WIDE hour + 20
TISF hour + 40

A new version of the Yeti root zone is generated if the IANA root zone has changed. In this model, the root servers are expected to pull each new update of the zone consistently from one DM, because 20 minutes should be enough for the root zone update to reach all root servers in an ideal environment.

There is a finding from the past that one Yeti root server had a long delay updating the root zone, first reported in the Yeti experience I-D:

It was observed that one server on the Yeti testbed has a bug in SOA update, with more than 10 hours of delay. It runs Bundy 1.2.0 on FreeBSD 10.2-RELEASE. A workaround is to check the DMs’ SOA status on a regular basis, but some work is still needed to find the bug in the code path and improve the software.

To better understand the issue, we designed a monitoring test to trace the zone update behavior and DM selection of each Yeti root server. There are now some preliminary results.


To set up this monitoring, there are three main steps:

1. In the DM loop, we ask each DM operator to use a special RNAME in the SOA record of the Yeti root zone, one per DM. By this setting, the root zone is marked with the DM from which each root server pulled it.

2. In the monitoring loop, we simply query the SOA record of each root server every minute. Information such as the RNAME, the SOA serial, and the timestamp of the response is recorded.

3. Based on the collected data, it is easy to figure out, for each server and each SOA serial, when the root zone was updated and from where the zone was pulled.

Note that the time at which we observe a new zone update on a particular server includes the RTT and the timing error introduced by the one-minute measurement interval. This is acceptable and does not affect the results and conclusions.
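The delay computation in step 3 can be sketched as follows (the server names and timestamps are hypothetical, in minutes since the serial first appeared):

```python
def update_delays(first_seen):
    """Delay of server i for one SOA serial: d_i = T_i - T_min,
    where T_i is the minute at which the monitor first saw the new
    serial on server i and T_min is the earliest such time."""
    t_min = min(first_seen.values())
    return {server: t - t_min for server, t in first_seen.items()}

# hypothetical first-sighting times (minutes) for four servers
obs = {"ns-a": 12, "ns-b": 12, "ns-c": 33, "ns-d": 52}
print(update_delays(obs))  # ns-a and ns-b pulled first; ns-d lagged 40 min
```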

Preliminary result

There are three figures below presenting the preliminary results for three consecutive SOA serials of Yeti. In the bar charts, the x-axis has one bar per root server, and the bar’s value on the y-axis is the delay in minutes. We calculated the delay d_i of server i using the simple formula d_i = T_i - T_min, where:


  • T_i is the timestamp at which root server i updated the zone
  • T_min is the smallest such timestamp among all servers for the same SOA serial

Note that we added 10 minutes to each value to make the bar chart easier to read when figuring out where each server pulled the zone, so the actual delay is the plotted value minus 10.

Figure 1 The latency of SOA 2017032102

Figure 2 The latency of SOA 2017032200

Figure 3 The latency of SOA 2017032201

Intuitively, there are some findings in these results:

  • Two servers still endure high latency in zone updates, as we reported before. This issue is not resolved.
  • Besides those, half of the Yeti servers see more than 20 minutes of delay, some even 40 minutes. One possible reason is that a server failed to pull the zone from one DM and turned to another DM, which introduces the delay.
  • In Figure 2, one server has two bars in the chart, which means that for SOA serial 2017032200 it pulled the zone twice: first from the TISF DM, then from the BII DM. That is weird.
  • Also in Figure 2, even within the same 20-minute time frame, not all servers pull from a single DM. Some servers may not use a first-come-first-served strategy after receiving the NOTIFY; they may choose a DM based on other metrics such as RTT, or on manual preference.

Another finding: Double SOA RR

During the test, we happened to find that one server had two SOA records due to the multiple-DM setting in the Yeti testbed. (We once reported an IXFR fallback issue caused by the multiple-DM model.)

$ dig . soa +short
2017032002 1800 900 604800 86400
2017032001 1800 900 604800 86400

As far as I know, that server ran Knot 2.1.0 at the time. We will check whether everything goes well after it updates to the latest version.

BIND EDNS Fallback and DNSSEC issue

Posted on

About EDNS Fallback

EDNS fallback is briefly defined in RFC 6891: a requester can detect and cache information about whether a remote end supports EDNS(0) or not. This behavior avoids fallback delays in the future. According to one of ISC’s documents, BIND’s EDNS fallback follows the process described below:

1) Query with EDNS, advertising size 4096, DO (DNSSEC OK) bit set
2) If no response, retry with EDNS, size 512, DO bit set
3) If no response, retry without EDNS (no DNSSEC, and buffer size maximum 512)
4) If no response, retry the query over TCP
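The four-step ladder can be modeled as a list of progressively weaker query attempts (a sketch of the described behavior, not BIND’s actual code):

```python
# One rung per retry step: EDNS on/off, advertised UDP size, DO bit, TCP
LADDER = [
    {"edns": True,  "size": 4096, "do": True,  "tcp": False},  # step 1
    {"edns": True,  "size": 512,  "do": True,  "tcp": False},  # step 2
    {"edns": False, "size": 512,  "do": False, "tcp": False},  # step 3
    {"edns": False, "size": 512,  "do": False, "tcp": True},   # step 4
]

def query_with_fallback(send):
    """Walk down the ladder until `send` returns a response; return the
    response together with the rung that finally worked."""
    for rung in LADDER:
        response = send(rung)
        if response is not None:
            return response, rung
    return None, None

def drops_edns(rung):
    """A middlebox that silently drops every EDNS query."""
    return "plain DNS answer" if not rung["edns"] else None

response, rung = query_with_fallback(drops_edns)
# The resolver ends up on a rung with no DO bit: the answer carries no
# DNSSEC records, so later validation can only fail.
print(rung)
```

This illustrates the failure mode described next: once the resolver lands on a non-EDNS rung and caches that choice, DNSSEC material can no longer be requested.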

The merit of EDNS fallback is to identify the capability of the remote server and shorten delays with fewer retries. But if an intermittent network causes packet loss, or if DNS responses are manipulated, it can result in SERVFAILs because servers that do support EDNS get marked as EDNS-incapable.

A failure case observed in Yeti resolver

All Yeti resolvers are required to be DNSSEC-aware. It was reported that one resolver running BIND 9.11.0-P2 (call it R1) in China received many SERVFAIL responses due to EDNS fallback. From the debug information, we found that this resolver had the following experience:

1) A client tries to resolve a domain via R1;
2) R1 gets a (modified) response and starts DNSSEC validation for it;
3) R1 queries the DS record via one of the NSes of .com but gets a modified response: an A record pointing to x.x.x.x;
4) R1 tries all the other NSes of .com, and gets modified answers too;
5) R1 falls back to querying the DS record with an EDNS0 buffer size of 512 bytes, but still gets the modified response;
6) R1 falls back again, querying the DS record without the EDNS0 option, and receives the same modified response;
7) R1 cannot validate the answer, and the client gets SERVFAIL.

But when any client tried to resolve other, normal domains, R1 got the right responses for both A and AAAA records; yet during the DNSSEC validation process, R1 sent the DS query without the EDNS0 option, the validation failed, and the client got SERVFAIL. Here is a log of that process:

15-Feb-2017 13:24:01.203 dnssec: debug 3: validating starting
15-Feb-2017 13:24:01.203 dnssec: debug 3: validating attempting negative response validation
15-Feb-2017 13:24:01.203 dnssec: debug 3:   validating net/SOA: starting
15-Feb-2017 13:24:01.203 dnssec: debug 3:   validating net/SOA: attempting insecurity proof
15-Feb-2017 13:24:01.203 dnssec: debug 3:   validating net/SOA: checking existence of DS at 'net'
15-Feb-2017 13:24:01.203 dnssec: debug 3:   validating net/SOA: insecurity proof failed
15-Feb-2017 13:24:01.203 dnssec: info:   validating net/SOA: got insecure response; parent indicates it should be secure
15-Feb-2017 13:24:01.203 dnssec: debug 3: validating in authvalidated
15-Feb-2017 13:24:01.203 dnssec: debug 3: validating authvalidated: got insecurity proof failed
15-Feb-2017 13:24:01.203 dnssec: debug 3: validating resuming nsecvalidate
15-Feb-2017 13:24:01.203 dnssec: debug 3: validating nonexistence proof(s) not found
15-Feb-2017 13:24:01.203 dnssec: debug 3: validating checking existence of DS at 'net'
15-Feb-2017 13:24:01.203 dnssec: debug 3: validating checking existence of DS at ''
15-Feb-2017 13:24:01.203 dnssec: debug 3: validating continuing validation would lead to deadlock: aborting validation
15-Feb-2017 13:24:01.203 dnssec: debug 3: validating deadlock found (create_fetch)

It is obvious that BIND 9 gets confused about EDNS support and that this breaks later DNSSEC lookups. The author’s intuition is that all BIND 9 resolvers deployed in China may be affected by this issue, which would explain the low penetration of, and complaints about, DNSSEC in that region. In general, this bug may make BIND 9 vulnerable to an on-path DoS attack against DNSSEC-aware resolvers.

Patch to this issue

After locating the problem, we contacted ISC and received the following patch to fix it:

diff --git a/lib/dns/resolver.c b/lib/dns/resolver.c
index f935a67..5ca9c47 100644
--- a/lib/dns/resolver.c
+++ b/lib/dns/resolver.c
@@ -8145,6 +8145,7 @@ resquery_response(isc_task_t *task, isc_event_t *event) {
 		dns_adb_changeflags(fctx->adb, query->addrinfo,
+#if 0
 	} else if (opt == NULL && (message->flags & DNS_MESSAGEFLAG_TC) == 0 &&
 		   !EDNSOK(query->addrinfo) &&
 		   (message->rcode == dns_rcode_noerror ||
@@ -8169,6 +8170,7 @@ resquery_response(isc_task_t *task, isc_event_t *event) {
 		dns_adb_changeflags(fctx->adb, query->addrinfo,


EDNS fallback was proposed for good reasons, but it may introduce false positives and collateral impact under temporary network failures or malicious manipulation. When the name servers of a major TLD like .com or .net are marked EDNS-incapable, it becomes a disaster for validating resolvers.

One intuitive idea is to stop marking TLD name servers as EDNS-incapable, given that 7040 of 7060 (99.72%) of those name servers support EDNS. Alternatively, the fallback could be turned off for DS queries (queries to the parent).

Note for coming ksk rollover experiment

Posted on

A newly generated KSK will be published in the Yeti root zone today for the experiment. Volunteer resolvers are welcome to join this test. Some notes for your information:

1) Two actions:

  • A new key (key tag 59302) will be published today, at serial 2017030200.

  • The document and key file in the GitHub repo and on the Yeti website will be updated to contain the two keys 10 days later (2017-03-12), leaving 10 days to welcome new resolvers to this experiment.

2) About the timeline:

Slot 1: 2017-02-20 to 2017-03-01   change the RRSIG validity period
Slot 2: 2017-03-02 to 2017-03-11   publish the new KSK
Slot 3: 2017-03-12 to 2017-03-23   publish the new KSK
Slot 4: 2017-03-24 to 2017-04-03   publish the new KSK
Slot 5: 2017-04-03 to 2017-04-13   publish the new KSK
Slot 6: 2017-04-14 to 2017-04-23   sign with the new KSK
Slot 7: 2017-04-24 to 2017-05-03   sign with the new KSK
Slot 8: 2017-05-04 to 2017-05-13   revoke the old KSK
Slot 9: 2017-05-14 to 2017-05-23   no longer publish the old KSK

3) For BIND users:

In the last KSK rollover experiment, we found that multiple views in BIND may cause problems during the rollover period. Recently ISC published a post to explain it and to ask BIND users to be aware of this behavior during the KSK rollover.

4) For new resolvers

If you would like to join the experiment, please follow the setup instructions and complete them before 2017-03-12, because the page will then be changed to contain the two keys for newcomers to start with.

Please let us know if you find something weird during the experiment.

5) Reference

Second KSK rollover experiment in Yeti testbed

Yeti DNS-over-TLS public resolver

Posted on


There is a new DNS-over-TLS public DNS resolver, and it uses the Yeti root. You want explanations? You’re right.

First, about DNS-over-TLS. The DNS (Domain Name System) protocol is a critical part of the Internet infrastructure. It is used for almost every transaction on the Internet. By default, it does not provide any privacy (see RFC 7626 for a complete discussion of DNS privacy considerations). Among its weaknesses is the fact that, today, DNS requests and responses are sent in the clear, so any sniffer can learn which domains you are interested in. To address this specific problem, a standard for encryption of DNS requests and responses, using the well-known protocol TLS (Transport Layer Security), has been developed. The standard is in RFC 7858.

As of today, very few DNS resolvers accept DNS-over-TLS. The typical ISP resolver and the big public resolvers don’t use it. Sadly, this is also the case for resolvers claiming to provide a service for people who do not trust the other resolvers. (See an up-to-date list of existing public resolvers.)

And Yeti, what is it? It is an alternative DNS root which focuses not on creating “dummy” TLDs and selling them, but on technical experimentation with the DNS root service, experimentation that cannot be done on the “real” root, which is far too sensitive. Note that until now there was no public Yeti resolver: the only way to use the Yeti root was to configure your own resolver to use it.

But first, a warning: Yeti is a technical experiment, not a political one. Be aware that DNS queries to the Yeti root name servers are stored and studied by researchers. (The same is true of the “real” root, by the way, not to mention unofficial uses such as MoreCowBell.)

Since there are few DNS-over-TLS resolvers, and in order to gather more information from experience, we have set up a public DNS-over-TLS resolver using the Yeti root. It answers on the standard DNS-over-TLS port, 853. It is IPv6-only, which makes sense for Yeti, whose name servers use only IPv6.

Two warnings: it is an experimental service, managed only on a “best effort” basis, and since it sends requests to the Yeti root, the user’s data is captured and analyzed. So it exists to test privacy-enhancing techniques technically, not to provide actual privacy. (We would be glad to see a real privacy-enabled public DNS resolver, with DNS-over-TLS and other features.)


Today, most DNS clients cannot speak DNS-over-TLS. If you want to use it and don’t know of any DNS-over-TLS clients, you can find some listed on the DNS privacy portal.

One way to use this service is as a forwarder for a local resolver. The Unbound server can do that with a setup like:

server:
  auto-trust-anchor-file: "autokey/yeti-key.key"
  ssl-upstream: yes

forward-zone:
  name: "."
  #forward-host: "" # Or the IP address:
  forward-addr: 2001:4b98:dc2:43:216:3eff:fea9:41a@853
  forward-first: no

If you have the getdns utilities installed (for instance via the Debian package getdns-utils), you can test the resolver with the command getdns_query:

% getdns_query  @2001:4b98:dc2:43:216:3eff:fea9:41a -s -l L AAAA
      "address_data": <bindata for 2a04:4e42:1b::201>,
      "address_type": <bindata of "IPv6">

If you use the proxy Stubby, you can run it with:

% stubby  @2001:4b98:dc2:43:216:3eff:fea9:41a -L

(Or put similar arguments in the Stubby configuration file.)

Good luck with this service and, if there is a problem, do not hesitate to ask details and/or help on the Yeti mailing lists.


The public resolver itself is implemented with Unbound. Here is its configuration:

server:
  use-syslog: yes
  root-hints: "yeti-hints"
  auto-trust-anchor-file: autokey/yeti-key.key
  interface: 2001:4b98:dc2:43:216:3eff:fea9:41a@853
  qname-minimisation: yes
  harden-below-nxdomain: yes
  harden-referral-path: yes
  harden-glue: yes
  ssl-service-key: "/etc/unbound/tls-server.key"
  ssl-service-pem: "/etc/unbound/tls-server.pem"
  ssl-port: 853
  access-control: ::0/0 allow
  log-queries: yes

As you can see, the requests (query name and source IP address) are logged locally (see the privacy warning above) but not transmitted; the query name, however, is sent to the Yeti coordinators.

(You can also see that, today, QNAME minimisation (RFC 7816) is activated on this resolver, thus limiting the amount of data sent to the Yeti root name servers.)

The Collection of Yeti Technical activities and Findings

Posted on

The Yeti project is a live testbed. This page serves as an all-in-one collection of Yeti work: Yeti documents, the various results we have gotten from experiments, software tools, and observations made during the operation of the Yeti testbed.

1. Project & Testbed Description

2. Yeti operational findings

During the setup and operation of the Yeti project, a number of issues were discovered and documented that are unrelated to specific experiments, but were nevertheless interesting.

It is noteworthy that all of the name server implementations used in the project received fixes or changes based on this work, including BIND 9, NSD, Knot, PowerDNS, and Microsoft DNS.

IPv6 Fragmentation Issues

IPv6 fragmentation is a concern with large DNS responses. The Yeti experiment produces responses larger than 1600 bytes, so it is affected.

Via tests and discussion on the Yeti mailing list, we have identified two issues regarding IPv6 fragmentation, relevant not only to the DNS root but to the DNS in general.

  • One issue is that the stateless model of UDP-based applications like DNS makes it difficult to use ICMP/ICMPv6 signaling. More information: APNIC article on IPv6 fragmentation.

  • Another issue is the coordination between the TCP MSS and the IPV6_USE_MIN_MTU socket option: one TCP segment may be fragmented into two IP packets, one of which may be dropped along the path. Please see “TCP and MTU in IPv6”, presented by Akira Kato at the 2016 Yeti workshop.

Root Server Glue and BIND 9

It was discovered that BIND 9 does not return glue for the root zone unless it is also configured as authoritative for the zone containing the servers themselves. This works for the IANA root, since the root servers are authoritative for both the root and the ROOT-SERVERS.NET domain. For Yeti it does not, since there is no single Yeti zone: all of the Yeti name servers are managed under independent delegations.

This issue was discussed on the Yeti discuss list:

The BIND 9 team rejected the idea of changing the BIND 9 behavior:

However the Yeti project produced a patch for BIND 9 which some operators now use:

The discussion of the root naming issue is introduced in the Yeti experience I-D:

dnscap Losing Packets

One of the Yeti participants noticed that dnscap, a tool written to capture DNS packets on the wire, was dropping packets:

Workarounds were found for Yeti, although the dnscap developers continued to research and eventually discovered a fix:

- Use helper library `pcap-thread` when capturing to solve
  missing packets during very low traffic

libpcap Corrupting pcap output

One of our systems had a full disk and ended up with corrupted pcap files from dnscap. We tracked that down to an issue in libpcap, where the underlying writes were not being properly checked. A fix was made and submitted to the upstream library. While not a perfect solution, it is the best that can be done with the underlying I/O API as well as the published API within libpcap:

RFC 5011 Hold-Down Timer in BIND 9 and Unbound

The first KSK roll on the Yeti project was not carefully planned, but rather handled as the KSK roll for any zone. Because of this we encountered problems with DNS resolvers configured to use RFC 5011. RFC 5011 automatically updates the trust anchors for a zone, but requires that the new KSK be in place for 30 days. What ended up happening is that BIND 9 continued to function, because it does not use the 30 day hold-down timer, but that Unbound stopped working, because it does (as per the recommendation in the RFC).
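The difference in behavior comes down to the RFC 5011 "add hold-down" rule. A simplified sketch of the rule Unbound enforced and BIND 9 skipped (the real RFC 5011 state machine has more states than this):

```python
from datetime import datetime, timedelta

# RFC 5011 add hold-down: a newly published KSK may only be used as
# a trust anchor after it has been continuously present, and validly
# signed, for 30 days.
ADD_HOLD_DOWN = timedelta(days=30)

def is_trusted(first_seen, now):
    """True once the new key has been visible for the hold-down period."""
    return now - first_seen >= ADD_HOLD_DOWN

first_seen = datetime(2016, 8, 30)  # e.g. the day a new Yeti KSK appeared
```

A resolver applying this rule (Unbound) rejects the new KSK if the roll completes inside the 30-day window, while a resolver skipping it (BIND 9 at the time) carries on.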

Corner Case: Knot Name Compression & Go dns Library

A Yeti participant discovered problems in both a popular Go DNS library and a popular authoritative DNS server, Knot, when using Knot to serve the Yeti root zone and querying the server with a Go program:

The problem was fixed by the developers of these open source projects. There were no reports of this affecting end-users.

NSD Glue Truncation

NSD is configured by default to send minimal responses, and needs to be re-compiled in order to send complete glue:

3. Yeti experiment and findings

The Yeti root testbed is designed for experiments, and findings are expected. We have summarized all experiments and findings in section 4 of draft-song-yeti-testbed-experience. Some important experiments related to root key management are introduced here with detailed reports:

Multiple ZSK Experiment

The Multi-ZSK (MZSK) experiment was designed to test operating the Yeti root using more than a single ZSK. The goal was to have each distribution master (DM) have a separate ZSK, signed by a single KSK. This allows each DM to operate independently, each maintaining their own key secret material.

A description of the MZSK experiment and the results can be found in Yeti Project Report: Multi-ZSK Experiment (MZSK).

Of particular interest, the experiment triggered a problem with IXFR for some software, the results of which are documented in An IXFR Fallback to AXFR Case.

Big ZSK Experiment

The Big ZSK experiment was designed to test operating the Yeti root with a 2048-bit ZSK. This was based on Verisign’s announcement that they were going to change the ZSK size of the IANA root to 2048 bits.

A description of the Big ZSK experiment and the results can be found in the Yeti DNS Project GitHub repository.

KSK Roll Experiment

Since ICANN is going to start the KSK rollover on September 19, 2017, the Yeti KSK roll experiment was designed to perform a KSK roll for the Yeti root and observe the effects. One major goal was to deliver useful feedback before the IANA KSK roll. A significant result was that DNSSEC failures occur if a new view is added to a BIND server after the KSK roll has started.

For more information:

  1. The Yeti KSK rollover experiment plan is documented in:

  2. A detailed report is published in:

4. Yeti Data Analysis

So far we have only some preliminary analysis of the Yeti traffic collected from Yeti servers, briefly introduced in the presentation Davey Song gave at the 2016 Seoul workshop. We expect more information to be dug out of the traffic in 2017, as more resources are devoted to this work.

5. Software Tools

BII has written a number of programs during the course of running the Yeti project.

PcapParser: Easy Handling of Large DNS Captured Packets

In Yeti, as with many DNS operations, we use the pcap format to store captured DNS traffic. Because Yeti is concerned with large packets, we often have DNS messages that are fragmented or sent via TCP. These are hard to analyze, so we wrote a tool to convert them into defragmented, UDP-only pcap files.

As part of this work we are pushing the IPv6 defragmentation code back into the underlying gopacket library.
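The core of the defragmentation step can be shown with a toy reassembler (a simplified sketch; the real code in PcapParser and gopacket must also track fragment IDs, the IPv6 fragment header, and overlapping fragments):

```python
def reassemble(fragments):
    """fragments: list of (byte_offset, payload) tuples from one
    fragmented datagram; returns the reassembled payload."""
    buf = bytearray()
    for offset, payload in sorted(fragments):
        if offset != len(buf):  # a gap means a fragment is missing
            raise ValueError("missing fragment at offset %d" % len(buf))
        buf.extend(payload)
    return bytes(buf)
```

For example, `reassemble([(4, b"fragment"), (0, b"big ")])` yields `b"big fragment"` regardless of arrival order.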

ymmv: Easy and Safe Comparison of IANA and Yeti Traffic

Yeti needs as much traffic as possible, ideally real user query data. Because it is often unacceptable to use Yeti in production environments, the ymmv program was created. It sends the same queries to the Yeti servers that a resolver sends to the IANA servers, compares the replies, and notes any differences.
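The comparison step can be sketched as follows (hypothetical data structures for illustration, not ymmv's real internals):

```python
def diff_answers(iana, yeti):
    """Compare two answers, each a dict with an 'rcode' string and
    an 'rrset' set of RR strings; returns a list of differences."""
    diffs = []
    if iana["rcode"] != yeti["rcode"]:
        diffs.append(("rcode", iana["rcode"], yeti["rcode"]))
    if iana["rrset"] != yeti["rrset"]:
        # record what only one side returned
        diffs.append(("rrset", iana["rrset"] - yeti["rrset"],
                      yeti["rrset"] - iana["rrset"]))
    return diffs
```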

6. Other Resources

Scoring the Yeti DNS Root Server System

Posted on

1. Introduction

Yeti DNS Project [1] is a live DNS root server system testbed which is designed and operated for experiments and testing. Many people are curious how it works in an environment which is IPv6-only, with more than 13 NS servers, with multiple signers, and with ZSK/KSK rollover testing. One key issue here is large DNS responses, because many Yeti experiments lead to large DNS responses - even more than 1600 octets.

Why large DNS responses are relevant and how DNS handles large responses is introduced comprehensively by Geoff’s articles published in the APNIC blog, Evaluating IPv4 and IPv6 packet fragmentation [2] and Fragmenting IPv6 [3] (thanks Geoff!). More recently, Geoff published a pair of articles examining root server system behavior, Scoring the Root Server System [4] and Scoring the DNS Root Server System, Pt 2 - A Sixth Star? [5]. In these articles, a scoring model is proposed and applied to evaluate how the current 13 DNS root servers handle large responses.

Triggered by Geoff’s work [4] and [5], we formed the idea of scoring the Yeti DNS root server system using the same testing and metrics. The results are summarized in this document.

2. Repeat the Tests on the IANA DNS Root Servers

We first repeat the testing on the IANA DNS root servers. We expect to confirm the published results, but may also see some new findings from different vantage points and metrics. Using the same approach, we use the dig command to send queries against each root server with a long query name for a non-existent domain name, like this, to get a response of 1268 octets:

dig aaaaaaaa.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +dnssec @    +bufsize=4096
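The shape of such a query name follows from two RFC 1035 limits: labels of at most 63 octets, and a total wire-format name of at most 255 octets. A small sketch of those limits (our own illustration, not the exact name used above):

```python
def wire_length(name):
    """Wire-format length of a domain name: each label costs
    len(label) + 1, plus one octet for the final root label."""
    labels = name.rstrip(".").split(".")
    return sum(len(l) + 1 for l in labels) + 1

def max_qname(fill="a"):
    """A maximal query name: three 63-octet labels plus one
    61-octet label comes to exactly 255 octets on the wire."""
    return ".".join([fill * 63] * 3 + [fill * 61]) + "."
```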

And what we see from each root server is shown in Table 1.

Root Response size(v4) Truncate (v4) Fragment (v4) TCP MSS (v4) Response size(v6) Truncate (v6) Fragment (v6) TCP MSS (v6)
A 1268 N N 1460/1380 1268 Y(900) N 1440/1380
B 1265 Y N 1460/1380 1265 Y N 1440/1380
C 1265 N N 1460/1380 - - - -
D 1265 N N 1460/1380 1265 N N 1440/1380
E 1268 N N 1460/1380 280 ServFail ServFail ServFail
F 1265 N N 1460/1380 1265 N UDP 1440/1380
G 1265 Y N 1460/1380 1265 Y N 1440/1380
H 1268 N N 1460/1380 1268 N N 1440/1220
I 1265 N N 1460/1380 1265 N N 1440/1380
J 1268 N N 1460/1380 1268 Y(900) N 1440/1380
K 1268 N N 1460/1380 1268 N N 1440/1380
L 1268 N N 1460/1380 1268 N N 1440/1380
M - - - - 1268 N TCP 1440/1380

Table 1 – IANA Root Server Response Profile with large DNS responses

The structure of Table 1 is slightly different from the table in APNIC’s blog article. There are additional columns to show the exact size of the DNS response messages from each root server. We list them because we found they differ by 3 octets: A, E, H, J, K, L and M return responses of 1268 octets and the others 1265 octets. After some digging, we found that different root servers behave differently due to the sequence of NSEC RRs in the response.

In our case the sequence “aaa.” as the Next Domain Name in an NSEC RR sometimes appears before it is used as a label, so the following “aaa” labels can be compressed using DNS compression, and we see 1265 octets. Sometimes it does not, and we see 1268 octets. (Compression of the Next Domain Name of an NSEC RR itself is not allowed.) There is no protocol requirement to order the RRs for an optimal response, so we regard this as a trivial difference. The compression issue of #23 in Table 2 is nontrivial, however, and is introduced in the next section.
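One plausible accounting for the exact 3-octet gap (our own arithmetic, per the RFC 1035 wire format):

```python
def full_name_octets(labels):
    """Uncompressed wire size of a name: 1 length octet per label,
    the label bytes themselves, and a final 1-octet root label."""
    return sum(len(l) + 1 for l in labels) + 1

POINTER = 2  # a compression pointer always costs 2 octets

# Writing "aaa." in full costs 5 octets; as a pointer, only 2.
saving = full_name_octets(["aaa"]) - POINTER
```

That 3-octet saving is exactly the difference between the 1265- and 1268-octet responses in Table 1.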

There is another difference in how we display the TCP MSS information in the table. APNIC’s article said that in IPv4 all the root servers offer a TCP MSS of 1460 octets, and that the value is 1440 octets in IPv6. That may be true if there is no middle-box or firewall which intentionally changes the TCP MSS. In our observation, the TCP SYN initiated by the testing resolver carries a TCP MSS of 1460 octets in IPv4 and 1440 octets in IPv6, but the TCP SYN/ACKs returned by the root servers all carry 1380 octets. As far as we know, a TCP MSS of 1380 octets is a special value used by some firewalls on the path for security reasons (e.g. the Cisco ASA firewall).

Let’s look at fragments!

As with APNIC’s finding, we found that F and M fragment IPv6 packets. It is worth mentioning that in our test F only fragments UDP IPv6 packets and M only fragments TCP segments. F fragmenting only UDP can be explained if F’s DNS software sets the IPV6_USE_MIN_MTU option to 1 and its TCP implementation respects IPV6_USE_MIN_MTU, and therefore does not send TCP segments larger than 1280 octets.

Regarding the TCP MSS setting, APNIC’s blog article suggests that the TCP MSS should be set to 1220 octets. We observed that H had already adopted Geoff’s suggestion at the time of writing. The TCP MSS setting on a root server is relevant because there are two main risks if the TCP MSS is not set properly. One, introduced in APNIC’s blog article, is the Path MTU (PMTU) Black Hole, in which a large TCP segment may be dropped in the middle of the path while the ICMPv6 PTB message is lost or filtered. Another risk is not covered by that article but is also relevant: the IP packets that TCP sends for segments are fragmented by the root server itself if its TCP implementation does not respect the IPV6_USE_MIN_MTU socket option while IPV6_USE_MIN_MTU=1 is set on that server. Please see TCP and MTU in IPv6 [6], presented by Akira Kato at the 2016 Yeti workshop.

M fragments its TCP segments in exactly this way! It is odd that M’s large UDP responses are not fragmented, and it is unknown why M fragments only TCP segments.
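The 1220-octet TCP MSS suggested in the APNIC article is simple header arithmetic: it is the largest segment that fits the IPv6 minimum MTU with plain IPv6 and TCP headers.

```python
IPV6_MIN_MTU = 1280
IPV6_HEADER = 40
TCP_HEADER = 20  # without TCP options

# A full-MSS segment of this size never exceeds 1280 octets on the
# wire, so it cannot trigger PMTU black holes or IP fragmentation.
SAFE_MSS = IPV6_MIN_MTU - IPV6_HEADER - TCP_HEADER
```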

As for the truncation behavior of root servers, our test shows the same pattern: in IPv4, B and G truncate the response; in IPv6, A and J join B and G in truncating it. By definition, truncation happens when the DNS message length is greater than that permitted on the transmission channel. The truncation behavior of A, B, G and J looks odd because they truncate the response even when both query and response specify a large EDNS0 buffer size; there must be a separate method for deciding whether to truncate. Obviously these root operators prefer to truncate the response and fall back to TCP rather than fragment a larger response. This preference is actually stronger in IPv6 than in IPv4, as A and J send truncated packets of 900 octets, possibly because the potential problem of fragments being lost or filtered is worse in IPv6 than in IPv4 (as explained in detail in the APNIC blog article).

We observed a Server Failure response (SERVFAIL) from E in IPv6, which happens only in China. We also observed that C in IPv6 and M in IPv4 were unreachable at the time of testing. By traceroute, the problem was spotted at upstream network providers. It reminds us that the DNS system - even the root server system - is vulnerable to network attacks and routing problems.

3. Tests on the Yeti DNS Root Servers

The next step is to send the same dig query to the Yeti DNS root servers. What we see from each Yeti root server is shown in Table 2.

Num Operator Response size Truncate Fragment TCP MSS
#1 BII 1252 N UDP+TCP 1440/1380
#2 WIDE 1252 N UDP 1440/1220
#3 TISF 1252 N UDP+ TCP 1440/1440
#4 AS59715 1255 N UDP+ TCP 1440/1380
#5 Dahu Group 1255 N N 1440/1440
#6 Bond Internet Systems 1252 N N 1440/1440
#7 MSK-IX 1252 N UDP+ TCP 1440/1440
#8 CERT Austria 1252 N N 1440/1440
#9 ERNET 1252 N N 1440/1440
#10 dnsworkshop/informnis 1255 N UDP 1440/1440
#11 CONIT S.A.S Colombia - - - -
#12 Dahu Group 1255 N N 1440/1440
#13 Aqua Ray SAS 1255 N N 1440/1380
#14 SWITCH 1252 N N 1440/1220
#15 CHILE NIC 1252 N N 1440/1440
#16 BII@Cernet2 1252 N N 1440/1380
#17 BII@Cernet2 1252 N N 1440/1380
#18 BII@CMCC 1252 N N 1440/1380
#19 Yeti@SouthAfrica 1252 N N 1440/1440
#20 Yeti@Australia 1252 N N 1440/1440
#21 ERNET 1252 N N 1440/1380
#22 ERNET 1252 N N 1440/1440
#23 dnsworkshop/informnis 1269 N N 1440/1440
#24 Monshouwer Internet Diensten 1255 N N 1440/1440
#25 DATEV 1255 N N 1440/1380

Table 2 – Yeti Root Server Response Profile to a large DNS response

We see no truncation for any Yeti root server. That means none of the Yeti servers have truncation logic apart from the server default, based on the EDNS buffer sizes.

We notice that #2 and #14 have accepted Geoff’s suggestion to set the TCP MSS to 1220, reducing the risk of TCP segment fragmentation (although perhaps they were configured this way before this recommendation).

For #23 (running Microsoft DNS), the response is bigger than the others because it does not apply name compression to the mname and rname fields in the SOA record - leading to an increase of 12 octets - and it also applies name compression to the root label itself, resulting in a bigger packet. Name compression is not a huge optimization, but in certain cases the octets saved by name compression could avoid truncation or fragmentation.
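Both effects are plain wire-format arithmetic (our own accounting, per RFC 1035):

```python
ROOT_LABEL = 1  # an uncompressed root label is a single zero octet
POINTER = 2     # a compression pointer always costs two octets

# "Compressing" the root label replaces 1 octet with 2: a net loss
# of one octet per occurrence.
extra_per_compressed_root = POINTER - ROOT_LABEL

def pointer_saving(name_wire_octets):
    """Octets saved by replacing a repeated name with a pointer;
    skipping this saving (as #23 does for the SOA mname/rname)
    makes the packet correspondingly bigger."""
    return name_wire_octets - POINTER
```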

Note that currently Yeti implements MZSK [7], which produces large DNS responses due to multiple ZSKs. When querying for DNSKEY records with their DNSSEC signature, all Yeti servers respond with a DNS message of up to 1689 octets and fragment the UDP response. When the +tcp option is added to dig - performing the DNS query via TCP - the result in the “Fragment” column is the same as in Table 2 (#1, #3, #4, #7 fragment TCP segments). So in Yeti’s case there is a trade-off between truncating the large responses and fragmenting them. There is no way to avoid the cost of the large response (1500+ octets) with the existing DNS protocol and implementations. However, some proposals have been made to address the problem with DNS message fragments [8], or by always transmitting large DNS responses over connection-oriented protocols like TCP [9] or HTTP [10].

4. Metrics for Scoring

Like Geoff, we can use a similar five star rating system for Yeti root operators. Of course, we won’t use the IPv4 ratings, since Yeti is IPv6-only. We can “borrow” 3 of Geoff’s stars:

  • If the IPv6 UDP packet is sent without fragmentation for packets up to 1,500 octets in size, then let’s give the server a star.
  • If the IPv6 UDP packet is sent without truncation for IPv6 packet sizes up to 1,500 octets, then let’s give the server a star.
  • If the offered IPv6 TCP MSS value is no larger than 1,220 octets, then let’s give the server another star.

We can suggest two more stars:

  • If the IPv6 TCP packets are sent without IP fragmentation, we will give the server a star.
  • If the server compresses all labels in the packet, we give the server a star. (Not technically the “penalize Microsoft” star, but in practice it is.)
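The criteria above transcribe directly into a small scoring function (our own sketch, for IPv6-only servers):

```python
def score(frag_udp, truncates, tcp_mss, frag_tcp, compresses_all):
    """Count the stars earned under the five criteria above, for
    responses up to 1,500 octets."""
    stars = 0
    stars += not frag_udp       # UDP sent without IP fragmentation
    stars += not truncates      # UDP sent without truncation
    stars += tcp_mss <= 1220    # conservative TCP MSS offered
    stars += not frag_tcp       # TCP sent without IP fragmentation
    stars += compresses_all     # all labels compressed
    return stars
```

Checked against Table 3: #14 (no fragmentation, no truncation, MSS 1220, full compression) earns 5 stars, while #1 (fragments both UDP and TCP, MSS 1380) earns 2.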

5. Scoring Yeti Root Servers

Using this system we rate the Yeti servers.

Num Stars Comments
#1 ★★ Fragments both UDP and TCP, using TCP MSS of 1380
#2 ★★★★ Fragments UDP
#3 ★★ Fragments both UDP and TCP, using TCP MSS of 1440
#4 ★★ Fragments both UDP and TCP, using TCP MSS of 1380
#5 ★★★★ Using TCP MSS of 1440
#6 ★★★★ Using TCP MSS of 1440
#7 ★★★ Fragments both UDP and TCP
#8 ★★★★ Using TCP MSS of 1440
#9 ★★★★ Using TCP MSS of 1440
#10 ★★★★ Using TCP MSS of 1440
#11 nul points Server not responding during the test.
#12 ★★★★ Using TCP MSS of 1440
#13 ★★★★ Using TCP MSS of 1380
#14 ★★★★★ Our only 5-point server!
#15 ★★★★ Using TCP MSS of 1440
#16 ★★★★ Using TCP MSS of 1380
#17 ★★★★ Using TCP MSS of 1380
#18 ★★★★ Using TCP MSS of 1380
#19 ★★★★ Using TCP MSS of 1440
#20 ★★★★ Using TCP MSS of 1440
#21 ★★★★ Using TCP MSS of 1380
#22 ★★★★ Using TCP MSS of 1440
#23 ★★★ Using TCP MSS of 1440, doesn’t compress SOA fully.
#24 ★★★★ Using TCP MSS of 1440
#25 ★★★★ Using TCP MSS of 1380

Table 3 – Starry View of Yeti Servers

If we can make setting the TCP MSS at 1220 a common practice then we should have a lot of “5 star” Yeti servers.

6. Conclusion

We found it interesting to replicate APNIC’s results, and were happy to be able to use similar ratings across the Yeti servers.

Regarding the DNS large-response issue in general, operators can adopt Geoff’s suggestions on the IPV6_USE_MIN_MTU option, TCP MSS settings, and DNS message compression. However, good solutions are still unknown for issues like IPv6 DNS fragmentation in UDP for very large responses. It is still unknown whether it is better to truncate or to fragment a large response… more attention is needed in the network community because we are going to build the whole Internet on IPv6 infrastructure.

As always, please let us know what you think and what you’d like for us to look at in either the IANA or Yeti root servers.

7. Reference

[1] Yeti DNS Project,

[2] Geoff Huston, Evaluating IPv4 and IPv6 packet fragmentation, APNIC blog, January 2016.

[3] Geoff Huston, Fragmenting IPv6, APNIC blog, May 2016.

[4] Geoff Huston, Scoring the Root Server System, APNIC blog, November 2016.

[5] Geoff Huston, Scoring the DNS Root Server System, Pt 2 - A Sixth Star?, APNIC blog, December 2016.

[6] Akira Kato, TCP and MTU in IPv6, Yeti DNS Workshop, November 2016.

[7] Shane Kerr and Linjian Song, Multi-ZSK Experiment, Yeti DNS Project.

[8] Mukund Sivaraman, Shane Kerr and Linjian Song, DNS message fragments, IETF draft, July 20, 2015.

[9] Linjian Song and Di Ma, Using TCP by Default in Root Priming Exchange, IETF draft, November 26, 2014.

[10] Linjian Song, Paul Vixie, Shane Kerr and Runxia Wan, DNS wire-format over HTTP, IETF draft, September 15, 2016.

Seoul Yeti DNS Workshop materials (slides, pictures, audio)

Posted on

Workshop Agenda

1) Notes on software construction and reliability for privately signed root zones (Paul Vixie) slides

2) Invited talk: IDN introduction and current status (Marc Blanchet) slides

3) Yeti DNS Project status and development (Davey Song) slides

4) Yeti experiment and findings (Shane Kerr) slides

5) IPv6 issues in Yeti (Akira Kato) slides

6) Yeti tools: YmmV and PcapParser (Shane Kerr) slides-YmmV slides-PcapParser

7) Invited talk: Entrada Introduction (Moritz Mueller) slides

8) Open discussion: Improving the Root Name space (Paul Vixie) slides

Youtube links:

A DNSSEC issue during Yeti KSK rollover

Posted on


KSK rollover is one of the experiments on the Yeti DNS root testbed. The purpose is to test KSK rollover at large scale and to find any potential risks before the real KSK rollover is performed in the production network in the future. The proposal for Yeti’s first KSK rollover experiment is documented on a GitHub page [1]. The first rollover happened at timestamp 20160830073746 (Tue Aug 30 07:37:46 2016 UTC). At the same time the old KSK was revoked; it was removed after 30 days.

DNSSEC Bug report

An issue was spotted on one Yeti resolver (running BIND 9.10.4-P2) during the Yeti KSK rollover. The resolver was configured with multiple views before the KSK rollover. DNSSEC failures were reported once we added a new view for new users after rolling the key. (In the Yeti KSK rollover plan, once the new key becomes active the old key is revoked.)

Here is the managed-keys state for the view secondfloor, which was created before the KSK rollover:

  $echo -n secondfloor|sha256sum 
  b1629b86416e2208ce2492ea462475f77141a2c785e53d8cd64dbf9dabe9f01f - 
  $ cat b1629b86416e2208ce2492ea462475f77141a2c785e53d8cd64dbf9dabe9f01f.mkeys 
  $ORIGIN . 
  $TTL 0 ; 0 seconds 
  @ IN SOA . . ( 
  6490 ; serial 
  0 ; refresh (0 seconds) 
  0 ; retry (0 seconds) 
  0 ; expire (0 seconds) 
  0 ; minimum (0 seconds) 
  KEYDATA 20160926180107 20150929074902 20160930050105 385 3 8 ( 
  ) ; revoked KSK; alg = RSASHA256; key id = 56082 
  ; next refresh: Mon, 26 Sep 2016 18:01:07 GMT 
  ; trusted since: Tue, 29 Sep 2015 07:49:02 GMT 
  ; removal pending: Fri, 30 Sep 2016 05:01:05 GMT 
  KEYDATA 20160926180107 20160809184103 19700101000000 257 3 8 ( 
  ) ; KSK; alg = RSASHA256; key id = 19444 
  ; next refresh: Mon, 26 Sep 2016 18:01:07 GMT 
  ; trusted since: Tue, 09 Aug 2016 18:41:03 GMT 

And here is the state for the view 6plat, which was created after the KSK rollover:

  $ echo -n 6plat|sha256sum 
  1369c5fe9c97ce67fa0a4b3f25e6ceb86105045569eac55db54a6e85353d030b - 
  $ cat 1369c5fe9c97ce67fa0a4b3f25e6ceb86105045569eac55db54a6e85353d030b.mkeys 
  $ORIGIN . 
  $TTL 0 ; 0 seconds 
  @ IN SOA . . ( 
  4 ; serial 
  0 ; refresh (0 seconds) 
  0 ; retry (0 seconds) 
  0 ; expire (0 seconds) 
  0 ; minimum (0 seconds) 
  KEYDATA 20160924170104 19700101000000 19700101000000 257 3 8 ( 
  ) ; KSK; alg = RSASHA256; key id = 55954 
  ; next refresh: Sat, 24 Sep 2016 17:01:04 GMT 
  ; no trust 

We found that the mkeys file of the new view (6plat) only contains the old key, inherited from the managed-keys statement (tag 56082) in the global configuration (named.conf). It did not inherit the new key (tag 19444) from the managed-keys database, which is the key valid for KSK operation in the RFC 5011 context. That is why DNSSEC validation failed in the new view: the old key was inactive and revoked.

However, the BIND 9.10.4-P2 manual says that, unlike trusted-keys, managed-keys may only be set at the top level of named.conf, not within a view. This suggests that managed-keys cannot be set per view in BIND. Yet right after we set the managed-keys for 6plat, DNSSEC validation worked for that view.

Some thoughts

There are two preliminary suggestions for BIND users and developers.

1) Currently BIND multiple-view operation needs extra guidance for RFC 5011. ICANN has now decided to roll the KSK on 2017-10-11 [2]. The managed-keys should be set carefully for each view created during the KSK rollover.

2) We recommend that the BIND implementation let new views inherit the managed-keys database of an existing view. That is a more natural way for the multiple-view model to adapt to RFC 5011.


[1] Yeti KSK Rollover Plan,

[2] 2017 KSK Rollover Operational Implementation Plan,

CFP of Yeti DNS Workshop

Posted on

The Yeti DNS workshop is an annual meeting with reports and discussions of root DNS service, DNS tools based on experiences from the testbed, and experiments in the Yeti DNS project. The first Yeti DNS workshop was held before the Yokohama IETF meeting in 2015. The second Yeti workshop will be held before the Seoul IETF meeting this November. The venue and time are now secured, and we would like to call for participation from people who are interested in Yeti and related topics. The details of this workshop are as follows:

Date: Saturday, November 12, 2016

Time: 13:00-18:00 UTC+9 (04:00-09:00 UTC)

Location: Conrad Seoul, Studio 1-6F (IETF 97 venue)

Host: Beijing Internet Institute

Sponsor: Yeti Coordinators

Remote access (you need to download the software the first time)

Draft agenda

-Welcome address
-Notes on software construction and reliability for privately signed root zones (Paul Vixie)
-Invited talk: IDN introduction and current status (Marc Blanchet)
-Yeti DNS Project status and development (Davey Song)

Coffee break

-IPv6 issues in Yeti (Akira Kato)
-Yeti experiment, findings and software development (Shane Kerr)
-Invited talk: Entrada Introduction (Moritz Mueller)
-TBD: a vendor's presentation on Yeti participation
-Open discussion

The Yeti root DNS testbed has been running for more than one year with the joint effort of the Yeti community. The system is now composed of 25 NS servers for the root zone and has attracted IPv6 traffic from interested parties and individuals, such as universities and labs. Some alternative operational models have been tested, like multiple DMs, Multi-ZSK, Big ZSK, and KSK rollover.

We also call for presentations from Yeti operators and researchers. Topics outside of the Yeti DNS project itself are also appropriate, if they are related to DNS in general.

Note: Please send a mail to the Yeti coordinators if you are interested in attending the workshop. It will help us prepare the workshop better. Thanks!

The Floor plan for Yeti DNS Workshop

Mirroring Traffic Using dnsdist

Posted on


The Yeti project would like DNS resolver operators to send us their query traffic. Because Yeti is an experimental network, DNS operators may not want to put their users on resolvers using Yeti. By using dnsdist, resolver operators can continue to use the normal IANA root servers and also send queries to the Yeti root servers.

To do this, an administrator sets up a resolver that uses the Yeti servers, running at the same time as the production IANA resolver. They then add dnsdist as a DNS load balancer in front of the resolver using the IANA root servers and the resolver using the Yeti root servers.

dnsdist is a highly DNS-, DoS- and abuse-aware load balancer. We can use its TeeAction to mirror DNS queries to Yeti resolvers.

TeeAction This action sends off a copy of a UDP query to another server, and keeps statistics on the responses received.

dnsdist will only copy UDP queries to the other server. The figure below depicts the dnsdist-mirrored traffic flow:

|Client|<-->|DNSDIST|<--->Resolvers<--->Other authority servers
            |DNSDIST|<--->Yeti Resolver<--->Yeti root name server 

dnsdist is installed between the client and a normal in-production RDNS server, so that the stub queries heard by that server are mirrored toward a specified Yeti-capable RDNS server. People who are willing to use dnsdist to mirror traffic to Yeti are expected to run another Yeti resolver. (There is also a list of registered Yeti RDNS servers on the Yeti webpage.)

Note that there is no round-robin, load balancing, or RTT-based server selection for the dnsdist ‘tee’ action, so all mirrored queries will be sent to the Yeti resolver, whose IPv6 address must be hard-wired in the dnsdist config.


Please refer to dnsdist

Mirroring DNS queries to Yeti resolvers

1) The Yeti resolver adds an ACL for the dnsdist server.

2) Add a rule in dnsdist:

addAction(AllRule(), TeeAction("240c:f:1:22::103")) -- default port 53


1) Running in the foreground:

dnsdist -C /path/to/dnsdist.conf

2) Running as a daemon:

dnsdist -C /path/to/dnsdist.conf --daemon
dnsdist -C /path/to/dnsdist.conf --client # connect to the control channel


  1. dnsdist-github
  2. dnsdist-README

Yeti virtual meeting 03/24/2016

Posted on

It has been 6 months since our last virtual meeting, and more than 4 months since our Yeti DNS workshop. We thought it would be nice to have another virtual meeting to catch up on the status.

Date:  2016-03-24
Time:  07:00 UTC

As with the previous virtual meeting, this is mostly for Yeti participants or people who expect to participate. All interested parties are of course welcome.

We expect this to be around 1~2 hours long.

draft agenda & slides:

We have set up a Jitsi link for our virtual meeting this time. You can check the availability of this link any time before the meeting (Chrome/Firefox).

Yeti monitoring using RIPE Atlas

Posted on

Yeti monitoring with RIPE Atlas

1. Purpose of the Yeti Monitoring System

Currently the Yeti testbed lacks sufficient information to reflect the status of Yeti operation in terms of availability, consistency and performance. It also lacks tools to measure the results of experiments like introducing more root servers, KSK rollover, Multi-ZSK, etc. Thus, there is a need to monitor the functionality, availability and consistency of the Yeti Distribution Masters as well as the Yeti root servers.

The basic idea is to set a regular monitoring task in Atlas that queries the SOA record of the fifteen root servers over both UDP and TCP, to check the consistency of the SOA record. A Nagios plugin periodically fetches the result of the Atlas monitoring task and parses it to trigger alerts. An alert email is sent when there is an exception.
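The consistency check itself is simple once the SOA serial from each server is collected (server names and serial values below are made up for illustration):

```python
def soa_consistent(serials_by_server):
    """True if every server reported the same SOA serial."""
    return len(set(serials_by_server.values())) <= 1
```

For example, `soa_consistent({"bii": 2017032401, "wide": 2017032401})` is true, while any disagreement between serials flags an inconsistency.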

2. check_yeti, the Nagios plugin

2.1 The design of the plugin

check_yeti fetches test results via the Atlas API, analyzes them, and outputs status results that are displayed in the Nagios web interface.

To get the results from Atlas, sample code:

import urllib  # Python 2; use urllib.request in Python 3

# Fetch the last hour of results for this measurement as JSON.
start_time = str(int(stop_time) - 3600)
url = base_url + targetid + "/results" + "?start=" + start_time + "&" + \
                                         "stop=" + stop_time + "&format=json"
urllib.urlretrieve(url, outfile)

2.2 Checking Algorithm

  • Atlas:
    a. Set a regular monitoring task for each of the 15 root servers.
    b. Each run uses 10 probes.
  • Nagios:
    a. Get the result for each root server once an hour.
    b. Analyse the result.
    c. Return a status code:

    OK: more than six probes return an OK result
    WARNING: the OK count is below 4
    CRITICAL: no probe returns OK
    UNKNOWN: no data returned
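The status mapping above, written out as a function (the text does not say how 4-6 OK probes are classified; treating that range as WARNING is our assumption):

```python
# Standard Nagios status codes.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def yeti_status(ok_probes, total_probes):
    if total_probes == 0:
        return UNKNOWN    # no return data
    if ok_probes == 0:
        return CRITICAL   # no probe returned OK
    if ok_probes > 6:
        return OK
    return WARNING        # below 4 per the text; 4-6 assumed WARNING
```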

PS: Nagios status codes


3. Deployments

3.1 check_yeti

Make it executable and put it into Nagios’s plugin directory.

3.2 hosts.cfg

Define the monitored server

define host{  
    use                     linux-server-yeti-rootserver  
    address                 240c:f:1:22::6  
    }

Different servers should be defined separately

3.3 commands.cfg

Define check_yeti checking commands

define command{
        command_name    check_yeti
        command_line    $USER1$/check_yeti $ARG1$
        }

$USER1$: Nagios’s plugin directory. $ARG1$: plugin input parameter, the ID of the monitoring task

3.4 service.cfg

Define check_yeti monitoring service

define service{  
        use                             generic-service          
        service_description             check_yeti  
        check_command                   check_yeti!1369633 
        }

generic-service: defines the Nagios monitoring template, including the check interval (2 hours), alert interval, and alert level
1369633: ID of the monitoring task
Different servers should be defined separately

3.5 contacts.cfg

Define alert contacts

define contact{
       contact_name      yeti          
       use               generic-contact               
       }

contact_name: contact name, used directly in the template
email: contact addresses, separated by commas

3.6. Start nagios

service nagios restart

4. Display Atlas monitoring status on website

  1. Set up dnsdomainmon tasks in Atlas and get the zone ID
  2. Use the Atlas API, referring to
  3. Key parameter: zone: “3069263”
  4. Sample code
	    <!DOCTYPE html>
	    <title>domainmon test</title>
	    <script type="text/javascript" src=""></script>
	    <script type="text/javascript" src=""></script>
	    <script type="text/javascript" src=""></script>
	    <script type="text/javascript" src=""></script>
	    <script type="text/javascript" src=""></script>
	    <script type="text/javascript" src=""></script>
	    <script type="text/javascript" src=""></script>
	    <script type="text/javascript" src=""></script>
	    <script type="text/javascript" src="/variables.js?v=Archangel"></script>
	    <script src="/easteregg.js?v=Archangel"></script>
	    <script type="text/javascript" src="" ></script>
	    <div id="domainmon"></div>
	    <script type="text/javascript">
	            var dnsmon;
	            $(function() {
	                var hasUdp = true;
	                var hasTcp = true;
	                function onDraw(params) {
	                    var tab;
	                    if (params.isTcp) {
	                        tab = $(".protocol-tabs a[data-protocol=tcp]");
	                    } else {
	                        tab = $(".protocol-tabs a[data-protocol=udp]");
	                    }
	                }
	                // first argument (the target element) reconstructed;
	                // the original snippet was truncated here
	                dnsmon = initDNSmon(
	                            "#domainmon", {
	                            dev: false,
	                            lang: "en",
	                            load: onDraw,
	                            change: onDraw
	                        }, {
	                            type: "zone-servers",
	                            zone: "3069263",
	                            isTcp: hasUdp ? false : true
	                        });
	            });
	    </script>
Meeting notes of the Yeti workshop

Posted on

This is a short summary for people who were interested but not able to attend the workshop. The audio recording is on YouTube.

(1) The first speech was given by Paul Vixie(from 0:15:00 of the youtube record). The slides link

He is one of the founders of Yeti. He introduced his 10+ Years of Responsible Alternate Rootism, tracing the timeline that led to Yeti. He admitted he has been at the center of controversy around DNS and the root system for years.

Paul Vixie’s presentation covered the evolution of his involvement with the root system: the concept of and his experience with the root system, anycasting, and his idea of alternative rootism, ultimately leading up to the Yeti project. He stressed repeatedly that keeping one namespace does not require keeping only one set of root servers from which the root zone is served. He once proposed that IANA sign another zone in parallel to help get IDN, DNSSEC, and IPv6 deployed faster, but the proposal was viewed as controversial and failed.

As the note writer observed, it is true that thanks to DNSSEC, signed data can be authenticated from the data itself, without reference to the authority that publishes it. So in the Yeti project today, Paul and people with the same interest use their own key to sign the zone for a testbed instead of asking IANA to sign an alternative zone.

(2) The second speaker was Keith Mitchell, the president of DNS-OARC. He was invited to speak about AS112 operations (from 00:48:00). The slides link

There was some discussion about researching using DNAME in the same way that AS112 uses DNAME, by adding a DNAME delegation representing a name from the special-use registry. While an interesting idea, there seemed to be consensus that this was slightly risky because it would actually be changing the contents of the root zone.

(3) Davey Song is one of the founders of the Yeti project, and he presented the current status of the Yeti DNS Project (from 01:09:00). The slides link

There is an informational draft, song-yeti-testbed-experience, recording the experience gained from the Yeti project; it covers most of Davey’s presentation.

There was a lot of discussion about the DM model and the details of the process. This is documented, and the source code is published, on the GitHub repository, but people still wanted clarification.

There was discussion about using Yeti in an IPv4 scenario. There was also discussion about the duration of the project: 3 years.

The limitations of emergency fixes in the root zone were discussed. This is not much worse in the Yeti system than in the IANA root; if it becomes a problem, we can synchronize much more closely with the IANA root.

(4) Shane Kerr works for BII, one of the Yeti DMs. He presented the current state of experiments in the Yeti project and activities in the BII lab (from 02:32:00). The slides link

He first introduced in detail how the Yeti DM model works, followed by a Multi-ZSK experiment for the Yeti DM model. He then briefed us on the KSK rollover plan by ICANN and what is expected in the Yeti testbed. There were also other related topics, such as hints-file management and DNS-layer fragmentation.

One point of concern was updating hints files, since that could allow an attacker (or an operational mistake) to prevent a system from ever using the real root servers. DNSSEC validation makes this less scary. One suspicion was that broken resolvers were the problem, and that no amount of improving working resolvers would help much.

Regarding DNS-layer fragmentation, there is a joint document by BII and ISC.

A proposal to investigate TCP-only root service was made, followed by a discussion around it.

(5) Stéphane Bortzmeyer is a DNS expert, and has been actively involved with the Yeti project since its inception. He presented his observations from monitoring Yeti and introduced various monitoring tools for Yeti (from 03:13:00). The slides link

Currently the Yeti testbed is operational, but it lacks monitoring and fine-grained measurement of Yeti’s service quality; running dig from time to time is clearly not enough. In his presentation, Stéphane encouraged us to use automatic tools for Yeti monitoring, even though the testbed is not for production use.

He briefed us on the state of the art of DNS monitoring tools from his experience, ranging from Icinga, check_dig, Zonemaster, DNSmon, and DNSviz to, finally, the RIPE Atlas. He suggested using those tools to improve the system’s monitoring and measurement going forward, for example by providing a status page on the Yeti website that could be updated with monitoring results.

(6) Dmitrii Kovalenko is a DNS operator at MSK-IX, and was an early Yeti participant. Recently MSK-IX set up mirroring of resolver traffic, and Dmitrii was asked to present this (from 03:50). The slides link

There was discussion of possible additional work that might be interesting on the resolver query duplication.

It was also noted that we should make it clearer how to submit a trouble ticket, as there were some issues during Dmitrii’s work that should have been tracked.

Other topics:

  • There was a discussion of investigating ways to shut down the project. Certainly we need to encourage resolver operators to join the mailing lists.

  • We discussed future workshop plans. Currently there are none.

  • We discussed the timeline of experiments. Currently there is none, but this will be proposed soon.

  • Everyone should configure all of their resolvers to use Yeti! :)

Yeti workshop slides, recordings & pictures

Posted on

Dear Colleagues,

On the afternoon of October 31st, in Yokohama, we held the first Yeti workshop. There were 25+ participants (including online participants), and we had a fruitful discussion. Firstly I would like to thank all the workshop participants and speakers who contributed to this workshop. Thanks to Akira Kato, who helped find a nice meeting room for us. Special thanks to Paul Vixie, who arrived in Yokohama 2 days ahead to prepare the Yeti DM setup for the workshop.

Now all the slides and the recording are posted on the Yeti website. Please check the links below for more information. I would like to quote a saying that found consensus during the workshop: “The Yeti project is totally driven by the community and the people”. Please send requests and questions, and even push us! It was a wonderful experience, and we hope to meet more Yeti friends again.

Agenda(with slides):

GoToMeeting audio recording of this workshop

I also would like to share some pictures during the meeting.

Yeti DM Setup

Posted on


This document describes the process used to generate and distribute the Yeti root.

The Yeti DNS Project takes the IANA root zone, and performs minimal changes needed to serve the zone from the Yeti root servers instead of the IANA root servers.

In Yeti, this modified root zone is generated by the Yeti Distribution Masters (DM), which provide it to the Yeti root servers.

While in principle this could be done by a single DM, Yeti uses a set of three DMs. These DMs coordinate their work so that the resulting Yeti root zone is always consistent. (We need to avoid the case where the DMs do not all share the same set of Yeti root servers, and so produce different versions of the Yeti root zone with the same serial.)


The generation process is:

  1. Download the latest IANA root zone
  2. Make modifications to change from the IANA to Yeti root servers
  3. Sign the new Yeti root zone
  4. Publish the new Yeti root zone

IANA Root Zone

The root zone is currently downloaded using AXFR from F.ROOT-SERVERS.NET:

   $ dig @f.root-servers.net -t axfr .

This appears to be the best way to get the updated version of the root zone as soon as possible.

TODO: The DM should check that the zone they are retrieving from IANA is properly signed, with something like ldns-verify-zone.

A new version of the Yeti zone is only generated when the IANA zone changes, which is detected by the change in the serial number value in the SOA record.
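Because the 32-bit SOA serial can wrap around, "has the serial changed and moved forward" should be decided with RFC 1982 serial-number arithmetic rather than a plain integer comparison. A minimal sketch (illustrative, not the DM's actual code):

```python
# RFC 1982 serial-number arithmetic for 32-bit DNS serials.
SERIAL_BITS = 32
HALF = 2 ** (SERIAL_BITS - 1)   # 2**31
MOD = 2 ** SERIAL_BITS          # 2**32

def serial_gt(a, b):
    """True if serial a is newer than serial b (RFC 1982 comparison)."""
    return a != b and ((a - b) % MOD) < HALF

def zone_changed(old_serial, new_serial):
    """Generate a new Yeti zone only when the IANA serial moved forward."""
    return serial_gt(new_serial, old_serial)
```

For example, serial_gt(0, 2**32 - 1) is True, so the comparison still behaves correctly when the serial wraps past its maximum value.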

KSK secret

All DM share the same KSK secret material. This is generated using the BIND 9 dnssec-keygen tool, and then sent PGP-encrypted to the other DM operators.


The root zone is modified as follows:

  • The SOA is updated:
    • The MNAME and RNAME are set to Yeti values
  • The IANA DNSSEC information is removed:
    • The DNSKEY records
    • The RRSIG and NSEC records
  • The IANA root server records are removed:
    • The NS records for [A-M].ROOT-SERVERS.NET
  • The Yeti DNSSEC information is added:
    • The DNSKEY records
  • The Yeti root server records are added:
    • The NS records
    • The AAAA glue records
  • The Yeti root zone is signed
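The removal steps above can be sketched as a filter over a presentation-format zone file. This is a deliberate simplification for illustration only: it assumes one record per line, whereas real zone files contain multi-line records and many more record types.

```python
# Simplified sketch of the IANA -> Yeti record filtering
# (assumes one record per line in presentation format).
IANA_ROOT_NS = {c + ".ROOT-SERVERS.NET." for c in "ABCDEFGHIJKLM"}
STRIP_TYPES = {"DNSKEY", "RRSIG", "NSEC"}  # IANA DNSSEC material to remove

def strip_iana_records(zone_lines):
    """Drop IANA DNSSEC records and the [A-M].ROOT-SERVERS.NET NS set."""
    kept = []
    for line in zone_lines:
        fields = line.split()
        if len(fields) < 5:          # not a simple one-line record: keep as-is
            kept.append(line)
            continue
        name, rtype, rdata = fields[0], fields[3], fields[4]
        if rtype in STRIP_TYPES:
            continue                 # IANA DNSSEC information
        if rtype == "NS" and name == "." and rdata.upper() in IANA_ROOT_NS:
            continue                 # IANA root NS records
        if name.upper() in IANA_ROOT_NS and rtype in ("A", "AAAA"):
            continue                 # glue for the old root servers
        kept.append(line)
    return kept
```

The Yeti NS records, AAAA glue, and DNSKEY records would then be appended, and the zone re-signed with the Yeti keys.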

It might be worthwhile to use our own serial value in the SOA field; however, for now we duplicate the IANA serial value.

The list of Yeti name servers is synchronized between the DMs as described below.


Each Yeti DM checks hourly whether the IANA root zone has changed, on the following schedule:

DM Time
BII hour + 00
WIDE hour + 20
TISF hour + 40

A new version of the Yeti root zone is generated if the IANA root zone has changed.

Synchronizing List of Yeti Name Servers

It is important that the root zone produced by the Yeti DM is always consistent. In order to do this, we use something like a 2-phase commit procedure.

A change to the list of Yeti name servers gets put into a PENDING directory on any one of the Yeti DM. This directory contains:

  • the new list of Yeti name servers
  • the time when the new list will be valid from
  • a file for each Yeti DM which has confirmed the new list

Each DM will periodically check this PENDING directory. If the directory is present, the DM will download the new information and add a file documenting that it has received it.

Sometime after the scheduled time arrives, and before the next Yeti root zone is generated, each DM checks whether the other DMs have received the new list of Yeti name servers. If they have, the list of Yeti name servers is replaced with the new one. If they have NOT, an alarm is raised, and humans debug the failure.

In pseudocode, something like this:

    loop forever:
        try rsync with PENDING directory with each other DM

        if PENDING list of roots != my list of roots:
            add a DM file for me in the PENDING directory

        if current time > PENDING scheduled time:
            if the DM file for each DM is present:
                copy PENDING list of roots to my list of roots
            else:
                PANIC (notify a human being)

        sleep a bit
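The decision logic in this loop can be sketched more concretely. This is a simplification: the file layout (one acknowledgement file per DM in the PENDING directory) is an illustration, not necessarily the real implementation.

```python
# Simplified sketch of one DM's decision on each pass of the loop above.
import os

ALL_DMS = {"bii", "wide", "tisf"}

def pending_action(pending_dir, my_dm, now, scheduled_time,
                   my_roots, pending_roots):
    """Return "ack", "adopt", "panic", or "wait" for this pass."""
    acked = {dm for dm in ALL_DMS
             if os.path.exists(os.path.join(pending_dir, dm))}
    if pending_roots != my_roots and my_dm not in acked:
        return "ack"          # record that we have seen the new list
    if now > scheduled_time:
        if acked == ALL_DMS:
            return "adopt"    # every DM confirmed: switch to the new list
        return "panic"        # deadline passed with a DM missing: alarm
    return "wait"
```

On "adopt" the DM replaces its list of roots with the PENDING list; on "panic" a human is notified, matching the pseudocode above.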

We choose 48 hours as the lead time before a new list of Yeti name servers is adopted. This allows plenty of time for DM administrators to fix issues.

Only a single PENDING change is possible at a time, and it consists of an entire new list of Yeti root servers. Further changes must be held until the current set is applied.

Note that it might be possible to start using the new list of Yeti name servers as soon as all DM have received it. However for predictability and simplicity we will always use the scheduled time for now.

Yeti project problem statement

Posted on

Problem Statement

Some problems and policy concerns over the DNS Root Server system stem from unfortunate centralization from the point of view of DNS content consumers. These include external dependencies and surveillance threats.

  • External dependency. Currently, there are 12 DNS Root Server operators for the 13 root server letters, with more than 400 instances deployed globally. Compared to the number of connected devices, AS networks, and recursive DNS servers, the number of root instances is far from sufficient. Connectivity loss between one autonomous network and the IANA root name servers usually results in loss of service within that network, even when internal connectivity is perfect. This kind of external dependency also introduces extra network traffic cost when BGP routing is inefficient.

  • Surveillance risk. Even when one or more root name server anycast instances are deployed locally or in a nearby network, the queries sent to the root servers carry DNS lookup information which enables root operators or other parties to analyze the DNS query traffic. This is a kind of information leakage which is, to some extent, not acceptable to some policy makers.

There are also technical issues in the areas of IPv6 and DNSSEC, both of which were introduced to the DNS Root Server system after it was created, and in renumbering DNS Root Servers.

  • Currently DNS mostly relies on IPv4. Some DNS servers which support both A and AAAA (IPv4 and IPv6) records still do not respond to IPv6 queries. IPv6 introduces a larger minimum MTU (1280 bytes) and a different fragmentation model. It is not clear whether the DNS can survive without IPv4 (in an IPv6-only environment), or what impact an IPv6-only environment has on current DNS operations (especially in the DNS Root Server system).

  • KSK rollover, as a procedure to update the public key in resolvers, has been a significant issue in DNSSEC. Currently, IANA rolls the ZSK every quarter, but the KSK has never been rolled as of this writing. Thus, the procedure for rolling the KSK and the effect of rolling keys (both ZSK and KSK) frequently are not yet fully examined. It is worthwhile to test KSK rollover using RFC 5011 to synchronize the validators in a live DNS root system. In addition, 1024-bit RSA keys are currently used for the ZSK and 2048-bit RSA keys for the KSK. The effect of using keys with more bits has never been tested. A longer key enlarges DNSSEC-signed DNS answers, which is not desirable, so it is valuable to observe the effect of changing key bit-lengths in a test environment. Different signing algorithms, such as ECC, are another factor that would also affect packet size.

  • Renumbering issues. Internet users and enterprises may change their network providers, so the Internet numbers of particular servers and services (IP addresses and AS numbers) may change accordingly; this is called renumbering networks and servers. It is likely that root operators may change the IP addresses of their root servers as well. Since there is no dynamic update mechanism to inform resolvers and other Internet infrastructure relying on root service of such changes, renumbering of root servers is a fragile part of the whole system.

Based on this problem space there is a solution space, which needs experiments within the scope of the Yeti DNS project to test and verify. These experiments will provide information about the above issues.

  • IPv6-only operation. We will try to run the Yeti testbed in a pure IPv6 environment.

  • Key/algorithm rollover. We are going to design a plan for the Yeti testbed and conduct experiments with more frequent changes of the ZSK and KSK.

  • DNS Root Server renumbering. We may come up with a mechanism which dynamically updates root server addresses in the hints file; this is like another kind of rollover.

  • More root servers. We are going to test more than 13 root name servers in the Yeti testbed to see “how many is too many”.

  • Multiple zone file editors. We will use the IANA root zone as the source of zone data. Each of BII, TISF, and WIDE modifies the zone independently, and only at its apex. Some mechanisms will be devised to prevent accidental mis-modification of the DNS root zone. In addition, we may implement and test the “shared zone control” ideas proposed in the ICANN ITI report from 2014. ICANN ITI report:

  • Multiple zone file signers. To discover the flexibility and resiliency limits of Internet root zone distribution designs, we can try multiple DMs with one KSK and one ZSK for all, and we can try multiple DMs with one KSK for all and one ZSK for each.

We are not

  • We will never try to create and provide an alternate name space. The Yeti DNS project has had complete fealty to IANA as the DNS name space manager from its conception. Any necessary modifications of the current IANA zone (like the NS records for “.”) will be discussed publicly and given a clear reason.

  • We are not going to develop or experiment with alternative governance models, despite the concern, raised on many occasions, that a certain TLD (mostly a ccTLD) could be removed intentionally as a punishment or sanction by the USG against its rivals. That may be discussed or studied by other projects, but not by Yeti. In Yeti we keep the same trust anchor (KSK) and chain of trust to prevent on-path attacks, and we distribute root service based on the current model.

Welcome to the Yeti DNS Project!

Posted on

This is the Yeti DNS Project, an experimental testbed network of DNS root name servers.

This testbed network will be used to discover the limits of DNS root name service, including the following:

  • Can root name service succeed if it is only connected via IPv6 (and never via IPv4)?
  • Can we change the DNSSEC “ZSK” more frequently, perhaps every two weeks?
  • Can we change the DNSSEC “KSK” more frequently, perhaps every six weeks?
  • How many root name servers is enough? How many is too many?
  • Can we add and delete root name server operators frequently, such as every month?
  • Can the IANA name space be served by more than one set of root name servers?

Note that the Yeti DNS project has complete fealty to IANA as the DNS name space manager. All IANA top-level domain names will be precisely expressed in the Yeti DNS system, including all TLD data and meta-data. So, the Yeti DNS project is not an “alternative root” in any sense of that term. We hope to inform the IANA community by peer-reviewed science as to future possibilities to consider for the IANA root DNS system.


The latest zone file, public key, and description of the Yeti DNS project are placed in a repository in GitHub:

There is a discussion mailing list for project participants, reachable here:


The DSC page of the Yeti root servers is available, where you can see the traffic of the testbed:

Coordination and Participation

The Yeti DNS project was launched in March 2015 by representatives from WIDE, BII, and TISF, who now act as co-equal project coordinators. The role of the coordinators includes outreach to the community, cooperation with Yeti DNS authority name server operators (“providers”), and support for Yeti DNS recursive name server operators (“subscribers”). Initially, the coordinators will also operate the Yeti DNS distribution master servers.

The Yeti DNS project coordinators invite interested authority name server operators and recursive name server operators to contact us by e-mail, either individually: (Davey Song) (Akira Kato) (Paul Vixie)

…or as a team:

About Us

BII Group — the parent company of BII (Beijing Internet Institute), a public interest company serving as BII’s Engineering Research Center.

WIDE — Widely Integrated Distributed Environment.

TISF — a collaborative engineering and security project by Paul Vixie.

Hello Yeti

Posted on

Hello Yeti

“Hello “