Since the Google Public DNS caches are constantly being updated by millions of queries, they end up having most of the root zone cached anyhow (TTLs for root-zone records are long, on the order of a day). So running a local copy of the root zone as suggested in RFC 7706
doesn't provide much of a performance increase, and just adds moving parts to the system that need to be monitored in case there is a problem.
Since Google Public DNS is already performing DNSSEC validation of all signed zones (such as the root), the elimination of most root queries is managed instead by implementing RFC 8198
(Aggressive Use of DNSSEC-Validated Cache). The NXDOMAIN response to a query for "a.domain.that-is.invalid" is synthesized from the NSEC records you can see with this query:
$ dig +dnssec a.domain.that-is.invalid @8.8.8.8 | grep -E 'NX|NSEC\t'
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 29422
. 86321 IN NSEC aaa. NS SOA RRSIG NSEC DNSKEY
intuit. 85832 IN NSEC investments. NS DS RRSIG NSEC
The second record (with its associated RRSIG filtered out by grep) proves that there are no names in the root zone between "intuit" and "investments", and therefore that the "invalid" domain (and any of its subdomains) does not exist. These NSEC records were cached as the result of a previous query for some other domain in that range (perhaps the typo domain "inuit"), and since their TTLs are long, they can be used literally millions of times, notably for queries for "localhost", the NXDOMAINs for which
are sent thousands of times a second:
$ dig +dnssec localhost @8.8.8.8 | grep -E 'NX|NSEC\t'
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 31757
. 16330 IN NSEC aaa. NS SOA RRSIG NSEC DNSKEY
loans. 16330 IN NSEC locker. NS DS RRSIG NSEC
In case you were wondering, the first NSEC record proves the nonexistence of a root wildcard (*.) that would otherwise match nonexistent TLDs like invalid and localhost.
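The covering check a validating resolver performs with these cached records can be sketched in a few lines. This is a hypothetical illustration of the ordering test, not Google Public DNS's actual implementation; the comparison follows the canonical DNS name ordering defined in RFC 4034, section 6.1 (labels compared right to left, case-insensitively):

```python
# Hypothetical sketch of the RFC 8198 covering check: a cached NSEC
# record with owner "intuit." and next name "investments." denies every
# name that sorts strictly between them in DNS canonical order.

def canonical_key(name: str) -> list[bytes]:
    # Canonical ordering compares labels starting from the one nearest
    # the root, case-insensitively, so reverse the labels and lowercase.
    labels = [l.lower().encode() for l in name.rstrip(".").split(".") if l]
    return list(reversed(labels))

def nsec_covers(owner: str, next_name: str, qname: str) -> bool:
    """True if qname falls strictly between owner and next_name,
    i.e. this NSEC record proves that qname does not exist."""
    o, n, q = (canonical_key(x) for x in (owner, next_name, qname))
    if o < n:
        return o < q < n
    # The last NSEC in the chain wraps back around to the zone apex.
    return q > o or q < n

print(nsec_covers("intuit.", "investments.", "invalid."))                  # True
print(nsec_covers("intuit.", "investments.", "a.domain.that-is.invalid"))  # True (subdomains too)
print(nsec_covers("loans.", "locker.", "localhost."))                      # True
print(nsec_covers("intuit.", "investments.", "google."))                   # False
```

Note that a subdomain such as "a.domain.that-is.invalid" is covered by the same "intuit." NSEC record, because canonical ordering sorts it immediately after its nonexistent parent "invalid".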