Tuesday, February 25, 2014

Authoritative Name Server attack

As of early February I have been observing weird new DNS requests that I think can only be labeled a resource exhaustion attack against authoritative name servers. An attack strong enough to cripple DNS providers that host thousands upon thousands of domains.

I am hearing a lot of noise about it being related to malware, but evidence for that has yet to surface.

The first report I read about this was the following post on Spiceworks[1], where some people labeled it as an incoming or outgoing DNS amplification attack. Since then I have heard from DNS admins all over the world who were seeing similar traffic, and about the rules they wrote to defend themselves against it.

I believe that this attack is also what was troubling some Linux PowerDNS[2] installs.

Amplification Attacks

When an attacker wants to take down a website or host, there are different ways to do so. One of them is a Denial of Service (DoS) attack. A common form is the DNS, and more recently NTP, reflective amplification attack. These attacks focus on flooding the victim's internet pipe with useless traffic generated by open DNS/NTP servers on the web.

For these attacks to work, an attacker needs multiple hosts with 1 Gbit uplinks, the ability to spoof source IPs on that AS, and a list of good open DNS/NTP servers.

A good open server is, in this case, a DNS or NTP server that is capable of sending a much larger response to a small request. For DNS one would search for servers supporting EDNS, and for NTP, servers that support the monlist command.

The attack

The DNS-based attack I have been observing does not require very high quality DNS servers; actually, any open resolver will do.

The attacker simply floods the open resolver(s) with queries for non-existent sub-domains of a domain. This forces the resolver to walk the DNS hierarchy and contact the authoritative name server for the domain. One can imagine the effects of hundreds, thousands or even millions of open resolvers contacting the same bunch of authoritative name servers with unique requests multiple times per second.
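To make the cache-busting concrete, here is a minimal sketch (my own illustration, using one of the observed domains) of how such unique query names can be generated; because every label is new, a resolver can never answer from cache and must bother the authoritative servers each time:

```python
import random
import string

def random_query_name(domain: str) -> str:
    """Build a unique query name by prefixing a random a-z label
    (1-16 chars).  A resolver has never seen the name before, so it
    cannot answer from cache and must ask the authoritative servers."""
    length = random.randint(1, 16)
    label = "".join(random.choices(string.ascii_lowercase, k=length))
    return f"{label}.{domain}"

# e.g. 'bryaiqfvenakbsr.www.0538hj.com'
print(random_query_name("www.0538hj.com"))
```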

While writing this blog I observed lots of queries for *.www.0538hj.com. The name servers for this domain are:

;0538hj.com.                    IN      NS

0538hj.com.             60345   IN      NS      dns11.hichina.com.
0538hj.com.             60345   IN      NS      dns12.hichina.com.

dns12.hichina.com.      84272   IN      A
dns12.hichina.com.      84272   IN      A
dns12.hichina.com.      84272   IN      A
dns11.hichina.com.      84272   IN      A
dns11.hichina.com.      84272   IN      A
dns12.hichina.com.      84272   IN      A
dns11.hichina.com.      84272   IN      A
dns12.hichina.com.      84272   IN      A
dns11.hichina.com.      84272   IN      A
dns11.hichina.com.      84272   IN      A
dns11.hichina.com.      84272   IN      A
dns12.hichina.com.      84272   IN      A

While this attack was ongoing it was very difficult to get a response from either of these servers, which goes to show how effective the attack is.

dig a 0538hj.com @dns11.hichina.com
;; global options: +cmd
;; connection timed out; no servers could be reached

About 1 out of 8 queries seemed to get an answer.
The query rate was about 2 - 8 queries per second.


My logs suggest I first started seeing these attacks on February 3rd, with the domain abpdesthvwxyz.gb41.com.
The following graph shows the number of unique domain names this resolver has seen each day in February.

Normally this would only be about 10 a day, as this server is not used in any legitimate way; it only participates in DNS amplification and receives some DNS scans. On the days I have been seeing these attacks, I have seen spikes as high as 16,000 unique domains.

Domains the method - Name Servers the targets

Over time I have seen a multitude of domains. Here are some of the bigger attacks I have seen. The count represents the number of sub-domains I observed that day. As each sub-domain is only requested once, this is equal to the number of IPs and requests.

Count     Date             Domain
 103294 2014-02-11 .jn176.com
  74525 2014-02-22 .sf123.com
  69164 2014-02-13 .iidns.com
  60176 2014-02-23 .sf123.com
  49855 2014-02-21 .sf123.com
  46308 2014-02-14 .567uu.com
  46023 2014-02-11 .hcq99.com
  41051 2014-02-22 .51pop.net
  31899 2014-02-12 .gx911.com
  30139 2014-02-11 .gx911.com
  29984 2014-02-12 .999qp.net
  28956 2014-02-19 .jd176.com
  27736 2014-02-18 .269sf.com
  27006 2014-02-10 .yinquanxuan.com
  25780 2014-02-14 .iidns.com
  25576 2014-02-15 .567uu.com
  25417 2014-02-05 .139hg.com
  22184 2014-02-23 .52ssff.com
  20424 2014-02-15 .liehoo.net
  19609 2014-02-11 .sf717.com
  19525 2014-02-18 .chinahjfu.com
  19452 2014-02-14 .369df.com
  18496 2014-02-05 .hqsy120.com
  18086 2014-02-18 .5kkx.com
  17932 2014-02-23 .51pop.net
  17257 2014-02-14 .love303.com
  16617 2014-02-15 .cxmyy.com
  16614 2014-02-15 .cc176.com
  16380 2014-02-11 .999qp.net
  16244 2014-02-15 .jdgaj.com
  15977 2014-02-19 .bdhope.com
  15316 2014-02-12 .hcq99.com
  14808 2014-02-19 .seluoluo3.com
  14675 2014-02-14 .422ko.com
  14086 2014-02-19 .250hj.com
  13900 2014-02-22 .5ipop.net
  13477 2014-02-14 .lcjba.com
  13415 2014-02-04 .wb123.com
  13315 2014-02-23 .luse7.com
  13079 2014-02-23 .luse8.com

Name servers:

The above domains use the following name servers, and we can assume that during these attacks these name servers were very difficult to reach.

      4 iidns.com.
      3 hichina.com.
      3 dnsabc-b.com.
      2 dnsabc-g.com.
      3 gfdns.net.
      1 zndns.com.
      1 51dns.com.
      1 360wzb.com.
      1 domaincontrol.com.
      1 dnspod.com.

Most of these name servers belong to Chinese registrars. Some of these registrars are responsible for up to half a million domains.

Spoofed or not?

Each DNS query is received from a different IP address. This suggests spoofing, but not the way it is used in reflective amplification attacks, where it specifies the target. Here it seems to be used to cloak the origin of the queries from the resolvers.

I keep track of a few values for each query that comes in, among them the IP Time To Live (TTL), a value that is not often spoofed.

  Count   TTL
   1074   234
   2226   235
  19106   236
  53624   237
  54193   238
 107010   239
 197934   240
 234902   241
 226965   242
 322752   243
 308978   244
 239031   245
 185288   246
 158441   247
  62255   248
  23045   249

16 different TTLs, not bad; it suggests the traffic comes from all over the globe. That is, until I noticed the following requests for a domain matching this regex:


Two queries occurred within the same hour, but their TTLs were far apart:

ip= ; domain=bryaiqfvenakbsr.www.luse0.com ; count=1 ; qtype=A ; ttl=234
ip= ; domain=izeuvqnkcooofqx.www.luse6.com ; count=1 ; qtype=A ; ttl=247

A difference of 13 hops: that could be the difference between a request from Europe and one from the US. Coming from a single source, a change like that doesn't add up. I call that evidence of spoofing.
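For reference, the hop estimate behind that statement can be written out as a tiny helper, assuming the sender's network stack started the IP TTL at the common initial value of 255:

```python
def hops_from_ttl(received_ttl: int, initial_ttl: int = 255) -> int:
    """Estimate the number of hops a packet travelled, assuming the
    sender's stack started the IP TTL at the given initial value."""
    return initial_ttl - received_ttl

print(hops_from_ttl(234))                        # 21 hops
print(hops_from_ttl(247))                        # 8 hops
print(hops_from_ttl(234) - hops_from_ttl(247))   # the 13-hop gap
```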


For me it is pretty easy to detect these attacks now that I know what to look for. But I am fortunate enough to have very little legitimate traffic, so this malicious traffic stands out nicely. When running a (very) large resolver for a network it will be more difficult to spot, let alone block.

So far the queries I have seen share these characteristics:

- All queries are for A records
- No OPT resource record in the query
- One label is randomized
- The random sub-domain label contains only the characters a-z
- The random sub-domain label length is between 1 and 16
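A heuristic that flags queries matching exactly these characteristics could be sketched like this (the function name and the blacklist set are my own illustration, not a production filter):

```python
import re

# A single randomized label of 1-16 lowercase a-z characters.
RANDOM_LABEL = re.compile(r"^[a-z]{1,16}$")

def looks_like_attack_query(qname: str, qtype: str,
                            abused_domains: set) -> bool:
    """Flag a query only if it matches every characteristic above:
    an A query whose first label is random-looking a-z and whose
    remainder is a domain already on a manually maintained blacklist."""
    if qtype != "A":
        return False
    label, _, rest = qname.partition(".")
    return rest in abused_domains and bool(RANDOM_LABEL.match(label))

print(looks_like_attack_query("bryaiqfvenakbsr.www.luse0.com", "A",
                              {"www.luse0.com"}))  # True
```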


Automatically flagging these domains might result in false positives. So until a characteristic is found that can be used to isolate this traffic more specifically, maintaining blacklists will be mainly manual labor.

One way of dropping this traffic would be by using the iptables string module:

iptables --insert INPUT -p udp --dport 53 -m string --from 34 --to 80 --algo bm --hex-string '|056c7573653003636f6d00|' -j DROP -m comment --comment "DROP DNS Q luse0.com"
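The hex string in that rule is simply the queried name in DNS wire format (each label prefixed with its length, terminated by a zero byte). A small helper to compute it for other domains might look like this (my own sketch):

```python
def dns_wire_name(domain: str) -> str:
    """Encode a domain name in DNS wire format (length-prefixed
    labels, terminated by a zero byte) and return it as the hex
    string the iptables rule above expects."""
    wire = b""
    for label in domain.rstrip(".").split("."):
        wire += bytes([len(label)]) + label.encode("ascii")
    wire += b"\x00"
    return wire.hex()

print(dns_wire_name("luse0.com"))  # 056c7573653003636f6d00
```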

An employee of Secure64 pointed me to their blog about the subject:



[1] - http://community.spiceworks.com/topic/441721-what-does-a-dns-amplification-ddos-attack-look-like
[2] - http://blog.powerdns.com/2014/02/06/related-to-recent-dos-attacks-recursor-configuration-file-guidance/


  1. I have a different opinion on the targets of these attacks. I believe the queries are sent to open resolvers to attempt to DoS them. Each of these queries that recurses to name servers that do not respond consumes a recursive query client slot (BIND option recursive-clients). On my network, the sources of these queries are open resolvers that can be abused in the usual DNS amplification attacks (not the actual source, but the open resolvers that use us as forwarders). Somebody thinks he's saving the world from open recursive resolvers by attempting to DoS them.

  2. I have those issues on two of my servers only, one in Japan and one in Sweden, and we have a bunch of servers. And Phil is right when he states that our servers are used as forwarders, because 95% of the IPs are open resolvers and the rest are dynamic IPs. I first watched these kinds of attacks a few weeks ago, when I disabled "ANY" queries on all servers; almost at the same time these weird queries came up. Maybe that was accidental, I have no idea. I also noticed this only happens on Debian or CentOS systems with the normal BIND versions installed from the repos. On the Ubuntu LTS systems, where I use the latest version of BIND (9.9.5), it does not occur. Maybe there is a connection??? I have no idea. First I started blocking the IPs that sent those queries, but that was not a good solution; then I started using iptables to stop those queries in Sweden. But every day new queries come up with different hostnames. It looks like a dictionary attack on the DNS system. I have no idea...

  3. I work for a large ISP and we are seeing many of these attacks. I'm glad to finally see this article. Over the last couple of weeks I was hitting up Google every once in a while to try and figure out whether we were the only ones being attacked and what the nature of these attacks is.

    We ended up writing some custom scripts to look for the pattern of these queries, identify the domains being used, and then blacklist them at our servers. This way, rather than exhausting the recursive clients, the servers just return NXDOMAIN. It is not an ideal solution, but it is all we have at the moment.

  4. Are the IPs spoofed after all? Every time I get such a weird query on the server in Sweden, which is a Debian 7 system, there is a kernel message in the logfile describing a martian source on eth0. Example:
    Feb 28 11:43:41 servername kernel: [81330.188825] martian source serverIP from, on dev eth0
    Feb 28 11:43:41 servername kernel: [81330.188830] ll header: 02:00:2e:f6:5e:88:00:13:5f:21:29:40:08:00
    Feb 28 11:46:54 servername kernel: [81522.929568] martian source serverip from, on dev eth0
    Feb 28 11:46:54 sto1 kernel: [81522.929572] ll header: 02:00:2e:f6:5e:88:00:13:5f:21:29:40:08:00

    Maybe this is useful for investigation.

    However, @Jim, it would be nice to make your custom script public for others who have the same problems, because for many people it seems to be a hopeless fight.

    1. Glad to see more indicators of spoofed traffic.

  5. Unfortunately my script is quite specific to our system; however, I can describe the method we're using.

    We are using tshark to capture the packets on the front-end interface of our servers, using the following syntax:

    tshark -i -c 1000 -T fields -E separator=, -e ip.src -e dns.qry.name dst port 53

    Actually, depending on your server you could pull the same thing out of query logs, but all this command does is capture 1000 DNS queries to your server and return the records being queried for.

    Then a little bit of perl looks for domains that have many different hostnames being queried. Generally a domain shouldn't have too many different hostnames; I look for domains that have over 60 in a given time period. This pretty accurately identifies those that are generating hostnames.

    So after that I have a list of domains which I then block in our system. If you're using BIND, you could autogenerate a config file with stub zone files and then include it with your bind config and do a "reconfig" to load in the new zones.
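The counting step Jim describes could be sketched in a few lines (a hypothetical Python stand-in for his perl; taking the last two labels as "the domain" is a simplification that mishandles public suffixes such as .co.uk):

```python
def flag_abused_domains(qnames, threshold=60):
    """Group queried names by their registered domain and flag the
    domains with more than `threshold` distinct hostnames, as in the
    comment above.  The last-two-labels domain extraction is naive."""
    hosts_per_domain = {}
    for name in qnames:
        labels = name.rstrip(".").split(".")
        domain = ".".join(labels[-2:])
        hosts_per_domain.setdefault(domain, set()).add(name)
    return [d for d, hosts in hosts_per_domain.items()
            if len(hosts) > threshold]
```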

  6. I too work at a large ISP and am seeing this on practically all of our rDNS servers. I too have developed a script that I am willing to share. It was written to run as a cron job and requires ISC BIND > v9.7.0a1 or any variant thereof that supports logging of SERVFAIL errors. It may or may not work in your instance, but you are free to try it out. In my specific instance, the cron job is set to run every 10 minutes.


    1. Thanks a lot for the script you've published. After adapting it I will test it.
      But did the weird queries stop? I don't get any weird queries anymore on the Sweden server. Well, I must admit that I adjusted some system config files; after a reboot those weird queries went away... Anyway, it is very strange...

    2. Changing some system config files surely could not stop this, as the attack just uses DNS in its traditional form: no amplification, no bogus records, etc.

  7. I'm experiencing the same type of attack, although the source IP addresses are not spoofed.
    Also, recently I've noticed attacks directed against the root name servers:
    42333+ A? biskxggamzytvgpbrgfelxaynzh.com. (49)
    42333 NXDomain 0/1/0 (122)
    30927+ A? kndexoqohbyytaunvnbgmuovs.biz. (47)
    30927 NXDomain 0/1/0 (109)
    21997+ A? qvofqvoducmkjbunylxsondx.org. (46)
    21997 NXDomain 0/1/0 (109)
    10175+ A? vocmpqldupvhugxzdvgupgm.info. (46)

    1. The script above has been edited to NOT block any TLD-based attacks. I am also working on an option to clean up the iptables rules after 48 hours, to keep the iptables ruleset from growing too big; if the domain happens to be hit again within 48 hours, it will get added back. Just for information: I am still seeing 15-20 different domains targeted per evening, changing about every 1-2 hours in a 10-12 hour timeframe (nighttime for me).

  8. Could this be a cache poisoning attack against the authoritative name servers (a Kaminsky attack)?
    The idea being to flood the server with requests (due to the domain being randomized) and simultaneously send out spoofed responses to caching servers in the hope of poisoning their cache.

    I've been seeing this in bursts for the past few days. So far, using a script to block the domains as they come in is working, but the real answer is to get our subscribers to clean up their routers.

  9. Hello!

    What to do when the abused domain is a normal one?
    For example, I have seen a lot of these queries for the domain baidu.com (the Chinese search engine):

    baidu.com - 1946 requests; 1763 sub-domains; 156 unique IPs (in 10 seconds)

    It simply cannot be DROPped.
    Any ideas?

  10. This comment has been removed by the author.

  11. If you run an authoritative DNS server with no need to support recursive queries, this solution might help you:


  13. I wrote a little script that uses tcpdump etc. to find which domains are being abused and adds them to a blacklist file, which is then parsed and blocked.
    It saves a lot of trouble with this kind of attack.
    Feel free to download it and see if it works for you. It sure does for me :)

  14. Quoted from that blog: "Secure64’s DNS Cache has built-in defenses against such an attack. Under attack conditions, the Secure64 resolver will not consume any CPU or memory resources attempting to reach nameservers that it already knows are non-responsive. This adaptive behavior allows the Secure64 resolver to remain 100% available to legitimate clients under such attack conditions." - end of quote.

    Hmm, such a mitigation could create problems with false positives. Imagine the attacker sends the queries toward the name servers of a well-known (and viable) domain: such a mechanism could then block all queries to yahoo.com's name servers, including completely legitimate queries such as www.yahoo.com. The attack would then have the effect of stopping your clients from seeing yahoo.com records.

    A better way is to implement the protective mechanism by blocking client queries, instead of stopping the attempts to reach a name server.
    I have implemented a mitigation mechanism against this DNS slow-drip attack in that way: collect run-time statistics about the top queries and analyze their domain part; based on those hit counts, put a record in a blacklist that includes the client's IP and the domain part of the query that was hit, so that all further queries from that client containing the same domain part are not processed by the DNS server for some period of time. You need some additional memory for storing the blacklist data, of course. IMO it is better than false positives...
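A rough sketch of that per-client blacklist (the class name, threshold, and block duration are my own assumptions, not the commenter's actual code):

```python
import time

class DripBlacklist:
    """Per-(client IP, domain) hit counter with a time-limited block,
    mirroring the mechanism described above: once a client crosses
    the hit threshold for one domain, further queries from that
    client for that domain are refused for a while."""

    def __init__(self, threshold=60, block_seconds=600):
        self.threshold = threshold
        self.block_seconds = block_seconds
        self.hits = {}      # (ip, domain) -> hit count
        self.blocked = {}   # (ip, domain) -> expiry timestamp

    def allow(self, ip, domain, now=None):
        now = time.time() if now is None else now
        key = (ip, domain)
        expiry = self.blocked.get(key)
        if expiry is not None:
            if now < expiry:
                return False          # still blocked
            del self.blocked[key]     # block expired, start over
            self.hits[key] = 0
        self.hits[key] = self.hits.get(key, 0) + 1
        if self.hits[key] > self.threshold:
            self.blocked[key] = now + self.block_seconds
            return False
        return True
```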

  15. Hello, is there any solution to this? It seems my DNS server (a caching server, FreeBSD 10, BIND 9.10) is getting this kind of queries, which are eating the server's RAM (resource exhaustion)... rate limiting has no effect.