krystofbe 4 days ago [-]
Looks like a DNSSEC issue, not a nameserver outage. Validating resolvers SERVFAIL on every .de name with EDE:
RRSIG with malformed signature found for a0d5d1p51kijsevll74k523htmq406bk.de/nsec3 (keytag=33834)
dig +cd amazon.de @8.8.8.8 works, dig amazon.de @a.nic.de works. Zone data is intact, DENIC just published an RRSIG over an NSEC3 record that doesn't validate against ZSK 33834. Every validating resolver therefore refuses to answer.
Intermittency fits anycast: some [a-n].nic.de instances still serve the previous (good) signatures, so retries occasionally land on a healthy auth. Per DENIC's FAQ the .de ZSK rotates every 5 weeks via pre-publish, so this smells like a botched rollover.
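Easy enough to reproduce from any shell, roughly like this (dig flags as in BIND's dig; amazon.de is just a handy signed name):
$ dig amazon.de @8.8.8.8                    # SERVFAIL while validation is on
$ dig +cd amazon.de @8.8.8.8                # +cd skips validation, resolves fine
$ dig +dnssec +multi de. DNSKEY @a.nic.de   # dump the .de keys; +multi prints the key ids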
qazwsxedchac 4 days ago [-]
So a single configuration mistake in a single place wiped out external reachability of a major economy. It happened in the evening local time and should be fixable, modulo cache TTLs, by morning. This will limit the blast radius somewhat.
Still, at this level, brittle infrastructure is a political risk. The internet's famous "routing around damage" isn't quite working here. Should make for an interesting post mortem.
belorn 4 days ago [-]
I am reminded of the warning that zonemaster gives about putting your domain name servers on a single AS, as is common practice for many larger providers. A lot of people do not want others to see this as a problem since a single AS is a convenient configuration for routing, but it has the downside of being a single point of failure.
Building redundant infrastructure that can withstand BGP and DNS configuration mistakes is not that simple, but it can be done.
walrus01 4 days ago [-]
As the CPU/RAM resources to run an authoritative-only slave nameserver for a few domains are extremely minimal (mine run at a unix load of 0.01), it's a very wise idea to put your ns3 or something at a totally different service provider on another continent. It costs less than a cup of coffee per month.
belorn 3 days ago [-]
For a very long time, the computer club I was in operated a DNS server on a Pentium 75MHz, and after the last major hardware upgrade it had a total of 110MB of RAM and 2GB of disk space. It worked great, except that before the upgrade it tended to run out of RAM whenever there was a Linux kernel update, a problem we solved forever by populating all the RAM slots with the maximum the motherboard could handle, reaching that nice 110MB.
psd1 3 days ago [-]
Did you populate the motherboard with the most it could handle, or the most you could assemble from a box of assorted sticks?
Otherwise, 110MB would hint at a fascinating engineering culture at the motherboard manufacturer.
walrus01 3 days ago [-]
If I remember right there were certain very early Pentium 3 processor competitors from VIA and other non-Intel, non-AMD sources (with much worse performance) that had integrated onboard SVGA video, where the video RAM was shared with the system DRAM. Meaning that depending on how you configured the video in the BIOS, you could have something like a 128MB RAM server "minus" 16MB RAM withheld for video, with like 112MB usable by the OS.
But if this guy is talking about a pentium 75 MHz (socket 5 CPU) that's a totally different generation of stuff several generations before that.
account42 3 days ago [-]
This makes sense for larger providers, but for a small/personal website there is literally zero advantage to having distributed authoritative DNS servers when the webserver is on a single host.
Ironically, denic still requires you to have two separate name servers with different IPs for your domain (which can be worked around by changing the IP of the registered name server afterwards lol), a requirement that all other registries I use have dropped or never had because enforcing such a policy at the registry level makes zero sense.
walrus01 3 days ago [-]
For a domain owned by someone in North America, it costs me literally $1.50 a month to have an authoritative only ns3 in Europe on a totally different ISP.
icedchai 3 days ago [-]
It depends. Do you also have email or other services for that domain? The advantage is your email doesn't start bouncing when your single host web site / DNS server is down.
account42 3 days ago [-]
Email bouncing during rare downtimes is hardly that big of an issue - if it's actually important the sender will retry, possibly with a different contact method. And for short downtimes most likely the sender's MTA will just automatically retry a bit later - email is designed to work with temporary failures.
There isn't some magic reliability that everyone needs which just so happens to fall into "not achievable with a single authoritative name server" and "guaranteed with two servers". I'm not saying you should never have more than one, just that it isn't the registry's business to decide what kind of availability guarantees you need for your domain.
icedchai 3 days ago [-]
It's simple enough to get a secondary DNS server somewhere and put it on a $5/month VPS. I use BIND, and DNS replication (AXFR/IXFR) handles it.
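The secondary side is only a few lines of named.conf, roughly like this (a sketch; example.de and 192.0.2.1 stand in for your zone and primary, and the primary needs a matching allow-transfer):
zone "example.de" {
    type secondary;                  # "type slave" on older BIND 9
    primaries { 192.0.2.1; };        # "masters" on older BIND 9
    file "secondary/db.example.de";  # local copy, kept fresh via AXFR/IXFR
};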
ondohotola 3 days ago [-]
Have you ANY clue about the size of .DE's name server infrastructure?
throw0101c 3 days ago [-]
> Have you ANY clue about the size of .DE's name server infrastructure?
Is it more or less than the F-root server run by ISC?
Would .de have more, or less, traffic than some of the root servers?
icedchai 3 days ago [-]
Are you following the thread? We're talking about redundancy for a single domain here.
amiga386 3 days ago [-]
The single domain here is a ccTLD, and DNS's hierarchical nature means your personal domain's redundant DNS can't mitigate an outage at the ccTLD level.
icedchai 3 days ago [-]
Sorry, no. I was responding to "I am reminded of the warning that zonemaster gives about putting your domain name servers on a single AS, as is common practice for many larger providers."
That is not the ccTLD, that is an individual domain and its name servers. I recall being given that warning for early domain registrations.
deepsun 4 days ago [-]
Would not make any sense to do four of them if it's a single AZ. Also, they are geo-aware and routed to your nearest region.
seabrookmx 4 days ago [-]
Are you conflating autonomous system (AS) with availability zone (AZ)?
deepsun 4 days ago [-]
Uhh, you're right, I totally did. Now I see the parent's point, thank you.
pocksuppet 4 days ago [-]
DNS is a centralization risk, yes. Somehow we've decided this is fine. DNSSEC isn't the only issue - your TLD's nameservers could also be offline, or censored in your country.
skywhopper 4 days ago [-]
DNS is barely centralized. Is there an alternative global name lookup system that is less centralized without even worse downsides?
miki123211 3 days ago [-]
The blockchain.
The only thing a blockchain is good for is achieving decentralized consensus on what value a key points to, which is what DNS is.
An alternative way of looking at this is that acquiring domains must be somewhat expensive by definition; either you enforce it at the system level, or you make it free, but then somebody will inevitably grab all the interesting ones and re-sell them to others. A blockchain is the only way to make decentralized financial infrastructure viable.
fc417fc802 4 days ago [-]
GNS is the obvious response here, in addition to the various blockchain based solutions. Nothing that enjoys widespread support or mindshare unfortunately.
Even the current centralized ICANN flavor could be substantially more resilient if it instead handed out key fingerprints and semi-permanent addresses when queried. That way it would only ever need to be used as a fallback when the previously queried information failed to resolve.
account42 3 days ago [-]
GP said it was a risk (and it is), not that there are better alternatives. Not all risks can be eliminated easily but you should still be aware of them.
pocksuppet 4 days ago [-]
BGP, but the names in question are limited to 128 bits, of which at most 48 will be looked up, and you don't get to choose which 48 bits are assigned to you.
greatgib 4 days ago [-]
Normally it should not have been, with cache and all, but that was the past...
Think about what would happen the day that letsencrypt is broken for whatever reason, technical or like having a retarded US leader and being located in the wrong country. Take into account the push by letsencrypt and major web browsers to restrict certificate validity to short periods, like only a few days...
muvlon 4 days ago [-]
Let's Encrypt has to be down for days before people begin to feel the pain. DNS is very different, it breaks stuff immediately everywhere.
tharkun__ 4 days ago [-]
No it doesn't. DNS breaks as soon as TTLs run out. It's your choice to set them so low that stuff breaks immediately.
__float 4 days ago [-]
What do you recommend then? DNS doesn't usually change that often, but if you mess it up when it does, you're in for some pain if TTLs are high!
htgb 4 days ago [-]
Not the one you're replying to, but I'd keep TTL high normally and lower it one TTL ahead of a planned change.
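A worked example with made-up numbers: with a 24h TTL, drop it to 300s at least 24h before the change, migrate, then raise it back. You can check what a cache still holds, since dig shows the remaining TTL in the second field:
$ dig +noall +answer example.de A @8.8.8.8
example.de. 14382 IN A 192.0.2.10   # 14382 = seconds left in that resolver's cache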
kenniskrag 3 days ago [-]
I would define high as "double the time needed to fix a DNS issue" and account for weekends
stouset 4 days ago [-]
This is the way.
account42 3 days ago [-]
Unfortunately you can't set DNS TTL arbitrarily high (or low) without some resolvers ignoring your suggestion and using arbitrary values.
Arnt 3 days ago [-]
Most historical outages lasted minutes or hours. One arguably lasted much longer, when someone lost control of their servers due to civil war.
I haven't followed this closely, but have there been any... shall we say plain outages longer than six hours? That's not an outrageous TTL. Or a day.
ale42 3 days ago [-]
This assumes that the host name you want has been recently queried. If it's not cached, good luck...
tharkun__ 3 days ago [-]
TL;DR: If it's not cached, does it really matter if it's offline for some time?
Long version:
If you're so popular all around that you really really want a very very short TTL, people will query all the time from all the places that "count", won't they? So it's gonna be cached.
If you're not so popular or not all around, what does it matter even if you had a very very short TTL? You're not losing much.
cyberax 4 days ago [-]
Not really? .com and .net are still up
If Let's Encrypt goes down, half of the Internet will become inaccessible in a week.
akerl_ 4 days ago [-]
Presumably if LetsEncrypt goes down and stays down for a week, the sites that go down are the ones that see that their CA went down and at no point in the week take the option to get certs from a different CA?
bluejekyll 4 days ago [-]
I guarantee that there are a ton of sites out there not monitoring their certs.
"The internet's famous "routing around damage" isn't quite working here."
DNS is a lookup service that runs on the internet.
Internet routing of IP packets is what the internet does, and that is working fine (for a given value of fine).
You remind me of someone using the term "the internet is down" who really means: "I've forgotten my wifi password".
LastTrain 4 days ago [-]
Us non pod-people caught his drift.
eru 4 days ago [-]
What's a pod-people?
the8472 4 days ago [-]
fail-closed protocols have introduced some brittleness. An HTTP 1.0 server from 1999 can probably still service visitors today. An HTTPS/TLS 1.0 server from the same year wouldn't.
zelon88 4 days ago [-]
I think I see the point you're making here and I agree.
There is designing something to be fail-closed because it needs to be secure in a physical sense (actually secure, physically protected), and then there's designing something fail-closed because it needs to be secure from an intellectual sense (gatekept, intellectually protected). While most of the internet is "open source" by nature, the complexity has been increased to the point where significant financial and technical investment must be made to even just participate. We've let the gatekeepers raise the gates so high that nobody can reach them. AI will let the gatekeepers keep raising the gates, but then even they won't be able to reach the top. Then what?
I think the point you're trying to make, put another way is in the context of "availability" and "accessibility" we've compromised a lot of both availability and accessibility in the name of security since the dawn of the internet. How much of that security actually benefits the internet, and how much of that security hinders it? How much of it exists as a gatekeeping measure by those who can afford to write the rules?
account42 3 days ago [-]
Backwards compatibility is unfortunately not something security folk care about.
sam_lowry_ 3 days ago [-]
This is why I still run my blog on HTTP/1.1 only.
account42 3 days ago [-]
What no HTTP/1.0 for those of us too lazy to type the Host header into telnet???
sam_lowry_ 3 days ago [-]
Oh, because I host it with a few more sites on my tiny Hetzner cloud server.
fc417fc802 4 days ago [-]
You're not wrong but objecting to fail-closed in a security sensitive context is entirely missing the point.
walrus01 4 days ago [-]
It looks like a failed key replacement during a scheduled maintenance event. Normally this sort of thing is thoroughly tested and has multiple eyes on for detailed review and planning before changes get committed, but obviously something got missed.
wildylion 2 days ago [-]
.ru had this in 2024: https://habr.com/ru/news/790214/
On the other hand, the russian government itself has been doing a stellar job of breaking the internet.
(I hope I'll live to see them all sentenced to life without parole)
account42 3 days ago [-]
Would be interesting to know how something could get missed. You'd think the system was set up so that new keys could not be published without being verified working in a staging system.
Muromec 4 days ago [-]
>So a single configuration mistake in a single place wiped out external reachability of a major economy.
And fuck nothing at all happened as a result.
Our_Benefactors 4 days ago [-]
Prove it? I’m sure many lifespans were lost to stress
pinkgolem 4 days ago [-]
As someone who was on call yesterday: it was a fun experience, but you noticed quickly that everything .de was down and then it was just a waiting game.
We had a short discussion about migrating to .com, but decided risk != reward as no one would know the new TLD.
I assume there are a couple of people working for denic who had a stressful night..
Woodi 4 days ago [-]
> So a single configuration mistake in a single place wiped out external reachability of a major economy.
Real world beats sci-fi :) And isn't that why we love IT? And hate it too, because of the "people in charge"...
throw0101c 3 days ago [-]
> So a single configuration mistake in a single place wiped out external reachability of a major economy.
No different than a bunch of BGP issues we've seen over the years.
And you don't even need DNSSEC for DNS to break things: reminder of the October 2025 AWS outage:
> I have a bad feeling, that the impact will be quite severe for some services, as monitoring, performance, and security services might get disrupted. and just cleaning up is a big mess.. Worst case, some ot will experience outage and / or damage. But maybe I am just overestimating the severity of this.
miki123211 3 days ago [-]
The more interesting question is, could a political adversary do this to a country on purpose, and how hard would that be?
number6 4 days ago [-]
There is the KRITIS law (critical infrastructure law), which tries to enforce some standards to make things not as brittle.
otabdeveloper4 4 days ago [-]
> The internet's famous "routing around damage"
...is only for Pentagon networks and military stuff. It's not for us normal people. (We get Cloudflare and FAANG bullshit instead.)
zelon88 4 days ago [-]
This is actually startlingly true.
Every FAANG company has their own fiber backbone. Why invest in the internet that everyone uses when you can invest in your own private internet and then sell that instead?
profmonocle 3 days ago [-]
It's not like the long-haul fiber not owned by FAANG is a public utility, at least not in most places.
Traffic that goes over "the Internet" traverses some mix of your ISP's fiber, fiber belonging to some other ISP they have a deal with, then fiber belonging to some ISP they have a deal with, etc.
All those ISPs are being paid to provide service, they can invest in their own networks.
account42 3 days ago [-]
And we all know that ISPs are famous for investing in timely infrastructure upgrades.
beeforpork 3 days ago [-]
... wiped out external reachability of a major economy ...
internal reachability (from Germany to .de domains), too... :-)))
dlopes7 4 days ago [-]
I love how I've worked in IT for 20 years and don't understand a single acronym here other than DNSSEC
icedchai 4 days ago [-]
I've been in IT 30+ years, been running DNS, web servers, etc. since at least 1994. I haven't bothered with DNSSEC due to perceived operational complexity. The penalty for a screw up, a total outage, just doesn't seem worth the security it provides.
gerdesj 4 days ago [-]
That was my experience too, until I decided that just running email systems for 30-odd years, when HN says that is unnatural, piqued my weird or something!
I ran up three new VMs on three different sites. I linked all three systems via a private Wireguard mesh. MariaDB on each VM bound to the wg IP and stock replication from the "primary". PowerDNS runs across that lot. One of the VMs is not available from the internet and has no identity within the DNS. The idea is that if the Eye of Sauron bears down on me, I can bring another DNS server online quite quickly by fiddling the records. It also serves as a third authority for replication.
Now I have DNS with DNSSEC and dynamic DNS and all the rest. This is how you start signing a zone and PowerDNS will look after everything else:
# pdnsutil secure-zone example.co.uk
# pdnsutil zone set-nsec3 example.co.uk
# pdnsutil zone rectify example.co.uk
Grab a test zone and work it all out first, it will cost you not a lot and then go for "production".
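If memory serves, these will sanity-check the result before you publish the DS at the registrar:
# pdnsutil show-zone example.co.uk        # keys, NSEC3 params, status
# pdnsutil export-zone-ds example.co.uk   # DS records to hand to the parent
# pdnsutil check-zone example.co.uk       # catches the common mistakes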
My home systems are DNSSEC signed.
qingcharles 4 days ago [-]
How simple sysadmin was in 1994 with no cryptography on any protocol. Everything could be easily MITM'd. Your credit card number would get jacked left and right in the 90s.
icedchai 4 days ago [-]
Nobody was taking credit cards online then. Your telnet sessions were easily sniffed, however.
qingcharles 4 days ago [-]
Not in '94, sure. But a couple of years later it was common and SSL was still uncommon, for a bunch of reasons, and also everyone was storing the card numbers in plaintext on their servers too.
Telnet was sniffed. IRC was being sniffed and logged.
icedchai 3 days ago [-]
Yes, I worked on some early ecommerce sites. Often, we'd accept credit cards with SSL and then send them out with email (plain text SMTP) to the customer, for manual entry. Very secure.
account42 3 days ago [-]
And your mailman can also just open your letters. So what, it mostly doesn't happen in developed countries. Not everything needs an airtight technical solution, we have way less costly ways to deal with unwanted behavior.
gerdesj 4 days ago [-]
Cool. Feel free to explain how to tighten things up.
I've just given them part of a recipe for using DNSSEC. I suspect you are not actually human .. qingcharles.
qingcharles 4 days ago [-]
I don't even understand what your comment is about, my dude. Given who a recipe? DENIC?
icedchai 3 days ago [-]
Look at his previous post. He described how to set up DNSSEC with PowerDNS.
walrus01 4 days ago [-]
To be fair, advanced real world knowledge of public/private key PKIs (x.509 or other), things like root CAs, is a fairly esoteric and very specialized field of study. There are people whose regular day jobs are nothing but doing stuff with PKI infrastructure, and their depth of knowledge on many other non-PKI subjects is probably surface level only.
hannob 4 days ago [-]
I know quite a bit about PKI and X.509, and I can tell you that much: the overlap with how DNSSEC works is limited.
silisili 4 days ago [-]
As is the overlap between DNSSEC and DNS itself, to be honest.
I once worked at the level of administering DNSSEC for 300+ TLDs. It's its own world. When that company was winding down, I tried to continue in the field but the most common response (outside of no response, of course), was 'we already have a DNS team/vendor/guy.'
And well, then things like this happen. I won't throw stones though, it's a lot to learn and can be incredibly brittle.
mschuster91 4 days ago [-]
It's not made easier by the fact that a lot of cryptography is either very old and arcane or it's one hell of a mess of code that doesn't make sense without reading standards.
I had the misfortune of having to dig deep into constructing ASN.1 payloads by hand [1] because that's the only thing Java speaks, and oh holy hell is this A MESS because OF COURSE there's two ways to encode a bunch of bytes (BIT STRING vs OCTET STRING) and encoding ed25519 keys uses BOTH [2].
And ed25519 is a mess in itself. The more-or-less standard implementation by orlp [3] is almost completely lacking any comments explaining what is going on where and reading the relevant RFCs alone doesn't help, it's probably only understandable by reading a 500 pages math paper.
It's almost as if cryptographers have zero interest in having interested random people join the field.
The trick to asn.1 is to generate both parser and serializer from the spec. Elliptic curve math on the other hand is ... yeah, you need to know the math and also know the tricks in the code that implements it. Both of those have a steep learning curve, but it's hardly because it's a mess or it's old.
thayne 4 days ago [-]
The problem with ASN.1 is that it is big and complicated, and you only need a fraction of it for cryptography, and it isn't really used for anything outside of pki anymore.
It wouldn't be as bad if asn.1 had caught on more as a general purpose serialization format and there were ubiquitous decent libraries for dealing with it. But that didn't happen. Probably partly because there are so many different representations of asn.1.
A bespoke serialization specifically for certificates might actually have aged better, if it was well designed.
jll29 4 days ago [-]
Assuming there are some libraries for it, would this make a pretty good case for LLM-generated ports of these existing libraries into other languages or onto other OSs/platforms?
One implementation could be treated as "the spec".
pocksuppet 4 days ago [-]
ASN.1 is protobufs designed by committee. It is a general-purpose serialization format, but there's no good reason to choose it instead of protobufs.
thayne 3 days ago [-]
> It is a general-purpose serialization format
Yes. My point is that in practice it hasn't really been used for much outside of cryptography.
> there's no good reason to choose it instead of protobufs
Well, the reason it is used in a lot of the places it is, is because protobufs didn't exist when those protocols or file formats were created.
There are also some things that ASN.1 does better at a technical level. Of important significance to cryptography is that the DER representation is "canonical", meaning that there is only one way to serialize a set of data to bytes. That's important because it means that you can just hash the contents of the serialization for signatures, rather than having to have some kind of separate canonicalization step (which is a common source of mistakes).
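You can watch that property from a shell (a sketch with openssl; cert.pem is any certificate you have handy). Re-encoding the same data to DER is byte-identical everywhere, so hashing the serialized bytes is safe:
$ openssl x509 -in cert.pem -outform der | sha256sum   # stable digest: DER is canonical
$ openssl asn1parse -in cert.pem | head                # peek at the underlying TLV structure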
mschuster91 4 days ago [-]
> Both of those have steep learning curve, but it's hardly because it's a mess or it's old.
Bitpacking structures used to be important in the 60s. That time has passed; unless you're dealing with LoRa, NFC or other cases of highly constrained bandwidth, there are way better options to serialize and deserialize information. It's time to move on, and the complexity of all the legacy garbage in crypto has been the cause of many a security vulnerability in the past.
As for the code, it might be personal preference but I'd love to have at least some comments referring back to a specification or original research paper in the code.
Muromec 4 days ago [-]
I think you misunderstand the problem asn.1 solves and the constraints it works within (both 30 years ago and now). We sure can have a better one now that we've learned all the lessons and know what good parts to keep, but this critique of bitpacking is misplaced.
Avamander 4 days ago [-]
ASN.1 is not used because of just bitpacking. There are other benefits to ASN.1 and it's probably one of the least problematic parts there.
People who have thought they can do better have made things like PGP. It's one of the worst cryptographic solutions out there. You're free to try as well though.
Muromec 4 days ago [-]
People who thought they could do better did JWT, which is not complicated at all and has no bugs as well. Also solves 20% of what asn.1 is used for.
thayne 4 days ago [-]
Maybe a bit pedantic, but it would actually be the more general JOSE which includes tokens (JWT), signatures (JWS), and key transmission (JWK).
And there is a related binary format that uses CBOR (COSE) as well.
tptacek 4 days ago [-]
The trick to ASN.1 is to serialize/unserialize it backwards.
dwattttt 4 days ago [-]
#1 NSA, I get it now!
cyberax 4 days ago [-]
X.509 is a deep legacy, but at least at this point it's well tested.
I'm 100% certain that you also can do that with raw java.security. I did that about 15 years ago with raw RSA/EC keys. You can just directly specify the private exponent for RSA (as a bigint!) or the curve point for EC.
Ditto for ed25519, you can just take the canonical implementation from DJB. And you really really shouldn't do that anyway, please just use OpenSSL or another similar major crypto library.
mschuster91 4 days ago [-]
> I'm 100% certain that you also can do that with raw java.security.
I tried that, the problem is Meshcore specific - they do their own weird shit with private and public keys [1]. Haven't figured out how to do the private key import either, because in the C source code (or in python re-implementations) Meshcore just calls directly into the raw ed25519 library to do their custom math... it's a mess.
I'm playing with LORA/Meshcore right now (I have an nRF52840 lying around). I'm pretty sure I know how to do that, will take a look.
Muromec 4 days ago [-]
I wouldn't recommend touching openssl (the library; the command line tools are okay-ish) with anything that breathes life.
tptacek 4 days ago [-]
The typical vector for entering cryptography as a professional is called "grad school".
hathawsh 4 days ago [-]
Is that actually true, though? Even though it's not really my job, I find myself debugging certificates and keys at least once a month, and that's after automating as much as possible with certbot and cloud certificates. PKI always seems to demand attention.
walrus01 4 days ago [-]
In my initial comment, I meant more in terms of complexity and planning from the perspective of the people who are running the public/private key infrastructure on the other side/upstream of what you're doing as a letsencrypt end user.
Broadly similar general concept to the team responsible for the DNSSEC signing keys for an entire ccTLD.
Yeah, an x.509 PKI / root CA is a very different thing than DNSSEC, but they have a number of general logical similarities in that the chain of trust ultimately comes down to a "do not fuck this up" single point of failure.
bflesch 4 days ago [-]
Don't worry, that's by design ;)
1vuio0pswjnm7 3 days ago [-]
Curious how this comment was originally marked as "dead" but is now on top
I saw it at bottom of thread and vouched for it. Usually when I "vouch" nothing happens
I always ignore the RRSIG lines in zone files. To me it's not "DNS data", it's cruft
But DNSSEC has its true believers. I'm just not one of them
Interesting "bus problem" to have in a scenario where everyone who is qualified, experienced and trusted enough to commit lives changes (or perform a revert, undo results of a botched maintenance, etc) in an emergency situation is not completely sober.
femto 4 days ago [-]
Sobriety is just one factor to be weighed in an emergency situation. 30 years ago I was at a ski resort with about 50 friends having a drinking competition in the resort's main bar. Late that night two ski lodges collapsed, trapping people inside. Around midnight, soon after the winner was announced, the police entered and asked "who's able to drive a crane truck?" The winner of the competition put his hand up and informed them of how much he had had to drink. Don't care they said, so he drove a crane big enough to lift a building up a single lane 35km mountain road in nighttime ice conditions. (The crane made it, but sadly most of the people in the ski lodges didn't. https://en.wikipedia.org/wiki/1997_Thredbo_landslide )
jamesfinlayson 4 days ago [-]
Sounds like Australian police. I remember 15 or so years ago being in a big team assisting the Australian police with something on a remote farm. There were 20 people that needed to be taken back to base and one 10 seater car. Someone asked the police if everyone could get in the car and policeman shrugged and said you can try. So the policeman drove a four wheel drive across farmland with 16 people stuffed into the back.
Muromec 4 days ago [-]
Sounds like Europe, yes.
FinnKuhn 4 days ago [-]
A real party killer if I have ever seen one.
SOLAR_FIELDS 4 days ago [-]
At least all of the appropriate people were in a room together when the outage happened
SpaceNoodled 4 days ago [-]
Sounds like poor risk pooling. If that room crashed, we'd have nobody to fix this.
bflesch 4 days ago [-]
Nation state actor picking the right time to sabotage a tiny part of the key rotation process. On Monday someone cut major fiber lines, on Tuesday DENIC is failing.
maybe someone is showing off?
SOLAR_FIELDS 4 days ago [-]
Unironically yeah, we are at the level of weaponizable sophistication that this metaphorical dick waving you are suggesting is probably something that happens
That only worked because the attacker didn't understand dnssec. If they had unsigned the domain first and then hijacked it they would have succeeded.
I haven't been able to find any cases of genuine dns hijack attacks in the last few years. Would love to know if anyone else can?
Only about 40% of the crypto companies seem to use dnssec. Seems like a target rich environment.
thayne 4 days ago [-]
Probably the most common reason to use DNSSEC is to check a box on a list of compliance rules. And I don't think this will change anything for people who need DNSSEC for compliance.
tptacek 4 days ago [-]
There's no commercial compliance regime that requires DNSSEC (FedRAMP might be the only exception --- I'm uncertain about the current state of FedRAMP DNSSEC rules --- but that makes sense given that DNSSEC is a giant key escrow scheme.)
thayne 4 days ago [-]
FedRAMP requires it, although like many requirements, you may be able to get out of it if you have a good reason and/or your sponsoring agency doesn't care about it.
There are also some large businesses that require, or strongly pressure SaaS providers to use DNSSEC. You can often contest that, but if you have DNSSEC, that's one less thing to argue about in the contract.
tptacek 4 days ago [-]
Which businesses are those? (I ask because if they're North American, I have a pretty good sense of which large North American businesses even have DNSSEC signatures set up, and it's not many; small enough that you can easily memorize the list.)
whh 3 days ago [-]
I found another reason... MS365 requires DNSSEC to be enabled if you want DANE for TLS-enforced SMTP. You could also use MTA-STS.
matteocontrini 3 days ago [-]
As far as I know, the DANE spec (RFC 7671) requires DNSSEC to be enabled, while MTA-STS does not.
tptacek 3 days ago [-]
MTA-STS was standardized explicitly to support the (nearly universal) use case of mail providers without DNSSEC. Even O365, which ostensibly supports DANE/DNSSEC for email security, does so only for select customers and not for ordinary ones (go look for the TLSAs).
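(Looking is a one-liner, for what it's worth; the DANE owner name for SMTP is _25._tcp.<mx-host>, and mail.example.de below is a placeholder:)
$ dig +short example.de MX                    # find the MX host
$ dig +dnssec _25._tcp.mail.example.de TLSA   # the TLSA, if any; only meaningful when it validates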
pocksuppet 4 days ago [-]
Probably the most common reason to use TLS is to check a box on a list of compliance rules. Is that bad?
weird-eye-issue 4 days ago [-]
Do browsers even load non-HTTPS sites anymore without a massive warning?
red_admiral 3 days ago [-]
neverssl.com works fine for me, only a small warning in the place where the padlock usually is, that no-one checks anyway.
The browser would be very unhappy with an <input type="password"/> on a non-TLS site (localhost excepted). HSTS would trigger the "massive" warning and refuse to load the site, however.
weird-eye-issue 3 days ago [-]
It's more pronounced on desktop
Ah yes I think the HSTS issue is what I was thinking of
pocksuppet 4 days ago [-]
Yes, they do.
weird-eye-issue 4 days ago [-]
Yeah just ignore the big "not secure" warning in the URL bar
pocksuppet 4 days ago [-]
I just checked it. You mean the very small open padlock icon? The era of browsers warning loudly about HTTP was a decade ago, it got reversed due to pushback.
weird-eye-issue 3 days ago [-]
Well I checked both Chrome and Firefox on mobile and my desktop and they were all much more obvious than just an "open padlock". They both said "Not Secure" and in Firefox it was bright red text. Also in incognito mode Chrome refused to even open the site without a full screen warning. They all make it super clear non-HTTPS sites are not secure so I'm not really sure what your point is?
liveoneggs 4 days ago [-]
browsers pushed it, not compliance
jeroenhd 4 days ago [-]
I doubt it. The root cause of this was a misconfiguration or bug at the registry's nameservers. It happened to DNSSEC records this time, which is a pain, but next time it might as well flip bits or point to wrong IP addresses instead.
Paradoxically, resolvers wouldn't have noticed the misconfiguration if it weren't for DNSSEC.
amluto 4 days ago [-]
Hahaha. You wish :-p
tptacek 4 days ago [-]
It's a pretty hard argument to work around: WebPKI certificates should go in the DNS, and also the largest DNS providers might at any moment decide not to validate DNSSEC anymore to get through an outage.
farfatched 4 days ago [-]
Yes, it's a crappy outcome, but endpoints can still choose to enforce this. Further, it's not a persuasive argument against more DNSSEC usage, since if there was more DNSSEC usage then resolvers would be more reluctant to disable it.
pocksuppet 4 days ago [-]
If there's going to be a single point of failure in front of your website, that single point of failure may as well be the only single point of failure instead of having two single points of failure, and it's probably important that people can't spoof responses.
akerl_ 4 days ago [-]
Nobody had to hack it. A system at DENIC broke, and so Cloudflare turned off DNSSEC validation for all of their users accessing .de. If DNSSEC was actually important for the security model of those users, that would be a huge deal.
phicoh 3 days ago [-]
If DNSSEC is part of your security model, you want local validation, not reliance on a third-party resolver that you don't have a contract with.
Beyond that, DNS has the AD bit. If you need DNSSEC secure data (for example for the TLSA record), then when Cloudflare turns off DNSSEC validation, the AD bit will be clear and things will stop working.
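A quick way to see it (any signed name works; denic.de is just an example):
$ dig +dnssec denic.de A @1.1.1.1 | grep flags
If "ad" shows up in the flags line, the resolver claims it validated; if it's missing, validation is off or the chain is insecure, and anything that needs secure data (like that TLSA record) must treat the answer as untrusted.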
amluto 3 days ago [-]
Am I the only one who thinks that the AD bit is about as useful as the RFC 3514 evil bit?
We have this elaborate, complex, and extremely fragile cryptographic system behind DNSSEC and we distill it down to one single bit that we carry over unauthenticated links. Why?
At least WebPKI answers the right question: should I trust a particular claim to represent host.domain at the time in the following range? (Of course it defers determining the current time to some unspecified other mechanism.) DNSSEC tries to do everything and cannot survive an upstream error even within the downstream validity window. And yet, despite the fact that most of the spec leans heavily toward failing secure, the actual communication of validation status is entirely unprotected.
tptacek 3 days ago [-]
I can answer that! Because when DNSSEC was designed, it was believed that serverside compute could not keep up with per-request cryptography. DNSSEC contorts itself in several ways to maintain affordances for offline cryptography, which has been retconned into a security mechanism but was in reality just a bunch of non-cryptography-engineers making a terrible prediction about the feasibility of cryptography.
(Source: I'm one of the few weirdos on Earth who has read the mailing lists all the way back to when DNSSEC was a TIS project).
pocksuppet 3 days ago [-]
The intention is clearly that the client is a minimal implementation that will only forward a request to a resolver it trusts. The fact that Cloudflare and Google have convinced us all to use Cloudflare's and Google's resolvers is the problem.
DNSSEC and WebPKI both rely on chains of trust. If the problem was that .de's keys expired, you'd have the same problem when Let's Encrypt's keys expired.
akerl_ 3 days ago [-]
> If the problem was that .de's keys expired, you'd have the same problem when Let's Encrypt's keys expired.
Even this incident proves that’s not the case.
If LetsEncrypt has a temporary availability issue, my users don’t notice unless it spans longer than my need to renew a cert.
If LetsEncrypt has a CA cert expire, I can get a cert from another provider.
If DENIC’s DNSSEC records break, either due to an operational error or an expiry issue, my .de site becomes inaccessible and my users see a DNS lookup failure. My only option is to hope resolvers do what Cloudflare did, or move my site to a new TLD and just pray that TLD never has the same problem.
tptacek 3 days ago [-]
The WebPKI works end-to-end, all the way to user devices; DNSSEC builds an explicit client/server trust model into that. The former is obviously superior to the latter.
Yes, it's also quite damaging to DNSSEC's trust model that the world has transitioned to centralized resolver caches. But the fundamental problem we're talking about with the AD bit wouldn't vanish if 8.8.8.8 and 1.1.1.1 did too; instead, users would be even more reliant on ISP nameservers, which are literally the least trustworthy pieces of infrastructure on the entire Internet.
tptacek 4 days ago [-]
This is a non sequitur.
cluckindan 4 days ago [-]
If it turns out the DNSSEC issue was caused by threat actors, this downstream effect could very well have been the reason to do it.
amluto 4 days ago [-]
It is indeed a bit sad that Cloudflare had to turn off DNSSEC completely. But I completely understand that they don't have a production-ready, tested path to override DNSSEC validation for only some domains.
vendemiat 4 days ago [-]
Sorry! status message was not clear. DNSSEC validation is temporarily disabled only for .de domains.
tptacek 4 days ago [-]
That's not much better!
fastest963 4 days ago [-]
[flagged]
jonah-archive 4 days ago [-]
Originally it said:
---
The issue has been identified as a DNSSEC signing problem at DENIC, the organization responsible for the .DE top-level domain. Cloudflare has temporarily disabled DNSSEC validation on 1.1.1.1 resolver in order to allow .DE names to continue to resolve. DNSSEC validation will be re-enabled when the signing problems at DENIC are known to have been resolved.
---
(and in case it changes again, now it says)
---
The issue has been identified as a DNSSEC signing problem at DENIC, the organization responsible for the .DE top-level domain. Cloudflare has temporarily disabled DNSSEC validation for .de domains on 1.1.1.1 resolver (as per RFC 7646) in order to allow .DE names to continue to resolve. DNSSEC validation will be re-enabled when the signing problems at DENIC are known to have been resolved.
The RFC 7646 thing here is the funniest possible addition. This is the greatest day.
tptacek 4 days ago [-]
It didn't originally say that. They added the clarification just a few minutes ago. The guidelines ask you not to ask people these kinds of questions, for what it's worth.
petee 3 days ago [-]
Temporarily is a fairly important word to include with that link
account42 3 days ago [-]
This seems like it should be the bigger news here. Disappointing knee jerk reaction from Cloudflare.
liveoneggs 4 days ago [-]
We only disabled SSL on all the websites in one country for a little bit.. I'm sure those credit card numbers were perfectly safe over the wire
weird-eye-issue 4 days ago [-]
They didn't disable SSL you dingus.
liveoneggs 3 days ago [-]
it was an analogy, to try to highlight how silly "security" is when it's opt-in and any intermediary can just disable it
weird-eye-issue 3 days ago [-]
I'm not sure you know how analogies work
acdha 3 days ago [-]
That comparison really makes the contrast clear: losing TLS would’ve put millions of people either into full downtime or immediately at significant risk (you can’t uncapture data). Losing DNSSEC, however, placed no one at risk and improved uptime.
There’s a reason why one of the two has roughly 10% adoption after three decades and the other is high 90-something percent.
pocksuppet 4 days ago [-]
I must be early. There's not a single tptacek DNSSEC rant in this thread yet.
tptacek 4 days ago [-]
What would I need to rant about? Sometimes the world does my ranting for me.
0123456789ABCDE 4 days ago [-]
doesn't this event speak for itself though?
Avamander 4 days ago [-]
Kind-of. But there are worse things than outages when it's PKIs we're talking about. DNSSEC is also extremely opaque and unmonitored. Any compromise will not be noticed. Nor will anyone have any recourse against misbehaving roots.
Fun fact, CloudFlare has used the same KSK for the zones it serves for more than a decade now.
daneel_w 4 days ago [-]
Which is fine. Not because KSK rollover is supposedly complicated, but if you can't manage to keep your private keys and PKI safe in the first place then key rotation is just a security circus trick. But if you do know how to keep them safe, then...
Avamander 4 days ago [-]
It is not fine. Keeping key material safe is not a boolean between "permanently safe" and "leaks immediately".
Keeping key material secure for more than a decade while it's in active use is vastly more complex than keeping it secure for a month, until it rotates.
For all we know, some ex-employee might be walking around with that KSK, theoretically being able to use it for god knows what for another decade.
daneel_w 3 days ago [-]
Yeah, theoretically. They "only" need continued access to CF's internal systems. Surely you're aware that the ZSK is confined to your zone and can be rotated as much as you want without having to involve the root/registrar, and with none of the risks or consequences of not knowing how to perform a KSK rollover?
What's your take on the conundrum of Amazon Trust's 20+ year root cert, with which they sign a 5+ year intermediate, with which they sign a 2-month leaf?
cyberax 4 days ago [-]
> Keeping key material secure for more than a decade while it's in active use is vastly more complex than keeping it secure for a month, until it rotates.
Nope. Key material rotation is just circus when it's done for the sake of rotation.
> For all we know, some ex-employee might be walking around with that KSK, theoretically being able to use it for god knows what for an another decade.
Or maybe an employee has compromised the new key that is going to be rotated in, while the old key is securely rooted in an HSM?
tptacek 4 days ago [-]
The point of rotation for these kinds of keys is that it limits the blast radius of what happens if an employee compromises such a key. This is sort of like how there are one or two die-hard PGP advocates who have come up with a whole Cinematic Universe where authenticated encryption is problematic ("it breaks error recovery! it's usually not what you want!") because mainstream PGP doesn't do it. Except here, it's that key rotation is bad, because of how often DNSSEC has failed to successfully pull off coordinated key rotations.
cyberax 4 days ago [-]
I can see the periodic rotations used as a way to keep up the operational experience. This is indeed a valid reason, although it needs to be weighed against the increased risk of compromise due to the rotation procedure itself.
I'm just saying that rotating the key just in case someone compromised it is not a great idea. Doubly so if it's done infrequently enough for the operational experience to atrophy between rotations.
And yeah, I fully agree that anything surrounding the DNSSEC operations is a burning trash fire. It doesn't have to be this way, but it is.
tptacek 4 days ago [-]
I'm glad we agree about DNSSEC, but the rationale I'm giving you for key rotation is the same reason we use short-lived secrets everywhere in modern cryptosystems. It's not controversial (except among Unix systems administrators).
cyberax 4 days ago [-]
Oh, I never disagreed about the state of DNSSEC. It's horrible. Along with the rest of the DNS infrastructure (I just had the reason to remember the DNS haiku again today, unrelated to .de). My disagreement is that I believe that DNSSEC should be fixed, rather than abandoned. And I believe that this does not actually require all that much work.
And I just don't fully buy this rationale for asymmetric key rotation. It makes total sense for symmetric secrets (except for passwords).
Avamander 3 days ago [-]
> Or maybe an employee has compromised the new key that is going to be rotated in, while the old key is securely rooted in an HSM?
Also possible, but that'd be an active threat that has some probability of being caught.
Never replacing keys allows permanent compromise that can only be caught if someone directly observes misuse.
Though nobody monitors DNSSEC like that, nor uses it, so it's fine from that aspect I guess.
jcgl 3 days ago [-]
> Nope. Key material rotation is just circus when it's done for the sake of rotation.
I'm a mere sysadmin and not a cybersecurity expert. But this is always something that leaves me torn.
On the one hand, yes, rotation periods for many/most credentials are long enough that you're not really de-risking yourself all that much.
On the other hand, doing regular rotations allows you to tighten up your threat model. A regularly-rotated credential allows you to say "I implicitly trust that this credential has not been compromised prior to the previous rotation."[0] Whereas, without credential rotation, you're saying "I implicitly trust that this credential has not been compromised ever."
The latter to me seems clearly like the inferior model. The question is just whether the cost-benefit pencils out. And that is obviously very situationally dependent. That calculus doesn't pencil out when dealing with user-owned passwords for instance (i.e. the costs of regular password rotation dominate the benefits of the improved threat model). Human limitations with memory and such are the main issue there. However, that doesn't apply to e.g. hypothetical sufficiently developed DNSSEC infrastructure. Does that calculus pencil out there? I don't know. But it seems plausible at least.
[0] Modulo attackers having been able to pivot into a persistent threat with a previously-compromised credential.
account42 3 days ago [-]
No?
pocksuppet 4 days ago [-]
Let's Encrypt going down isn't equivalent to a rant about how encryption was a terrible idea from the very beginning and we should all just use unencrypted traffic.
tptacek 4 days ago [-]
Pretty sure that rant doesn't exist.
greensh 4 days ago [-]
It does kinda? At least the part about too much security, and it's really funny: https://tom7.org/httpv/httpv.pdf also available as a video on YouTube.
sam_lowry_ 3 days ago [-]
I host my blog on HTTP/1.1 only. But I also have an amateur radio station and I listen occasionally to (unencrypted!) air traffic frequencies around nearby airport.
0123456789ABCDE 3 days ago [-]
not to disagree on the merits of encryption — i'm not a clown, but scripting.com is still port 80 only, and Dave is the type to write a rant
petee 3 days ago [-]
Perhaps its more fair to call it 'passionate'.
That said, the last few dnssec posts that got traction, tptacek tends to be at least 20% of the comments alone (ex, 55/259), ignoring word count. Today seems calm
0123456789ABCDE 3 days ago [-]
"When the enemy is making a false movement, we must take good care not to interrupt him." — some guy, you wouldn't have hear of him
apaprocki 4 days ago [-]
Maybe he drank a little too much Malört with the DENIC team last night?
We should frame it as "all .de domains are ready to be impersonated because everyone will disable DNSSEC".
chromehearts 4 days ago [-]
I was STRESSING tf out because I wasn't able to connect to my services & apps through my domains like at all .. they only work when using my phone data ? .. thank god it's not my fault this time
Locke80 4 days ago [-]
But we're Germans, and we need someone to blame.
lschueller 4 days ago [-]
Thank god for the german chain of blame:
1. The system
2. The neighbor
3. China
warpspin 4 days ago [-]
You definitely forgot Merkel and Habeck.
Cockbrand 4 days ago [-]
Danke Merkel!!1!11!!
AndroTux 4 days ago [-]
I'm blaming chromehearts anyways
chromehearts 4 days ago [-]
I can live with that
siva7 4 days ago [-]
Crazy. I can't remember an incident like this ever happening before, and it's still not fixed? .de is probably the most important unrestricted domain after .com from an economic perspective. Millions of businesses are "down".
> For instance, the name "www.nytimes.com" corresponds to nine different computers that answer requests for The New York Times on the Web, one of which is 199.181.172.242
In other words: I expect this German DNS SNAFU to have 0.000000001% impact on the world's GDP this year.
ulfw 4 days ago [-]
How is 1/10th the size of number 1 and 2 COMBINED small? In what world is that a small number? Especially as those two are 1.8 billion people vs 0.08 billion for Germany
tommit 3 days ago [-]
This comparison threw me for such a loop. What an odd way to present a point.
itsyonas 4 days ago [-]
> In other words: I expect this German DNS SNAFU to have 0.000000001% impact on the world's GDP this year.
126 trillion USD * 0.00000000001 = 1260USD
I'm pretty sure the impact was higher than that ;)
NooneAtAll3 4 days ago [-]
what's SME?
phillipseamore 4 days ago [-]
Small/Medium enterprises
carstenhag 4 days ago [-]
Well it was already very late in the day (21-22?) so the impact was not big I would say
Even when every site in the world’s 3rd biggest economy goes down it’s still just a ‘Partial’ service disruption :D
yorwba 4 days ago [-]
Not every site, just the ones using DNSSEC. Clearly, denic.de was online, for instance.
tom1337 3 days ago [-]
Sites without DNSSEC have also been affected. If you could reach any .de page, you had the DNS entries cached.
gruselhaus 4 days ago [-]
Whole Germany is offline. DENIC: "Partial Service Disruption". That's one way to phrase it.
MASNeo 4 days ago [-]
At least they have some humor left.
Edit: Now even the humor is gone.
sunaookami 4 days ago [-]
Can only be topped when the status page is not reachable anymore :D
EDIT: called it...
lschueller 4 days ago [-]
Or only accessible through a german dns server
niklasrde 4 days ago [-]
It says "Server Not Found" now
cubefox 4 days ago [-]
"All Systems Operational"
Zopieux 4 days ago [-]
Yes, it's fixed.
tom1337 4 days ago [-]
I have never used DNSSEC and never really bothered implementing it, but do I understand it correctly that we took the decentralized platform DNS was and added a single-point-of-failure certificate layer on top of it which now breaks because the central organisation managing this certificate has an outage taking basically all domains with them?
gucci-on-fleek 4 days ago [-]
> which now breaks because the central organisation managing this certificate has an outage
The ".de" TLD is inherently managed by a single organization, and things wouldn't be much better if its nameservers went down. Some of the records would be cached by downstream resolvers, but not all of them, and not for very long.
> we took the decentralized platform DNS was and added a single-point-of-failure certificate layer on top of it
DNSSEC actually makes DNS more decentralized: without DNSSEC, the only way to guarantee a trustworthy response is to directly ask the authoritative nameservers. But with DNSSEC, you can query third-party caching resolvers and still be able to trust the response because only a legitimate answer will have a valid signature.
Similarly, without DNSSEC, a domain owner needs to absolutely trust its authoritative nameservers, since they can trivially forge trusted results. But with DNSSEC, you don't need to trust your authoritative nameservers nearly as much [0], meaning that you can safely host some of them with third-parties.
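You can verify that independently with delv, BIND's validating lookup tool - it validates against the root trust anchor locally, no matter which resolver served the cached answer (denic.de is just an example name):
$ delv @8.8.8.8 denic.de A   # prints "; fully validated" when the chain checks out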
> DNSSEC actually makes DNS more decentralized: without DNSSEC, the only way to guarantee a trustworthy response is to directly ask the authoritative nameservers. But with DNSSEC, you can query third-party caching resolvers and still be able to trust the response because only a legitimate answer will have a valid signature.
but how would one verify the signature if the DNSKEY expired and you cannot fetch a fresh one because the organisation providing those keys is down? As far as I understood the TTL for those keys is different and for DENIC it seems to be 1h [0]. So if they are down for more than an hour and all RRSIG caches expire, DNS zones which have a higher TTL than 1h but use DNSSEC would also be down?
[0]
dig RRSIG de. @8.8.8.8
de. 3600 IN RRSIG DNSKEY 8 1 3600 20260519214514 20260505201514 26755 de. [...]
gucci-on-fleek 4 days ago [-]
> but how would one verify the signature if the DNSKEY expired and you cannot fetch a fresh one because the organisation providing those keys is down?
In theory, this shouldn't happen, because if you use the same TTLs for your DNSSEC records and your "regular" records, then if the regular records are present in the cache, the DNSSEC records will be too.
> So if they are down for more than an hour and all RRSIG caches expire, DNS zones which have a higher TTL than 1h but use DNSSEC would also be down?
Yes, but I'd argue that the DNSSEC records should have the same TTLs for exactly this reason. That's how my domain is set up at least:
$ dig +nocmd +nocomments +nostats +dnssec @any.ca-servers.ca. maxchernoff.ca. DS
;maxchernoff.ca. IN DS
maxchernoff.ca. 86400 IN DS 62673 15 2 487B95FEFF04265826F037C9DB2E1F14FF9ADBF2C7BE246A2B9F9BFD 481BE928
maxchernoff.ca. 86400 IN RRSIG DS 13 2 86400 20260512131336 20260505104433 46762 ca. ppc9LrWniPWdAI2Xq1g3FrYJGQVYayA5TtgFRkJfqOqNfe6zu/n0gwti IO3c9pOoUpIum5gPB6GLOGbGU+sfhg==
$ dig +nocmd +nocomments +nostats +dnssec @ns.maxchernoff.ca. maxchernoff.ca. DNSKEY
;maxchernoff.ca. IN DNSKEY
maxchernoff.ca. 86400 IN DNSKEY 257 3 15 DYs9mPDMRx/hQ9R9iGLi1Ysx1eFdhlXeCujY6PqJWeU=
maxchernoff.ca. 86400 IN RRSIG DNSKEY 15 2 86400 20260518072823 20260504055823 62673 maxchernoff.ca. RgPyEvB/kjXIvoidRNF/hfm7utzDs0kxXn4qJL17TUAVYOdbLl0Vd8zt E52bGBBFv2TNEnf9O9LkiT2GBH0jAA==
$ dig +nocmd +nocomments +nostats +dnssec @ns.maxchernoff.ca. maxchernoff.ca. A
;maxchernoff.ca. IN A
maxchernoff.ca. 86400 IN A 152.53.36.213
maxchernoff.ca. 86400 IN RRSIG A 15 2 86400 20260518072823 20260504055823 62673 maxchernoff.ca. bRfTVHnMjCFRaIh5uc0aT1vD4yh1UZrqOZDRunLbxFI1eth6nNlTiOOC xti7axVoXwB6VAoHOAnW0nL0eeJNDQ==
tom1337 4 days ago [-]
Thanks for explaining. I thought that once any key in a domain's DNSSEC chain-of-trust expired, the whole record went stale, but turns out that was a wrong assumption. If the DNSKEY and the other records have the same TTL and the DNSSEC verification is also "cached" then that makes a lot more sense.
gucci-on-fleek 4 days ago [-]
> I thought that once any key in the chain-of-trust of any domains DNSSEC expired the whole record went stale but turns out that was a wrong assumption.
No, that actually is true, but I think (?) that the part that you were missing is that DNSSEC records are mostly the same as any other record, so they can be cached the same way. And since most resolvers are DNSSEC-enabled these days, they'll tend to request (and therefore cache) the DNSSEC records at the same time as the regular records.
There are tons of edge cases here, but it should hopefully be pretty rare for a cache to have a current A/AAAA record and stale/missing DNSSEC records.
> the DNSSEC verification is also "cached"
Technically the verification itself isn't cached, but since verification only depends on the chain of DNSSEC records, and those records are cached, it has the same effect.
wahern 4 days ago [-]
DNSSEC doesn't change the degree to which DNS is decentralized. It's always been hierarchical. In the absence of caching, every DNS query starts with a request to the root DNS servers. For foo.com or foo.de, you first need to query the root servers to determine the nameservers responsible for .com and .de. Then you contact the .com or .de servers to ask for the foo.com and foo.de nameservers. All DNSSEC does is add signatures to these responses, and adds public keys so you can authenticate responses the next level down.
A list of root nameserver IP addresses is included with every local recursive DNS resolver. The list changes, albeit slowly, over the years. With DNSSEC, resolvers also ship the root zone's public key (the trust anchor), which also rotates, slowly.
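(For illustration, that bootstrap data is tiny; a sketch of the two pieces, using values from the published IANA root hints and the 2017 root trust anchor, which you should verify against https://www.iana.org/dnssec rather than trust from a comment:

  a.root-servers.net. 3600000 IN A 198.41.0.4
  . IN DS 20326 8 2 E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D

The first line is one of the thirteen root-hint entries; the second is the DS form of the root KSK that validators anchor the whole chain on.)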
Medowar 4 days ago [-]
What you see here is decentralisation working. The issue is with the operator of the de TLD, and as such only that TLD is affected.
DNS is not decentralised in the sense that multiple organisations run the infrastructure of a TLD; each TLD is always run by a single entity (.com and .net are operated by Verisign).
So whatever issue the operator has, it does not change the impact.
AndroTux 4 days ago [-]
What if the root (.) certificate breaks?
pocksuppet 4 days ago [-]
Resolvers are free to cache each TLD's keys. There's a finite, well-known list of TLDs and their keys - you can download all the root zone data from IANA: https://www.iana.org/domains/root/files (it's a few megabytes in uncompressed text form)
The world might be a little bit better with more decentralization of the root zone.
kuerbel 4 days ago [-]
I just spent the better part of an hour debugging unbound and the pihole because I thought it was a me problem...
Good news though, if you add domain-insecure: "de" to your unbound config everything works fine
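(For anyone else applying the workaround: the directive goes under unbound's server: clause; a minimal sketch, which disables DNSSEC validation for all of .de and so should be reverted once DENIC publishes good signatures again:

  server:
      domain-insecure: "de"

Reload with unbound-control reload for it to take effect.)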
Bender 4 days ago [-]
I don't even enable DNSSEC in Unbound. There just isn't enough adoption yet for me to feel like I am missing out on something, yet.
"Cloudflare Radar data shows 8.11% of domains are signed with DNSSEC, but only 0.47% of queries are validated end-to-end." [1]
That's cool, ty for that. The only one I put credentials into is Amazon, and it is unsigned. [1] There probably needs to be a DNSSECv2 / -bis that reduces risk somehow to get more adoption.
[1] - https://dnssec-analyzer.verisignlabs.com/amazon.com
For what it's worth, technically we're already on something like DNSSEC-ter or DNSSEC-quater. -bis was back in the early 2000s with the typecode roll. It was really called DNSSEC-bis!
Bender 3 days ago [-]
It was really called DNSSEC-bis!
That's too funny. I was just kidding. Back in the day Ericsson always added that to their upgraded product lines (Including GSM and what-not)
tptacek 3 days ago [-]
Right, it's OSI/ITU-speak, and it's ironic to see it applied at IETF.
pocksuppet 3 days ago [-]
Do we know what their root mistake was? I've studied and deployed DNSSEC, and as I see it, the current version is pretty much the simplest thing that could possibly work, given the way DNS works.
Bender 3 days ago [-]
The root cause of the disruption has not yet been fully identified. DENIC’s technical teams are working intensively on analysis and on restoring stable operations as quickly as possible.
That's their current official statement. I could guess but I would rather wait until they have an official statement. I would imagine they must know but they are probably going back and forth with their legal team to word it very carefully, or at least that is what I would be doing if I were in their situation.
V__ 4 days ago [-]
Just before the outage happened I updated multiple client servers. That was a very stressful hour trying to figure out why nothing worked.
chromehearts 4 days ago [-]
SAMEEEEE !!!
victorbjorklund 4 days ago [-]
Same haha
__michaelg 4 days ago [-]
Finally establishing the concept of Feiertag (public holiday) on the internet. Come back tomorrow.
throw1234567891 4 days ago [-]
Internetfreie Dienstage ("internet-free Tuesdays"), the 21st-century variant of Autofreie Sonntage ("car-free Sundays").
9753268996433 4 days ago [-]
Using this newfangled thingamabob on a silent holiday will result in the police kicking in your door the next morning.
sgbeal 4 days ago [-]
> will result in the police kicking in your door the next morning
But not before 8am.
1vuio0pswjnm7 4 days ago [-]
.de TLD is online. DNS working fine
DNSSEC not working
If using an open resolver, i.e., a shared DNS cache, e.g., third party DNS service such as Google, Cloudflare, etc., then it might fail, or it might not. It depends on the third party DNS provider
To me, a response from a "root DNS server", i.e., [a-m].root-servers.net, is not "wrong" if it contains the correct data that I'm requesting, e.g., domainnames and associated IP numbers
I'm never requesting RRSIGs as I do not use that data. For me, it's just cruft that now comes in the response
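(For reference, resolvers only include RRSIG records when the client sets the DO bit, which dig does via +dnssec; a sketch, with example.de as a stand-in name:

  dig +noall +answer example.de A @8.8.8.8           # DO bit clear: no RRSIGs returned
  dig +dnssec +noall +answer example.de A @8.8.8.8   # DO bit set: RRSIGs included

The catch during this outage is that a validating resolver validates internally either way and SERVFAILs regardless of whether the client asked for the signatures.)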
basilikum 4 days ago [-]
This is the kind of system failure that we need really good and well-tested disaster recovery plans for. While not necessary this time, DENIC and any critical infrastructure provider should be able to rebuild their entire infrastructure from scratch in a tolerable amount of time (days rather than hours in the case of a full rebuild). Importantly, the disaster recovery plan has to work without relying on the system that is failing, or on adjacent systems that might have hidden dependencies on it.
I'm really not too close to Denic and know nothing about their internals, but just close enough to have experienced the stress of someone working for DENIC second hand during the outage. From the very limited information I happened to gather DENIC had some trouble in addressing the issue because, surprise, infrastructure that they need to do so runs on de domains. [1]
I'm convinced there are all kinds of extended cyclic dependencies between different centralization points in the net.
If some important backbone of the internet is down for an extended time, this will absolutely cause cascading failures. And these central points of failure are only getting worse. I love Let's Encrypt, but if something causes them to hard-fail, things will go really bad once certificates start to expire.
We need concrete plans to cold start extended parts of the internet. If things go really bad once and communication lines start to fail, we're in for a bad time.
Maybe governments have redundant, ultra resistant, low tech communication lines, war rooms and a list of important people in the industry who they can find and put in these war rooms so they can coordinate the rebuild of infrastructure. But I doubt it.
[1] I don't know if there is some kind of disaster plan in the drawer at DENIC that would address this. I don't mean to allege anything against DENIC specifically, but speaking broadly about companies and infrastructure providers, I would not be surprised if there was absolutely no plan for what to do if things really go down, how to cold-start cyclic dependencies, or where those dependencies even are.
toast0 3 days ago [-]
> This is the kind of system failure that we need really good and well tested disaster recovery plans for.
All the cool kids offer their services over multiple TLDs and have their name servers of record in multiple TLDs, too. It's not quite best practices for recursive DNS to regularly fetch the complete root zone to cache it, but it's not unreasonable to do so.
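(Fetching and serving the root zone locally is standardized in RFC 8806; a rough sketch of the unbound flavor, modeled on the RFC's appendix, with root-server IPs you should double-check against current hints:

  auth-zone:
      name: "."
      master: 199.9.14.201     # b.root-servers.net
      master: 192.33.4.12      # c.root-servers.net
      fallback-enabled: yes
      for-downstream: no       # don't answer downstream clients authoritatively
      for-upstream: yes        # use the local copy instead of querying the root
      zonefile: "root.zone"

Note this caches only the root zone, i.e. the delegations to .de, not .de itself, so it would not have helped with this incident.)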
basilikum 1 days ago [-]
Having nameservers on multiple TLDs is definitely smart. But shouldn't glue address this at least for short outages already? How does the delegating zone handle and cache glue records?
Providing the same site on multiple TLDs is kind of a phishing hazard though. It may make sense for non end user facing APIs and other critical points, but for end user sites I think sites should stick to one domain. Otherwise they train users to fall for phishing sites.
Almost certainly /s.
"Danke Merkel" ("Thanks Merkel") was once a sincere criticism from conservatives regarding her policies (esp. during 2015 refugee crisis), but it quickly evolved into a sarcastic, deadpan joke used to blame her for literally anything that goes wrong in daily Germany - even years after she left. Interesting phenomenon...
Oh, yeah, I'm sure feeling chastened right now. You got me.
SAI_Peregrinus 4 days ago [-]
Parmigiano-Reggiano is aged milk, so I'm not sure what people have against aged milk. Aged milk can be great
sgc 4 days ago [-]
My poor fellow. You wrote about how something is a bad tool for a long list of serious reasons. Then it failed spectacularly because everybody decided to depend on it anyway - exactly what you were cautioning against. But somehow you have to respond to people who think you are the one who got it wrong! As a third party the whole affair gave me a good chuckle at least ;)
tptacek 4 days ago [-]
Germany appears to depend on it. Virtually none of North America does. I'm pretty satisfied with how this whole thing shook out!
cyberax 4 days ago [-]
You're wrong. Both .com and .net are signed (`dig RRSIG com.`), and if they screw up, then all the com/net zones will become inaccessible.
tptacek 4 days ago [-]
Virtually no zones under .com/.net are signed, which was the only point I was making. It has no adoption here.
profmonocle 3 days ago [-]
Even if example.com is unsigned, the delegation from .com to example.com will still be signed (including an attestation that example.com is unsigned). So lack of DNSSEC adoption by users of the TLD wouldn't save them here.
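(You can observe that signed attestation directly; a sketch, querying a .com server for the DS record of amazon.com, which commenters note is unsigned:

  dig +dnssec amazon.com DS @a.gtld-servers.net

The answer section comes back empty, but the authority section carries NSEC3 records plus RRSIGs proving that no DS exists, i.e. a signed statement that the delegation is insecure.)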
cyberax 4 days ago [-]
Sure. But that was not the issue with .de, it has about the same level of DNSSEC adoption as .com
DENIC screwed up the TLD itself, and .com/.net are just as susceptible.
theMMaI 4 days ago [-]
Sssshh, don't give Verisign any bad ideas!
yassiniz 4 days ago [-]
Shops open normally from 8am to 8pm in Germany. Today we decided to pilot opening hours for .de domains as well
Things seem to be on their way up now, and https://status.denic.de/ is working again, at least from here.
DENIC's status page currently says "Frankfurt am Main, 5 May 2026 – DENIC eG is currently experiencing a disruption in its DNS service for .de domains. As a result, all DNSSEC-signed .de domains are currently affected in their reachability.
The root cause of the disruption has not yet been fully identified. DENIC’s technical teams are working intensively on analysis and on restoring stable operations as quickly as possible.
kangalioo 4 days ago [-]
So glad I found someone mention this. Amazon.de and SPIEGEL.de are down. Highly prominent sites unreachable. I wonder how long this will last and how big of a thing this ends up being once people talk about it :o Feels big to me
moltar 4 days ago [-]
Both examples open for me
irundebian 4 days ago [-]
Some domains work, some not. I assume that working domains are cached.
balou23 4 days ago [-]
amazon.de, spiegel.de are down for me, too. heise.de works, but that might've been cached somewhere on my side.
yk 4 days ago [-]
dig manages to dig out IPs for heise.de and tagesschau.de, but not for spiegel.de, amazon.de, and google.de. However, dig @8.8.8.8 still has amazon.de cached, unlike 1.1.1.1, so perhaps Google to the rescue?
[Edit] After playing around with it, google seems to have at least some pages cached. After setting dns to 8.8.8.8 amazon.de and spiegel.de work again, my blog does not.
theanonymousone 4 days ago [-]
idealo.de, ebay.de, and spiegel.de are down, but amazon.de opens for me.
elevation 4 days ago [-]
I've considered hard-coding some addresses into firmware as a fallback for a DNS outage (which is more likely than not just misconfigured local DNS). Events like this help justify this approach to the unconcerned.
whalesalad 4 days ago [-]
The irony is that DNS is a global and distributed system meant to be resilient. It’s the DNSSEC layer on top in this case causing problems.
jeroenhd 4 days ago [-]
The global and distributed system relies on the system actually returning valid responses. If the root servers are broken, whether it's a problem with RRSIG records or A records, the TLD is broken.
If my domains' DNS servers start pointing at localhost, that doesn't mean DNS is a broken protocol.
cedilla 4 days ago [-]
denic is the single source of truth for zones under .de.
The only problem with DNSSEC here is that it's complex.
akerl_ 4 days ago [-]
A complex thing where making a mistake makes your domains drop off the internet seems like a pretty big "only problem".
account42 3 days ago [-]
There is no more complexity other than what is inherent to the task.
We shall transmit the postmortem to you via fax within 25 business days, ja.
alper 3 days ago [-]
Given how amateurish German IT operations are, there is no guarantee whatsoever that there will be a post-mortem, nor that it will make it out in under 3-6 months with all the necessary approvals.
"Die Störung ist inzwischen behoben und alle Systeme laufen wieder stabil. Die genaue Ursache wird derzeit noch analysiert. Sobald belastbare Erkenntnisse vorliegen, wird DENIC diese transparent zur Verfügung stellen."
translation:
‘The disruption has now been resolved and all systems are running smoothly again. The exact cause is currently being investigated. As soon as reliable findings are available, DENIC will make them publicly available.’
account42 3 days ago [-]
Also always easy to announce "Sobald belastbare Erkenntnisse vorliegen, wird DENIC diese transparent zur Verfügung stellen." and then remain silent until the media forgets about the incident and never actually publish anything.
hulitu 3 days ago [-]
> alle Systeme laufen wieder stabil
in their dreams.
Culonavirus 4 days ago [-]
Ok children, sit down and listen, uncle Culonavirus will tell you a story:
"It all began with the decommissioning of the last nuclear power plant, ..."
If so, it still worked for several hours after the maintenance was completed.
dwedge 4 days ago [-]
On a slightly unrelated note, I was setting nameservers for two .de domains a few weeks ago and thought my provider was being crazily strict because they kept getting rejected. Turns out you can't point to a nameserver until that nameserver has a zone for the domain, and you can't use nameservers from two providers unless those two providers are both in the NS records at both ends
whalesalad 4 days ago [-]
Common pain point with DNSSEC. It's brutal in the domain industry because when you buy a name with DNSSEC enabled, it oftentimes can't be set up to resolve due to these sorts of issues. Typically the seller needs to deactivate it first.
taf2 4 days ago [-]
ok i picked a bad day to move from one registrar to another... i just spent the last hour frantically trying to figure out whether the new registrar screwed us or the old registrar was screwing us...
On Monday there was a huge outage affecting several cities quite close to Frankfurt because someone cut a major fiber line; today DENIC is having a party, and right when everyone is drunk this happens because some post-rotation task cannot be completed.
I work with a few people specialised in IT security, and some of them take their jobs too seriously and will "lock down" everything to the point that it becomes a very real risk that they lock out everyone including themselves.
Fundamentally, security is a solution to an availability problem: The desire of the users is for a system to remain available despite external attack.
Systems that become unavailable to everyone fail this requirement.
A door with its keyhole welded shut is not "secure", it's broken.
QuantumNomad_ 4 days ago [-]
Security is not just a solution to availability. It is also to keep sensitive data (PII, or business secrets, or passwords, or cryptographic private keys, and so on) away from the hands of bad actors.
If I’m unable to use Amazon for 24 hours it doesn’t really matter. If a photo copy of my passport is leaked that’s worries and potential troubles for years.
or alternatively,
Security = (exclude unauth'd reads) + (exclude unauth'd writes) + (include auth'd reads and auth'd writes)
Gotta satisfy all parts in order to have security.
jiggawatts 4 days ago [-]
If you squint at it, you can convert all three to just availability.
Confidentiality = available to us, but nobody else.
Integrity = available to us in a pristine condition.
It's a bit reductive, I'll admit, but it can be a useful exercise, in the same way that everything in an economy can be reduced to units of either "human time", "money", or "energy". Roughly speaking they're interchangeable.
E.g.: What's the benefit to you if your data is so confidential that you can't read it either? This is a real problem with some health information systems, where I can't access my own health records! Ditto with many government bureaucracies that keep my records safe and secure from me.
dnnddidiej 4 days ago [-]
That squint loses too much nuance. I don't think of a site's data leak as an availability problem.
Bad UX and bugs are in general not always an availability problem.
If it's hard to get what you want due to bad design but the site is up, the site is still up.
DNSSEC operations feels like one of those problems that should be tackled with formal methods, like how some subway controllers are.
But I expect it's treated like "very serious and scary ops", which isn't wrong, but isn't enough.
0x80h 4 days ago [-]
Am I reading this correctly? All .de domains are down? Looking forward to reading the postmortem.
warpspin 4 days ago [-]
Whole .de TLD seems to go offline right now due to dnssec or missing nic.de nameservers?
fweimer 4 days ago [-]
This works:
$ unbound-host -t A www.denic.de
www.denic.de has address 81.91.170.12
This does not:
$ unbound-host -D -t A www.denic.de
www.denic.de has address 81.91.170.12
validation failure <www.denic.de. A IN>: signature crypto failed from 194.246.96.1 for DS denic.de. while building chain of trust
So it does seem DNSSEC-related.
EDIT My explanation was wrong, this is not how keytags work. The published keytag data is consistent:
de. 3600 IN DNSKEY 256 3 8 AwEAAfRLmzuIXVf7x5A0+U7hke0dS+GEJG0EdPhnOthCCLhy0t0WqLyoXJOhnfsTJ8vQX5fd9qOJc9gyr3SWJZkXAhPm3yPSC7FWWHF70WZTKKM9CekmKdqwMwq6ZCjMSUcecCuSF4Sbt1MRszV7rFmfGVklA1l5UzNbqwD+Dr5vfcLn ;{id = 33834 (zsk), size = 1024b}
de. 3600 IN DNSKEY 257 3 8 AwEAAbWUSd/QN9Ae543xzdiacY6qbjwtZ21QfmdgxRdm4Z7bjjHWy249uqxCyjjjoS4LDoRDKmj7ElffMKvTWKE1qFKu0p8TUy4wyhX0M+m5FUjvQ3CiZMi+qY7GSHA5B+Zd73cidmnTeb3e8lso6jEsXg05/VZ2AyAqWF6FexEIFxIqiwwLk4UP0BwZ17Ur3q1qx9VSbPMyHgQ9d6nHUN1EEJsTDA2v0vKumsUyp74ZanRZ/bB/6IzpaaZyr5BLF5pSCNdbRNjVmkwYD0993vm79LueyOeibsoHRc16jhALrIJou1PFjdq7YQsYN0KtqRiJtaAfPprDBREpeamPuW/MnW0= ;{id = 26755 (ksk), size = 2048b}
de. 3600 IN DNSKEY 256 3 8 AwEAAbTe1PJi8EgIudNGb+KRTxBL2aCu5rXkZ+aIe/TC88pwRdrXYeXODp1ihZWFop5CrbWRBLrk/YUPBE8aBc6oJP+58dSkdMLYkjSkmvdvYx+zXnRLWlF2bapxvZxshATJDfGjGbCiWxKEOoyRx3UhICtHC+cUSddsEvzfacUcBb6n ;{id = 32911 (zsk), size = 1024b}
de. 3600 IN RRSIG DNSKEY 8 1 3600 20260519030655 20260505013655 26755 de. ke56T5GZt/X6zMBAF+ouyCTnAd7RY7MsnDcfa9jyyOwSouRXhvzim/V13JDTMBAnpAHxWQXoruXrAZ6A6re5N+8Pp2utVkAEKTWs0r4UOLNKoZ2+zMwNplKjNNnY5PJIbHfa5myyziLiIsi//qDIgQEACFk+pZcHXrRdqRoXPCL3UtfaXjk3+duDQdlPnYsJys5UshjVpkALSMChW7J0anzr0sG+f9ytstBneymMwFYOUC3NqbejbLPZsXGPZBQKPAoVJuV5q3znopbcqrDFfjI7bmX3QPYNvOaiT1ElBfi2piJVpDzMaMAmm2jCmvrf5VeTOBccMroh8sBtDPsaEg== ;{id = 26755}
The signature on the SOA record still does not verify:
de. 86400 IN SOA f.nic.de. dns-operations.denic.de. 1778014672 7200 7200 3600000 7200
de. 86400 IN RRSIG SOA 8 1 86400 20260519205754 20260505192754 33834 de. aZoiAJ+PaHUDVSHNXfV/R26ZK3GpFB7ek2Z46VnZdmPEDaTww+a7PkiQ98W83xohUunXYSvQCMeGYfUre5UT76eBKThdxW2a6ImX9/x/oEzQ9x/69Y/NSeTckOv9m3HCLBOug01op1koiHOIAVEvonOmXEHHqo1P4sR/fNbcVg4= ;{id = 33834}
From my analysis, DENIC re-signed the .de zone today (May 5, 2026, ~17:49 UTC). The DNSSEC signature (RRSIG) for the NSEC3 record covering the hash range of nearly the whole .de TLD is cryptographically broken (malformed).
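(To reproduce this sort of analysis yourself, BIND's delv prints each validation step, and dig +cd shows the underlying data without validating; a sketch:

  delv de. SOA +vtrace      # walk and validate the chain from the root trust anchor
  dig +cd de. SOA @8.8.8.8  # checking disabled: confirm the data itself still resolves

During the incident the first command should have reported a signature verification failure for de., while the second kept returning the records.)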
yosamino 4 days ago [-]
The last time I remember .de having a major outage like this was 2010. I would cite some sources but... you know. That was a fun afternoon, though.
I am very happy that it doesn't happen more often.
jamietanna 4 days ago [-]
Was wondering why a few of my sites aren't CSSing, as they use https://classless.de
kaltsturm 4 days ago [-]
cache
Oarch 4 days ago [-]
Germany has fallen.
victorbjorklund 4 days ago [-]
I was just wondering what was up with our .de site.
alper 3 days ago [-]
I'd expect political escalation for something like this but given that this is Germany, who knows.
May 5, 2026 21:28 UTC (23:28 CEST) / INVESTIGATING
Frankfurt am Main, 5 May 2026 – DENIC eG is currently experiencing a disruption in its DNS service for .de domains. As a result, all DNSSEC-signed .de domains are currently affected in their reachability.
The root cause of the disruption has not yet been fully identified. DENIC’s technical teams are working intensively on analysis and on restoring stable operations as quickly as possible.
Based on current information, users and operators of .de domains may experience impairments in domain resolution. Further updates will be provided as soon as reliable findings on the cause and recovery are available.
DENIC asks all affected parties for their understanding.
For further enquiries, DENIC can be contacted via the usual channels.
Wow, I thought I was somehow unaffected but my resolver must just have cached the sites I'd tried.
0xbadcafebee 4 days ago [-]
I can't wait for the .com TLD outage. Ya'll thought Cloudflare down was bad? Lol
baby 4 days ago [-]
Should I do my usual rant about how the web PKI refuses to move to a consensus protocol
dark-star 4 days ago [-]
How come I have zero problems with any .de domain I tried accessing in the last half hour?
AndroTux 4 days ago [-]
maybe your upstream doesn't validate DNSSEC?
dark-star 4 days ago [-]
maybe? I'm using PiHole and 8.8.8.8/1.1.1.1 as upstream, and both options show "DNSSEC" next to their options in settings, so I assumed DNSSEC was enabled (unless I have to enable this somewhere else as well?)
warpspin 4 days ago [-]
That's weird cause 8.8.8.8/1.1.1.1 will already answer with SERVFAIL right now, unless the domain is still in the cache.
pw6hv 4 days ago [-]
cache
dark-star 3 days ago [-]
unlikely, as I have also successfully tried domains that I never visited before (at least not in the last 12 months) and according to my PiHole log they were successfully retrieved from 1.1.1.1. and/or 8.8.8.8, which should use DNSSEC
binghatch 4 days ago [-]
Wow… it's definitely not all .de domains, but a lot of prominent ones definitely.
phit_ 4 days ago [-]
it's gonna be all .de domains once caches dry out; anything that still works right now is bound to eventually fail until the underlying issue is resolved
fossdd 4 days ago [-]
Any .de domain with DNSSEC
mrngm 4 days ago [-]
Unfortunately, even domains that did not have DNSSEC enabled earlier today are affected.
We observed issues on a non-DNSSEC .de domain at 19:45Z and confirmed around 20:12Z it wasn't just us, but also more high profile domain names.
meineerde 4 days ago [-]
Any .de domain is affected, regardless of the domain's dnssec deployment status, as long as you use a resolver which validates dnssec.
eliaskg 4 days ago [-]
Amazon is completely down in Germany. Not only on amazon.de, even in the app.
egberts1 3 days ago [-]
Resolved ... after recovering from a mass German DNSSEC drinking party?
Ok.
tarruda 4 days ago [-]
Mailbox.org (also from Germany) seems to be experiencing issues too.
adamas 3 days ago [-]
I wasn't even aware that was possible..?
jiveturkey 4 days ago [-]
It’s not DNS
There’s no way it’s DNS
It was DNSSEC
kaltsturm 4 days ago [-]
With chrome it works again
g4cg54g54 4 days ago [-]
fun fact: enabling DNSSEC NOW will fix your domain instantly if DNSSEC was disabled before
-> no idea if that also "heals" anyone who had dnssec on before.
-> no idea if maybe they need to roll back something and then rebreak the new dnssec i made a minute later lol...
efreak 2 days ago [-]
I had dnssec enabled from 2018 until 1984 hosting messed it up in 2023 and I had to remove/disable keys from the registrar (oddly no information about this on their website or elsewhere, the issue apparently only exists in emails. Apparently they had an issue with BIND upgrade and it made up new keys...).
I thought about reenabling it, but very few people other than me access my server, it has a static IP, and Firefox still doesn't validate dnssec anyways.
Animux 4 days ago [-]
Seems to be fixed now.
NooneAtAll3 4 days ago [-]
quad9 seems to be having problems with DNSSEC as well
jdthedisciple 4 days ago [-]
Seems up again. How long did the outage last?
siginator 4 days ago [-]
how is that possible?
aweiher 4 days ago [-]
Solar Flares
dnnddidiej 4 days ago [-]
Took more than cloud flares?
pogii123 4 days ago [-]
For me bmw.de works but www.bmw.de doesn't
benny_s 4 days ago [-]
bmw.de is down for me too
MikeNotThePope 4 days ago [-]
Both domains load for me from Amsterdam. I wonder if there's a communication disruption. Undersea cable severed?
dark-star 4 days ago [-]
You mean the big undersea cable between the Netherlands and Germany? ;-)
MikeNotThePope 3 days ago [-]
Lol, I meant between users across the sea who couldn't see and users in Europe who could.
Ironically, denic still requires you to have two separate name servers with different IPs for your domain (which can be worked around by changing the IP of the registered name server afterwards lol), a requirement that all other registries I use have dropped or never had because enforcing such a policy at the registry level makes zero sense.
There isn't some magic reliability that everyone needs which just so happens to fall into "not achievable with a single authoritative name server" and "guaranteed with two servers". I'm not saying you should never have more than one, just that it isn't the registry's business to decide what kind of availability guarantees you need for your domain.
Is it more or less than the F-root server run by ISC?
* https://www.isc.org/f-root/
If you want, you can even request your own instance (a 1U Dell):
* https://www.isc.org/froot-process/
Or an instance of ICANN's L-root server, also 1U:
* https://www.dns.icann.org/imrs/
Would .de have more, or less, traffic than some of the root servers?
That is not the ccTLD, that is an individual domain and its name servers. I recall being given that warning for early domain registrations.
The only thing a blockchain is good for is achieving decentralized consensus on what value a key points to, which is what DNS is.
An alternative way of looking at this is that acquiring domains must be somewhat expensive by definition; either you enforce it at the system level, or you make it free, but then somebody will inevitably grab all the interesting ones and re-sell them to others. A blockchain is the only way to make decentralized financial infrastructure viable.
Even the current centralized ICANN flavor could be substantially more resilient if it instead handed out key fingerprints and semi-permanent addresses when queried. That way it would only ever need to be used as a fallback when the previously queried information failed to resolve.
Think about what would happen the day that letsencrypt is broken for whatever reason, technical or political, like hostile US leadership and being located in the wrong country. Taking into account the push by letsencrypt and major web browsers to restrict certificate validities to short periods, like only a few days...
I haven't followed this closely, but have there been any... shall we say plain outages longer than six hours? That's not an outrageous TTL. Or a day.
Long version:
If you're so popular all around that you really really want a very very short TTL, people will query all the time from all the places that "count", won't they? So it's gonna be cached.
If you're not so popular or not all around, what does it matter even if you had a very very short TTL? You're not losing much.
If Let's Encrypt goes down, half of the Internet will become inaccessible in a week.
* https://www.keyfactor.com/blog/2023s-biggest-certificate-out...
[1] https://outerspaceinstitute.ca/crashclock/
DNS is a look up service that runs on the internet.
Internet routing of IP packets is what the internet does and that is working fine (for a given value of fine).
You remind me of someone using the term "the internet is down" that really means: "I've forgotten my wifi password".
There is designing something to be fail-closed because it needs to be secure in a physical sense (actually secure, physically protected), and then there's designing something fail-closed because it needs to be secure in an intellectual sense (gatekept, intellectually protected). While most of the internet is "open source" by nature, the complexity has been increased to the point where significant financial and technical investment must be made to even just participate. We've let the gatekeepers raise the gates so high that nobody can reach them. AI will let the gatekeepers keep raising the gates, but then even they won't be able to reach the top. Then what?
I think the point you're trying to make, put another way is in the context of "availability" and "accessibility" we've compromised a lot of both availability and accessibility in the name of security since the dawn of the internet. How much of that security actually benefits the internet, and how much of that security hinders it? How much of it exists as a gatekeeping measure by those who can afford to write the rules?
(I hope I'll live to see them all sentenced to life without parole)
And fuck nothing at all happened as a result.
We had a short discussion about migrating to .com, but decided risk != reward as no one would know the new tld
I assume there are a couple of people working for denic who had a stressful night..
Real world beats sci-fi :) And isn't that why we love IT? And hate it too, because of the "people in charge"...
No different than a bunch of BGP issues we've seen over the years.
And you don't even need DNSSEC for DNS to break things: reminder of the October 2025 AWS outage:
* https://www.akamai.com/blog/security/when-cloud-breaks-lesso...
...is only for Pentagon networks and military stuff. It's not for us normal people. (We get Cloudflare and FAANG bullshit instead.)
Every FAANG company has their own fiber backbone. Why invest in the internet that everyone uses when you can invest in your own private internet and then sell that instead?
Traffic that goes over "the Internet" traverses some mix of your ISP's fiber, fiber belonging to some other ISP they have a deal with, then fiber belonging to some ISP they have a deal with, etc.
All those ISPs are being paid to provide service, they can invest in their own networks.
internal reachability (from Germany to .de domains), too... :-)))
I ran up three new VMs on three different sites. I linked all three systems via a private Wireguard mesh. MariaDB on each VM bound to the wg IP and stock replication from the "primary". PowerDNS runs across that lot. One of the VMs is not available from the internet and has no identity within the DNS. The idea is that if the Eye of Sauron bears down on me, I can bring another DNS server online quite quickly and fiddle the records to bring it online. It also serves as a third authority for replication.
I also deployed https://github.com/PowerDNS-Admin/PowerDNS-Admin which is getting on a bit and will be replaced eventually but works beautifully.
Now I have DNS with DNSSEC and dynamic DNS and all the rest. This is how you start signing a zone and PowerDNS will look after everything else:
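(The command itself appears to have been lost in formatting; with PowerDNS it would presumably be the standard pdnsutil sequence, something like the following, with example.de as a placeholder zone:

  pdnsutil secure-zone example.de   # generate keys and sign the zone
  pdnsutil show-zone example.de     # print the DS/DNSKEY material to hand to your registrar

after which PowerDNS keeps the signatures current on its own.)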
Grab a test zone and work it all out first; it will cost you not a lot, and then go for "production". My home systems are DNSSEC signed.
Telnet was sniffed. IRC was being sniffed and logged.
I've just given them part of a recipe for using DNSSEC. I suspect you are not actually human .. qingcharles.
I once worked at the level of administering DNSSEC for 300+ TLDs. It's its own world. When that company was winding down, I tried to continue in the field but the most common response (outside of no response, of course), was 'we already have a DNS team/vendor/guy.' And well, then things like this happen. I won't throw stones though, it's a lot to learn and can be incredibly brittle.
I had the misfortune of having to dig deep into constructing ASN.1 payloads by hand [1] because that's the only thing Java speaks, and oh holy hell is this A MESS because OF COURSE there's two ways to encode a bunch of bytes (BIT STRING vs OCTET STRING) and encoding ed25519 keys uses BOTH [2].
And ed25519 is a mess in itself. The more-or-less standard implementation by orlp [3] is almost completely lacking any comments explaining what is going on where and reading the relevant RFCs alone doesn't help, it's probably only understandable by reading a 500 pages math paper.
It's almost as if cryptographers have zero interest in letting interested random people join the field.
End of rant.
[1] https://github.com/msmuenchen/meshcore-packets-java/blob/mai...
[2] https://datatracker.ietf.org/doc/html/rfc8410#appendix-A
[3] https://github.com/orlp/ed25519/tree/master
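(To make the BIT STRING vs OCTET STRING split concrete, this is roughly the shape RFC 8410 prescribes, structure only, values elided:

  -- public key: SubjectPublicKeyInfo
  SEQUENCE {
      SEQUENCE { OBJECT IDENTIFIER 1.3.101.112 }         -- id-Ed25519
      BIT STRING (the 32 public-key bytes)
  }

  -- private key: OneAsymmetricKey (PKCS#8)
  SEQUENCE {
      INTEGER 0                                          -- version
      SEQUENCE { OBJECT IDENTIFIER 1.3.101.112 }
      OCTET STRING { OCTET STRING (the 32 seed bytes) }  -- CurvePrivateKey, doubly wrapped
  }

So the same 32 bytes ride in a BIT STRING on the public side and a nested OCTET STRING on the private side.)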
It wouldn't be as bad if ASN.1 had caught on more as a general-purpose serialization format and there were ubiquitous decent libraries for dealing with it. But that didn't happen, probably partly because there are so many different representations of ASN.1.
A bespoke serialization specifically for certificates might actually have aged better, if it was well designed.
Yes. My point is that in practice it hasn't really been used for much outside of cryptography.
> there's no good reason to choose it instead of protobufs
Well, the reason it is used in a lot of the places it is, is because protobufs didn't exist when those protocols or file formats were created.
There are also some things that ASN.1 does better at a technical level. Of important significance to cryptography is that the DER representation is "canonical", meaning that there is only one way to serialize a set of data to bytes. That's important because it means that you can just hash the contents of the serialization for signatures, rather than having to have some kind of separate canonicalization step (which is a common source of mistakes).
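(A tiny example of what "canonical" buys: BER permits the same INTEGER 5 to be length-encoded more than one way, DER permits exactly one:

  02 01 05      -- INTEGER 5, minimal short-form length: the only valid DER
  02 81 01 05   -- same value, long-form length: legal BER, forbidden in DER

which is why DER bytes can be hashed directly for signing without a separate canonicalization pass.)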
Bitpacking structures used to be important in the 60s. That time has passed; unless you're dealing with LoRa, NFC, or other cases of highly constrained bandwidth, there are way better options to serialize and deserialize information. It's time to move on, and the complexity of all the legacy garbage in crypto has been the cause of many a security vulnerability in the past.
As for the code, it might be personal preference but I'd love to have at least some comments referring back to a specification or original research paper in the code.
People who have thought they can do better have made things like PGP. It's one of the worst cryptographic solutions out there. You're free to try as well though.
And there is a related binary format that uses CBOR (COSE) as well.
> because that's the only thing Java speaks
No, it most definitely is not. You can just construct a private key directly in BouncyCastle: https://downloads.bouncycastle.org/java/docs/bcprov-jdk18on-...
I'm 100% certain that you also can do that with raw java.security. I did that about 15 years ago with raw RSA/EC keys. You can just directly specify the private exponent for RSA (as a bigint!) or the curve point for EC.
Ditto for ed25519, you can just take the canonical implementation from DJB. And you really really shouldn't do that anyway, please just use OpenSSL or another similar major crypto library.
I tried that, the problem is Meshcore specific - they do their own weird shit with private and public keys [1]. Haven't figured out how to do the private key import either, because in the C source code (or in python re-implementations) Meshcore just calls directly into the raw ed25519 library to do their custom math... it's a mess.
[1] https://jacksbrain.com/2026/01/a-hitchhiker-s-guide-to-meshc...
Broadly similar general concept to the team responsible for the DNSSEC signing keys for an entire ccTLD.
Yeah, an x509 PKI / root CA is a very different thing than DNSSEC, but they have a number of general logical similarities, in that the chain of trust ultimately comes down to a "do not fuck this up" single point of failure.
I saw it at bottom of thread and vouched for it. Usually when I "vouch" nothing happens
I always ignore the RRSIG lines in zone files. To me it's not "DNS data", it's cruft
But DNSSEC has its true believers. I'm just not one of them
maybe someone is showing off?
I haven't been able to find any cases of genuine dns hijack attacks in the last few years. Would love to know if anyone else can?
Only about 40% of the crypto companies seem to use dnssec. Seems like a target rich environment.
There are also some large businesses that require, or strongly pressure SaaS providers to use DNSSEC. You can often contest that, but if you have DNSSEC, that's one less thing to argue about in the contract.
The browser would be very unhappy with an <input type="password"/> on a non-TLS site (localhost excepted). HSTS would trigger the "massive" warning and refuse to load the site, however.
Ah yes I think the HSTS issue is what I was thinking of
Paradoxically, resolvers wouldn't have noticed the misconfiguration if it weren't for DNSSEC.
Beyond that, DNS has the AD bit. If you need DNSSEC secure data (for example for the TLSA record), then when Cloudflare turns off DNSSEC validation, the AD bit will be clear and things will stop working.
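(A sketch of what relying on that bit looks like, with a made-up TLSA name and any validating resolver:

  dig +dnssec _443._tcp.mail.example.de. TLSA @8.8.8.8

An "ad" in the flags line of the response header means the resolver claims to have validated the chain; with validation switched off, the flag is absent and anything keyed on it, such as DANE/TLSA checks, stops trusting the data.)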
We have this elaborate, complex, and extremely fragile cryptographic system behind DNSSEC and we distill it down to one single bit that we carry over unauthenticated links. Why?
At least WebPKI answers the right question: should I trust a particular claim to represent host.domain at the time in the following range? (Of course it defers determining the current time to some unspecified other mechanism.) DNSSEC tries to do everything and cannot survive an upstream error even within the downstream validity window. And yet, despite the fact that most of the spec leans heavily toward failing secure, the actual communication of validation status is entirely unprotected.
(Source: I'm one of the few weirdos on Earth who has read the mailing lists all the way back to when DNSSEC was a TIS project).
DNSSEC and WebPKI both rely on chains of trust. If the problem was that .de's keys expired, you'd have the same problem when Let's Encrypt's keys expired.
Even this incident proves that’s not the case.
If LetsEncrypt has a temporary availability issue, my users don’t notice unless it spans longer than my need to renew a cert.
If LetsEncrypt has a CA cert expire, I can get a cert from another provider.
If DENIC’s DNSSEC records break, either due to an operational error or an expiry issue, my .de site becomes inaccessible and my users see a DNS lookup failure. My only option is to hope resolvers do what Cloudflare did, or move my site to a new TLD and just pray that TLD never has the same problem.
Yes, it's also quite damaging to DNSSEC's trust model that the world has transitioned to centralized resolver caches. But the fundamental problem we're talking about with the AD bit wouldn't vanish if 8.8.8.8 and 1.1.1.1 did too; instead, users would be even more reliant on ISP nameservers, which are literally the least trustworthy pieces of infrastructure on the entire Internet.
---
The issue has been identified as a DNSSEC signing problem at DENIC, the organization responsible for the .DE top-level domain. Cloudflare has temporarily disabled DNSSEC validation on 1.1.1.1 resolver in order to allow .DE names to continue to resolve. DNSSEC validation will be re-enabled when the signing problems at DENIC are known to have been resolved.
---
(and in case it changes again, now it says)
---
The issue has been identified as a DNSSEC signing problem at DENIC, the organization responsible for the .DE top-level domain. Cloudflare has temporarily disabled DNSSEC validation for .de domains on 1.1.1.1 resolver (as per RFC 7646) in order to allow .DE names to continue to resolve. DNSSEC validation will be re-enabled when the signing problems at DENIC are known to have been resolved.
See RFC 7646 for more details: https://datatracker.ietf.org/doc/html/rfc7646
---
There’s a reason why one of the two has roughly 10% adoption after three decades and the other is high 90-something percent.
Fun fact: CloudFlare has used the same KSK for the zones it serves for more than a decade now.
Keeping key material secure for more than a decade while it's in active use is vastly more complex than keeping it secure for a month, until it rotates.
For all we know, some ex-employee might be walking around with that KSK, theoretically being able to use it for god knows what for an another decade.
What's your take on the conundrum of Amazon Trust's 20+ year root cert, with which they sign a 5+ year intermediate, with which they sign a 2-month leaf?
Nope. Key material rotation is just circus when it's done for the sake of rotation.
> For all we know, some ex-employee might be walking around with that KSK, theoretically being able to use it for god knows what for an another decade.
Or maybe an employee has compromised the new key that is going to be rotated in, while the old key is securely rooted in an HSM?
I'm just saying that rotating the key just in case someone compromised it is not a great idea. Doubly so if it's done infrequently enough for the operational experience to atrophy between rotations.
And yeah, I fully agree that anything surrounding the DNSSEC operations is a burning trash fire. It doesn't have to be this way, but it is.
And I just don't fully buy this rationale for asymmetric key rotation. It makes total sense for symmetric secrets (except for passwords).
Also possible, but that'd be an active threat that has some probability of being caught.
Never replacing keys allows permanent compromise that can only be caught if someone directly observes misuse.
Though nobody monitors DNSSEC like that, nor uses it, so it's fine from that aspect I guess.
I'm a mere sysadmin and not a cybersecurity expert. But this is always something that leaves me torn.
On the one hand, yes, rotation periods for many/most credentials are long enough that you're not really de-risking yourself all that much.
On the other hand, doing regular rotations allows you to tighten up your threat model. A regularly-rotated credential allows you to say "I implicitly trust that this credential has not been compromised prior to the previous rotation."[0] Whereas, without credential rotation, you're saying "I implicitly trust that this credential has not been compromised ever."
The latter to me seems clearly like the inferior model. The question is just whether the cost-benefit pencils out. And that is obviously very situationally dependent. That calculus doesn't pencil out when dealing with user-owned passwords for instance (i.e. the costs of regular password rotation dominate the benefits of the improved threat model). Human limitations with memory and such are the main issue there. However, that doesn't apply to e.g. hypothetical sufficiently developed DNSSEC infrastructure. Does that calculus pencil out there? I don't know. But it seems plausible at least.
[0] Modulo attackers having been able to pivot into a persistent threat with a previously-compromised credential.
That said, in the last few dnssec posts that got traction, tptacek tended to be at least 20% of the comments alone (e.g., 55/259), ignoring word count. Today seems calm
Edit: Alternative link: https://www.cyberciti.biz/media/new/cms/2017/04/dns.jpg
In fact, voting with your feet and leaving is far more effective at fixing political issues than the democratic voting process.
Yes. I've done so myself. The fact that people do this all the time doesn't mean it's the best thing to do when your country has problems.
People also move houses all the time. It's a big undertaking. Not the default solution whenever your kitchen needs renovations.
> In fact, voting with your feet and leaving is far more effective at fixing political issues than the democratic voting process.
Citation needed. Sounds very defeatist.
It's been like that for over two years now.
Or: https://dns.kitchen/jingle
https://archive.nytimes.com/www.nytimes.com/library/cyber/we...
https://en.wikipedia.org/wiki/Kehrwoche
You can both be the 3rd biggest economy in the world and still only be 1/10th of US+China GDPs combined.
And only three companies in the Top 100 for Germany:
https://companiesmarketcap.com/
Germany is the kingdom of the "mittelstand": many, many, many SMEs.
Both GP and you are right: it's the 3rd largest economy in the world and yet it's simply not that big.
https://en.wikipedia.org/wiki/Mittelstand
In other words: I expect this German DNS SNAFU to have 0.000000001% impact on the world's GDP this year.
126 trillion USD * 0.00000000001 = 1260 USD
I'm pretty sure the impact was higher than that ;)
EDIT: it says "Service Disruption" now
Edit: Now even the humor is gone.
EDIT: called it...
The ".de" TLD is inherently managed by a single organization, and things wouldn't be much better if its nameservers went down. Some of the records would be cached by downstream resolvers, but not all of them, and not for very long.
> we took the decentralized platform DNS was and added a single-point-of-failure certificate layer on top of it
DNSSEC actually makes DNS more decentralized: without DNSSEC, the only way to guarantee a trustworthy response is to directly ask the authoritative nameservers. But with DNSSEC, you can query third-party caching resolvers and still be able to trust the response because only a legitimate answer will have a valid signature.
Similarly, without DNSSEC, a domain owner needs to absolutely trust its authoritative nameservers, since they can trivially forge trusted results. But with DNSSEC, you don't need to trust your authoritative nameservers nearly as much [0], meaning that you can safely host some of them with third-parties.
[0]: https://news.ycombinator.com/item?id=47409728
"Cloudflare Radar data shows 8.11% of domains are signed with DNSSEC, but only 0.47% of queries are validated end-to-end." [1]
Zones I may care about:
- Amazon.com: unsigned
- My banks: unsigned
- Hacker News: unsigned
- Email that I do not host: unsigned
- My power companies billing: unsigned
- I found some! id.me and irs.gov are signed.
[1] - https://technologychecker.io/blog/dnssec-adoption
https://dnssecmenot.fly.dev/
[1] - https://dnssec-analyzer.verisignlabs.com/amazon.com
https://datatracker.ietf.org/meeting/118/materials/slides-11...
It's the cryptographic version of that one time the same TLD told the world domains starting with certain letters didn't exist: https://www.theregister.com/2010/05/12/germany_top_level_dom...
[0] https://en.wikipedia.org/wiki/Thanks,_Obama
https://sockpuppet.org/blog/2015/01/15/against-dnssec/
Surely a wealth tax is not worth mentioning.
They made the point that more immigration / growth wouldn't help fix the core problem if they don't fix that asap.
yes indeed
Looks like it failed after maintenance: https://www.namecheap.com/status-updates/planned-denic-de-re...
https://status.denic.de/
There are too many coincidences happening.
https://dnssec-analyzer.verisignlabs.com/nic.de
As fallback they should use their X account: https://x.com/denic_de
$ nslookup bmw.de
Non-authoritative answer:
Name: bmw.de
Address: 160.46.226.165

$ nslookup www.bmw.de
;; Got SERVFAIL reply from 8.8.8.8, trying next server
Server: 8.8.4.4
Address: 8.8.4.4#53
** server can't find www.bmw.de: SERVFAIL
https://edition.cnn.com/2026/05/01/politics/us-troop-withdra...