paul at nohats.ca wrote
Thu, 14 Apr 2016 11:28:56 -0400 (EDT):
| On Thu, 14 Apr 2016, Linus Nordberg wrote:
|
| > | You could create/store a hash of the entire chain? Possibly exclude the
| > | valid-from/expiry fields, so resigns of the same record time can be
| > | skipped for the log. Although it would be nice to see when certain
| > | records were resigned.
| >
| > We do store the chain. The question is under what rules we should create
| > another log entry for a DS RR that already exist.
|
| At least when the number of elements in the chain changes. Ideally when
| any DNSKEY in the chain changes.
This is useful. What about DS records in the chain? For this particular
experiment where we're logging DS records in the root and a handful of
TLDs it might not matter (or does it?), but let's say we'd like to use
this implementation for logging DS records in paul.nohats.com?
Disregarding DS records for establishing equality between two chains
for now, do you think that the following is a proper definition? (A
rough sketch in code follows the list.)
- Two submissions, A and B, are considered equal iff all of the
  following are true:
  - the canonicalised DS RRs in A and B are bitwise equal
  - the number of DNSKEY RRs in A and B is equal
  - all DNSKEY RRs in A and B are bitwise equal
- Accept up to 12 duplicates per day.
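Concretely, I imagine something like the following rough Python sketch.
The Submission fields, the canonicalisation step and the comparison of
the DNSKEYs in sorted order are my assumptions, not anything taken from
the draft:

from dataclasses import dataclass
from typing import List

MAX_DUPLICATES_PER_DAY = 12   # the suggested cap

@dataclass
class Submission:
    ds: bytes               # canonicalised DS RR in wire format (assumed)
    dnskeys: List[bytes]    # canonicalised DNSKEY RRs from the chain (assumed)

def submissions_equal(a: Submission, b: Submission) -> bool:
    # Equal iff the DS RRs are bitwise equal, the number of DNSKEY RRs
    # is equal and every DNSKEY RR is bitwise equal.  Comparing the
    # DNSKEYs in sorted (canonical) order is an assumption on my part.
    return (a.ds == b.ds
            and len(a.dnskeys) == len(b.dnskeys)
            and sorted(a.dnskeys) == sorted(b.dnskeys))

def accept(new: Submission, accepted_today: List[Submission]) -> bool:
    # Accept unless MAX_DUPLICATES_PER_DAY equal submissions have
    # already been accepted today.
    dups = sum(1 for old in accepted_today if submissions_equal(new, old))
    return dups < MAX_DUPLICATES_PER_DAY

(One could also compare the DNSKEYs in the order they happen to appear,
but canonical RRset order seems less fragile.)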
I'd like to point out that the contents of the chain are _not_ covered by
any of the log signatures (in SCTs or STHs). This is because we're
adapting RFC 6962, as per the draft. One effect of this is that a log can
change the contents of any chain without risking being held responsible
for it, at least not cryptographically.
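To spell that out: in an RFC 6962-style log the Merkle tree, and hence
the STH and SCT signatures, commits only to the leaf data, while the
chain is merely stored next to it. A rough illustration, where treating
the canonicalised DS RR as the leaf data is my assumption and not
necessarily how the draft encodes things:

import hashlib

def leaf_hash(canonical_ds: bytes) -> bytes:
    # RFC 6962-style leaf hash, SHA-256(0x00 || leaf).  Only the
    # (assumed) leaf data, the canonicalised DS RR, goes into the tree
    # and is therefore covered by the STH/SCT signatures.
    return hashlib.sha256(b"\x00" + canonical_ds).digest()

def store_entry(db: dict, canonical_ds: bytes, chain: list) -> bytes:
    # The chain is kept in the log's database next to the entry, but
    # nothing in the tree or in the signatures commits to it, so the
    # log could later replace it without that being cryptographically
    # detectable.
    h = leaf_hash(canonical_ds)
    db[h] = chain
    return h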
| > | Perhaps a simple cap of 12 per day would do? That allows us to catch some
| > | strange/unexpected resigns yet preventing spamming the log. Normal DS
| > | records should not have a validity of less then 2h.
| >
| > What stops an attacker from generating 12 distinct chains and submit
| > them to the log, blocking any attacked client from submitting that day?
|
| Nothing. But these attacks can only be mounted by your parent or higher
| up in the chain, or you yourself. So I don't see the problem? We have no
| 600 different CA paths that can do things here.
Ah. Regardless of the number of roots, this is just changing the number
of accepted duplicates from 0 to 11.
With a duplication check like the one outlined above, a log would still
accept and expose them.
| > | > - How stable is the chain? Should it be included in duplicate checks,
| > | > canonicalisation assumed? No, RRSIG's expire and get replaced and are
| > | > not stable.
| > |
| > | The chain must be included because the parent could be skipping a
| > | delegation while keeping the same DS (for now), and we surely would want
| > | to log that event.
| >
| > What does "skipping a delegation" mean?
|
| Currently .ca hosts a DS for nohats.ca. The root could remove the NS/DS for
| .ca and host the NS/DS for nohats.ca directly itself. Nothing would yet
| change for nohats.ca, but now the root (instead of .ca) has the power
| to change that DS directly. If the root zone publishes NS/DS records for
| .ca, then the root cannot publish a DS for nohats.ca (we should test but
| I hope all resolvers would call that a DNSSEC validation failure)
I see, thanks.
Wouldn't it be sufficient if we detected when the DS actually changed?
The cost of detecting this kind of attack preparation might be non-zero,
if it can be done at all (see the comment above about the chain not
being covered by the signatures).
| > When you say "must be included", do you mean that a log must accept the
| > same DS record twice if any bit in the chain is different?
|
| Yes,
This ("any bit") contradicts the idea presented above where only the
number of and the contents of the DNSKEY RR's were suggested to affect
the process of determining if two chains are considered equal or
not. Unless I misunderstood you above.