paul at nohats.ca wrote
Thu, 14 Apr 2016 08:22:39 -0400 (EDT):
| On Thu, 14 Apr 2016, Linus Nordberg wrote:
|
| > The draft is not precise on the question of what comprises an entry with
| > regards to duplicates. Here are my thoughts about this today:
| >
| > - Duplicate checks are important to get right to avoid the log being
| > spammed to death.
| >
| > - Should we perform duplicate checks on the key only, i.e. DS RDATA?
| > That's what we're interested in detecting misissuance of, after
| > all. Well, owner name has to be included too. Type and class are
| > fixed, leaving us with TTL. Since canonicalisation of the DS record
| > sets its TTL to the Original TTL Field of the covering RRSIG, TTL
| > might be stable enough to be included in duplicate checks.
|
| You could create/store a hash of the entire chain? Possibly exclude the
| valid-from/expiry fields, so re-signs of the same record can be skipped
| by the log. Although it would be nice to see when certain records were
| re-signed.
We do store the chain. The question is under what rules we should create
another log entry for a DS RR that already exists.
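For concreteness, here is roughly the duplicate check I have in mind if we
key on owner name plus DS RDATA only (a Python sketch; none of the names
come from the draft, and whether the TTL belongs in the key is exactly the
open question):

import hashlib

def dedup_key(owner_name, ds_rdata, ttl=None):
    # Canonical owner name: lower-cased, fully qualified.
    owner = owner_name.lower().rstrip('.') + '.'
    parts = [owner.encode('ascii'), ds_rdata]
    if ttl is not None:
        # Only meaningful if this is the Original TTL from the covering RRSIG.
        parts.append(ttl.to_bytes(4, 'big'))
    return hashlib.sha256(b''.join(parts)).hexdigest()

seen = set()

def is_new_entry(owner_name, ds_rdata, ttl=None):
    key = dedup_key(owner_name, ds_rdata, ttl)
    if key in seen:
        return False   # duplicate: no new log entry
    seen.add(key)
    return True        # first time we see this DS: log it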
| Perhaps a simple cap of 12 per day would do? That allows us to catch some
| strange/unexpected re-signs while preventing spamming of the log. Normal
| DS records should not have a validity of less than 2h.
What stops an attacker from generating 12 distinct chains and submitting
them to the log, blocking the attacked client from submitting that day?
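To make the concern concrete, here is a sketch (Python again, purely
illustrative, not anything from the draft) of a per-name daily cap where
only exact-duplicate chains are free: an attacker who fabricates 12
distinct chains for a name burns the whole budget, and the legitimate
submitter is turned away until the next day.

import hashlib
from collections import defaultdict
from datetime import date

DAILY_CAP = 12
counts = defaultdict(int)   # (owner name, day) -> accepted submissions
seen_chains = set()         # hashes of chains already logged

def try_submit(owner_name, chain):
    h = hashlib.sha256(chain).hexdigest()
    if h in seen_chains:
        return True                      # exact duplicate: nothing new to log
    slot = (owner_name.lower(), date.today())
    if counts[slot] >= DAILY_CAP:
        return False                     # cap hit: legitimate submission refused too
    seen_chains.add(h)
    counts[slot] += 1
    return True                          # accepted: new log entry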
| > - How stable is the chain? Should it be included in duplicate checks,
| > canonicalisation assumed? No, RRSIGs expire and get replaced and are
| > not stable.
|
| The chain must be included because the parent could be skipping a
| delegation while keeping the same DS (for now), and we surely would want
| to log that event.
What does "skipping a delegation" mean?
When you say "must be included", do you mean that a log must accept the
same DS record twice if any bit in the chain is different?
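To spell the question out (still just a sketch, not a proposal): keying the
duplicate check on the chain as well means any changed bit, e.g. a routine
re-signing, turns the same DS into a "new" entry.

import hashlib

def key_ds_only(owner_name, ds_rdata):
    # Re-signing the zone produces no new entry: only the DS itself matters.
    return hashlib.sha256(owner_name.lower().encode('ascii') + ds_rdata).hexdigest()

def key_with_chain(owner_name, ds_rdata, chain):
    # Any difference in the chain (fresh RRSIG, reordered records, ...)
    # yields a new key, so the same DS gets logged again.
    return hashlib.sha256(owner_name.lower().encode('ascii') + ds_rdata
                          + hashlib.sha256(chain).digest()).hexdigest()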