Our current setup has a directory (versioned with git) containing all our records.
Each file is named after its zone, e.g. service1.dns.example.com or example.de.
A typical file looks like:
$TTL 1800
example.com.  IN SOA dns.example.com. domains.example.com. (
                  2012062701 ; serial
                  300        ; refresh
                  1800       ; retry
                  14400      ; expire
                  300 )      ; minimum
@     IN NS dns.example.com.
api   42000 IN CNAME sample.service.dns.example.de.
www   42000 IN CNAME sample.service.dns.example.de.
blog  42000 IN CNAME sample.service.dns.example.de.
@     3600  IN MX 1 ASPMX.L.google.com.
@     300   IN TXT "v=spf1 include:_spf.google.com ~all"
If duplicate records are needed, say for .de and .com, we symlink the replica (named after the actual zone) to the primary file. Our custom script then overwrites the zone's origin from the filename if it differs.
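The symlink workflow described above can be sketched in shell. Note that the paths and the sed-based origin rewrite are illustrative assumptions, not the actual custom script:

```shell
set -e
mkdir -p records

# Primary zone file, named after its zone.
cat > records/example.com <<'EOF'
$TTL 1800
example.com. IN SOA dns.example.com. domains.example.com. (
        2012062701 300 1800 14400 300 )
@ IN NS dns.example.com.
EOF

# Duplicate zone: a symlink named after the actual zone, pointing at the primary.
ln -sf example.com records/example.de

# A script could then rewrite the origin to match the filename when it differs,
# e.g. with a (hypothetical) substitution like this:
sed 's/^example\.com\./example.de./' records/example.de > /tmp/example.de.resolved
```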
Would it be possible to have something like this as Corefile:
.:53 {
    file records/        # use filenames as zones + serve all files in the directory
    proxy . 8.8.8.8:53   # zones/files not found here are proxied
    errors stdout
    log stdout
}
I'm not sure how specifying a directory of files would be done. Having to change the Corefile just to add a new zone seems unnecessary.
Thanks again for your work on CoreDNS.
Perhaps understanding it better will enable me to contribute some feedback, docs, etc.
More commonly you can encode the origin of the zone in the file itself:
$TTL 1800
$ORIGIN example.com.
@ IN SOA dns domains (
Which means the filename does not really matter, which is a nice side effect, as you then don't have to parse filenames and extract the origin from them.
We could still use a regexp (or file glob?) to explicitly say which filenames should be auto-added:
auto db.*
or something like that as a directive?
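For concreteness, a hypothetical Corefile using such a directive might look like this (the `auto` syntax here is purely a sketch, not an implemented feature):

```
.:53 {
    auto db.*            # hypothetical: load every file matching the glob,
                         # taking each zone's origin from $ORIGIN in the file
    proxy . 8.8.8.8:53
    errors stdout
    log stdout
}
```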
Feel free to file bugs (documentation or otherwise) on the GitHub issue tracker if you have other questions. We are working hard on docs and features.
The idea behind not always using the origin provided within the file is to enable duplicate records.
There are records defined in db.example.com, but I have two zones that are basically duplicates (usually .com and .net resolving to the same IPs or so).
Using only the origin from within the file, we would get just db.example.com served instead of zone1 + zone2 (i.e. the ability to override the origin via filenames).
I like the idea of keeping file as it currently is.
basic:
    file zones.db zone1 zone2 zone3   # one file with 1+ zones listed, but only the specified zones are served
    file zones.db                     # serves all zones defined within the db file
advanced:
    dir records                       # serves all zones within the directory
complex:                              # is there a need to support this?
    auto records/*
    auto zones.com.*                  # serves all zones matching the regexp?
I'm still trying to understand the inner workings; I will open issues when they provide more value.
As a first workaround for duplicate records, one could use rewrite, right?
rewrite zone1 zone2
Would this be visible externally? Or is this similar to Caddy rewrites, so that I would request zone1 and the rewrite would just give me all the records of zone2 instead?
Rewriting or duplicating records is done far less often than changing records, so it might be alright not to support it via files directly. Another approach could be an import directive, as Caddy does.
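As an aside, standard master-file syntax (RFC 1035) already has an $INCLUDE directive, which covers part of the "share records across files" idea: the common records can live in one file that several zone files pull in. A sketch, with hypothetical filenames:

```
$ORIGIN example.net.
$TTL 1800
@ IN SOA dns.example.com. domains.example.com. (
        2012062701 300 1800 14400 300 )
$INCLUDE shared-records.db
```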
I like your thinking. But no, this is not externally visible, because the client expects a response for zone1, so this won't expose zone2. (Also, this hasn't been implemented yet.)
Of course there could be a rewrite middleware that works the other way around; the beauty there is that it would work for other backends as well.
The current rewrite middleware does not expose zone2; it responds with the records of zone2 while the client thinks it's zone1. That basically enables duplicating zones backed by only one actual zone file.
.:2053 {
    proxy . 8.8.8.8:53
    file db zone2        # reads in the db file and serves zone2
    rewrite zone1 zone2  # duplicates all records for zone1
    errors stdout
    log stdout
}
So here is what a full bulk + duplicate zone Corefile could look like:
.:2053 {
    proxy . 8.8.8.8:53
    file single/zone2.db zone2   # reads in a specific file
    dir bulk_records             # reads in a directory
    rewrite zone1 zone2          # duplicates
    # later even
    rewrite {                    # enables easier duplication
        zone1 zone2
        zone3 zone2
    }
    errors stdout
    log stdout
}