Serving a directory of BIND-formatted DNS records

Our current setup has a directory (versioned in git) with all our records.
Each file is named after the zone it contains.
A typical file looks like:

$TTL 1800
@   IN SOA (
    2012062701   ; serial
    300          ; refresh
    1800         ; retry
    14400        ; expire
    300 )        ; minimum

@                        IN NS

api                42000 IN CNAME
www                42000 IN CNAME
blog               42000 IN CNAME

@                   3600 IN MX 1
@                    300 IN TXT     "v=spf1 ~all"

If duplicate records are needed, say for .de and .com, we symlink the replica (with the actual zone as its name) to the primary. Our custom script will override the origin from the filename if it differs.
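As a sketch of that layout (the zone names here are hypothetical, since the real ones are not shown above), the symlink setup can be reproduced in a few shell commands:

```shell
# Hypothetical zone names: example.com is the primary file,
# example.de should serve identical records.
mkdir -p records
printf '$TTL 1800\n@ IN TXT "placeholder"\n' > records/example.com

# Symlink the duplicate zone to the primary; the link *name* is the
# zone to serve, the target holds the actual records.
ln -sf example.com records/example.de

# A script can then detect that the filename differs from the target
# and override the origin accordingly.
readlink records/example.de    # prints: example.com
```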

Would it be possible to have something like this as Corefile:

.:53 {
  file records/ # use filenames as zone + serve all files in directory
  proxy . # not found zones/files are proxied
  errors stdout
  log stdout
}

Not sure how specifying a directory of files would be done. Changing the Corefile to add a new zone seems unnecessary.

Thanks again for your work on CoreDNS.
Perhaps understanding it better will enable me to contribute some feedback, docs, etc.



Currently the Corefile would need to look like this:

.:2053 {
  proxy .
  file db zone2.local zone3.local zone4.local
  errors stdout
  log stdout
}

This would work with one db file containing all zones, which is not perfect for versioning and gets quite messy with a few more zones.

Yes, we should support this.

More commonly you can encode the origin of the zone in the file itself:

$TTL 1800
@   IN SOA dns domains (

Which means the filename does not really matter, which is a nice side effect: you then don’t have to parse the filenames and extract the origin info from them.
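In BIND master file syntax the origin can also be made fully explicit with `$ORIGIN` (example.org, ns1, and hostmaster below are placeholders), so the file carries its own zone name regardless of what it is called on disk:

```
$ORIGIN example.org.
$TTL 1800
@   IN SOA ns1 hostmaster (
    2012062701   ; serial
    300          ; refresh
    1800         ; retry
    14400        ; expire
    300 )        ; minimum
```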

We could still use a regexp (or file glob?) to explicitly say which filenames should be auto added:

auto db.*

or something as a directive?
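As a rough sketch of such a directive (nothing here is implemented; the directory path and pattern are made up), it might look like:

```
.:53 {
  auto records/db.*   # hypothetical: load every matching file, origin taken from the SOA inside
  proxy .
}
```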

Feel free to file bugs (documentation or otherwise) on the issue tracker on GitHub if you have other questions. We are working hard on docs and features :)

The idea behind not always using the origin provided within the file is to enable duplicate records.

Basically the records are defined in one file, but I have two zones that are basically duplicates (usually .com and .net resolving to the same IPs or so).

Using symlinks this looks something like this:

Using only the origin from within the file, we would only get one zone served instead of also having zone1 + zone2 (i.e. the ability to override the origin via filenames).

I like the idea of keeping file as it currently is.

file zones.db zone1 zone2 zone3 # using one file with 1+ zones listed, but only serving the specified zones
file zones.db # serving all zones specified within the db file

dir records # serving all zones within directory 

complex: # is there a need to support this?
auto records/*
auto* # serving all zones valid for regexp?

Still trying to understand the inner workings; I will open issues when they provide more value.

Ah yes.

That still involves parsing the filename to make this work. I’m not 100% a fan of this idea, but there is merit to it.

I like the idea of making it easy to bulk-serve DNS zones.

As a first workaround for duplicate records, one could use rewrite, right?

rewrite zone1 zone2

Would this be visible externally? Or is this similar to Caddy rewrites, so that I would request zone1 and the rewrite would just give me all the records for zone2 instead?

Rewriting or duplicating records is done less often than changing records, so it might be alright not to support it via files directly. Another approach could be an import directive, as Caddy does.
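For illustration, Caddy’s `import` directive splices another file into the config at that point; a hypothetical Corefile equivalent could factor out the shared directives (common.conf is a made-up filename):

```
.:2053 {
  file db zone2
  import common.conf   # hypothetical: inlines shared directives (errors, log, ...) from common.conf
}
```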

I like your thinking. But no, this is not externally visible, because the client expects a response for zone1, so this won’t expose zone2. (Also, this hasn’t been implemented yet.)

Of course there could be a rewrite middleware that works the other way around; the beauty there is that it would work for other backends as well.

rewrite2 *.com

But I need to think through the implications of this.

Not sure if I explained my idea correctly.

The current rewrite middleware idea does not expose zone2 but responds with the records of zone2 (while the client thinks it’s zone1), basically enabling duplicated zones backed by only one actual zone file.

.:2053 {
  proxy .
  file db zone2 # reads in db file and serves zone2
  rewrite zone1 zone2 # duplicates all records for zone1
  errors stdout
  log stdout
}

So here is what a full bulk + duplicate zone Corefile could look like:

.:2053 {
  proxy .
  file single/zone2.db zone2 # reads in specific file
  dir bulk_records # reads in dir
  rewrite zone1 zone2 # duplicates
  # later even
  rewrite { # enables easier duplication
    zone1 zone2
    zone3 zone2
  }
  errors stdout
  log stdout
}

Sidenote: why is there no root middleware?

'cause I wasn’t sure what it should do

Setting the root for the file directive? Not sure if it’s needed, but it might be cleaner and more consistent with Caddy.
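A sketch of what that could look like, modeled on Caddy’s root directive (the path and zone names are hypothetical):

```
.:53 {
  root /etc/coredns/zones   # hypothetical: file paths below resolve relative to here
  file db.example.org example.org
  proxy .
}
```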

Yes, creating a root middleware sounds like a good plan. I have to flesh out what exactly it needs to do. Can you open an issue for that?

Suggesting root middleware:
Extending file middleware:
Table syntax for rewrite middleware:

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.