I’m in the process of figuring out whether Bareos can replace AMANDA in my shop. What better way than to set up a test server and make as many mistakes as possible before going live?

Start by adding sysutils/bareos-client to your clients, your storage daemons, and your director. Add sysutils/bareos-server to your storage daemons and your director; the latter package will typically pull in databases/postgresql95-client unless you have changed DEFAULT_VERSIONS in /etc/make.conf and built Bareos yourself. Add www/apache24 and www/bareos-webui to your director. Add databases/postgresql95-server to your director and get PostgreSQL up and running, or utilise your existing PostgreSQL server if desired. Finally, add sysutils/smartmontools to your storage daemons, as smartctl can reportedly also diagnose tape drives.

Use /usr/local/lib/bareos/scripts/ddl/grants/postgresql.sql as a template, replacing @DB_USER@ with bareos and @DB_PASS@ with WITH PASSWORD something. The following sed snippet should give you a head start.

sed 's/@DB_USER@/bareos/g;s/@DB_PASS@/WITH PASSWORD something/g;' < /usr/local/lib/bareos/scripts/ddl/grants/postgresql.sql > /usr/local/lib/bareos/scripts/ddl/grants/postgresql-bareos-`hostname -s`.sql
chmod 0600 /usr/local/lib/bareos/scripts/ddl/grants/postgresql-bareos-`hostname -s`.sql

Edit the generated file and replace something with a long and random password placed in single quotes.
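One hedged way to generate such a password is with openssl from the base system; any comparable password generator works just as well.

```shell
# 32 bytes of entropy, base64-encoded to a 44-character string.
openssl rand -base64 32
```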

Log in to your database cluster, create the bareos database, connect to it, populate it, create the bareos user, and grant the bareos user the necessary permissions.

CREATE DATABASE bareos ENCODING=SQL_ASCII LC_CTYPE='C' TEMPLATE=template0;
\c bareos
\i /usr/local/lib/bareos/scripts/ddl/creates/postgresql.sql
\i /usr/local/lib/bareos/scripts/ddl/grants/postgresql-bareos-hostname.sql
\q

Update /usr/local/etc/bareos/bareos-dir.d/catalog/MyCatalog.conf to let the director access the database.

Catalog {
  Name = MyCatalog
  dbdriver = "postgresql"
  dbname = "bareos"
  dbuser = "bareos"
  dbaddress = "localhost"
  dbpassword = "somethinglongandrandom"
}

On the director, create the directory /usr/local/etc/bareos/bconsole.d/conf, and move /usr/local/etc/bareos/bconsole.d/bconsole.conf into it. This snag is caused by the bareos-client package and should be fixed by its maintainer.

Ensure matching passwords in /usr/local/etc/bareos/bconsole.d/conf/bconsole.conf and /usr/local/etc/bareos/bareos-dir.d/director/bareos-dir.conf.

Ensure /var/log/bareos/bareos.log exists (securely) on the director. The sysutils/bareos-server package only creates the /var/log/bareos directory.

touch /var/log/bareos/bareos.log
chown bareos:bareos /var/log/bareos/bareos.log
chmod 0640 /var/log/bareos/bareos.log
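While at it, you may want newsyslog to rotate that log. A hedged /etc/newsyslog.conf fragment; the rotation count and schedule below are assumptions, adjust to taste.

```
# Rotate the Bareos log weekly (Sunday midnight), keep 8 bzip2-compressed
# copies (J), recreate the file (C) with the right owner and mode.
/var/log/bareos/bareos.log	bareos:bareos	640  8	*     $W0D0  JC
```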

For your trial run only, ensure matching passwords in /usr/local/etc/bareos/bareos-dir.d/client/bareos-fd.conf on the director and /usr/local/etc/bareos/bareos-fd.d/director/bareos-dir.conf on the client. Ensure matching client names in /usr/local/etc/bareos/bareos-dir.d/client/bareos-fd.conf on the director and in /usr/local/etc/bareos/bareos-fd.d/client/myself.conf on the client.

Ensure matching passwords in /usr/local/etc/bareos/bareos-dir.d/storage/File.conf on the director, and in /usr/local/etc/bareos/bareos-sd.d/director/bareos-dir.conf on the storage daemon.

Edit /usr/local/etc/bareos/bareos-sd.d/device/FileStorage.conf on your storage daemon, changing /tmp to something more appropriate such as /var/spool/bareos. /var/spool/bareos could be a dataset on a dedicated zpool on your storage daemon. Remember to set the recordsize property to a high value, like 1M.
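A minimal sketch of creating such a dataset; the pool name spool is a made-up placeholder, and the chown assumes the storage daemon runs as the bareos user.

```
# Create the spool dataset with a large recordsize and mount it where the
# FileStorage device expects it ("spool" is a hypothetical pool name).
zfs create -o recordsize=1M -o mountpoint=/var/spool/bareos spool/bareos
chown bareos:bareos /var/spool/bareos
```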

You can use /etc/hosts.allow to secure access to the director, the storage daemons, and the clients. Remember that the daemon names are those defined in the local config files on each node. For the trial run these will be bareos-dir, bareos-sd, and bareos-fd. For a real client otherwise known as hostname.example.net, this will usually be hostname-fd.
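On the director, a hosts.allow fragment along these lines would restrict who may talk to bareos-dir; the hostnames are placeholders, and localhost is kept open for bconsole and the web UI.

```
# Allow only localhost and the storage daemons to reach the director.
bareos-dir : localhost bareos-sd01.example.net bareos-sd02.example.net : allow
bareos-dir : ALL : deny
```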

I also recommend locking down the nodes using (V)ACLs in the router/switch. The storage daemons should be the only ones able to connect to TCP port 9101 on the director. Likewise, only the director should be able to connect to TCP port 9102 on the file daemons/clients. And, the director and the file daemons/clients should be the only ones able to connect to TCP port 9103 on the storage daemons.
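The same policy can be sketched in pf on each node. For the director (the addresses below are made up), remember to let localhost through for bconsole and the web UI, e.g. with set skip on lo0.

```
# /etc/pf.conf sketch on the director: only the storage daemons may
# reach TCP port 9101. "quick" makes the first matching rule win.
set skip on lo0
storage_daemons = "{ 192.0.2.11, 192.0.2.12 }"
pass in quick proto tcp from $storage_daemons to any port 9101
block in quick proto tcp from any to any port 9101
```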

Edit /etc/rc.conf on each node, adding
bareos_dir_enable="YES",
bareos_sd_enable="YES", and
bareos_fd_enable="YES", as needed.

Now, everything should be set up correctly for a trial run, and you can try starting the director, storage daemon(s), and client(s).

service bareos-dir start
service bareos-sd  start
service bareos-fd  start

Try running bconsole as root and see if everything checks out. Note that the example below was run the day after the very first runs of the backup-bareos-fd and BackupCatalog jobs.

# bconsole
Connecting to Director localhost:9101
1000 OK: bareos-dir Version: 17.2.7 (16 Jul 2018)
Enter a period to cancel a command.
*status all
bareos-dir Version: 17.2.7 (16 Jul 2018) amd64-portbld-freebsd12.0 freebsd 12.0-SYNTH
Daemon started 14-Feb-19 20:09. Jobs: run=2, running=0 mode=0 db=postgresql
 Heap: heap=0 smbytes=651,345 max_bytes=12,404,213 bufs=2,687 max_bufs=4,746

Scheduled Jobs:
Level          Type     Pri  Scheduled          Name               Volume
===================================================================================
Incremental    Backup    10  15-Feb-19 21:00    backup-bareos-fd   *unknown*
Full           Backup    11  15-Feb-19 21:10    BackupCatalog      *unknown*
====

Running Jobs:
Console connected at 15-Feb-19 10:30
No Jobs running.
====

Terminated Jobs:
 JobId  Level    Files      Bytes   Status   Finished        Name
====================================================================
     1  Full      1,620    486.8 M  OK       14-Feb-19 21:01 backup-bareos-fd
     2  Full         99    292.6 K  OK       14-Feb-19 21:10 BackupCatalog


Client Initiated Connections (waiting for jobs):
Connect time        Protocol            Authenticated       Name
====================================================================================================
====
Connecting to Storage daemon File at bareos-sd.FQDN:9103

bareos-sd Version: 17.2.7 (16 Jul 2018) amd64-portbld-freebsd12.0 freebsd 12.0-SYNTH
Daemon started 14-Feb-19 20:09. Jobs: run=2, running=0.
 Heap: heap=0 smbytes=108,604 max_bytes=178,132 bufs=97 max_bufs=119
 Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 mode=0 bwlimit=0kB/s

Running Jobs:
No Jobs running.
====

Jobs waiting to reserve a drive:
====

Terminated Jobs:
 JobId  Level    Files      Bytes   Status   Finished        Name
===================================================================
     1  Full      1,620    487.0 M  OK       14-Feb-19 21:01 backup-bareos-fd
     2  Full         99    306.6 K  OK       14-Feb-19 21:10 BackupCatalog
====

Device status:

Device "FileStorage" (/var/spool/bareos) is not open.
==
====

Used Volume status:
====

====

Connecting to Client bareos-fd at localhost:9102

bareos-fd Version: 17.2.7 (16 Jul 2018)  amd64-portbld-freebsd12.0 freebsd 12.0-SYNTH
Daemon started 14-Feb-19 20:09. Jobs: run=2 running=0.
 Heap: heap=0 smbytes=112,841 max_bytes=124,189 bufs=87 max_bufs=132
 Sizeof: boffset_t=8 size_t=8 debug=0 trace=0 bwlimit=0kB/s

Running Jobs:
bareos-dir (director) connected at: 15-Feb-19 10:30
No Jobs running.
====

Terminated Jobs:
 JobId  Level    Files      Bytes   Status   Finished        Name
======================================================================
     1  Full      1,620    486.8 M  OK       14-Feb-19 21:01 backup-bareos-fd
     2  Full         99    292.6 K  OK       14-Feb-19 21:10 BackupCatalog
====
*q
#

While waiting for the scheduled jobs to kick in, it’s time to set up the web user interface.

Edit /usr/local/etc/bareos-webui/apache-bareos-webui.conf and replace /usr/share/bareos-webui/public with /usr/local/www/bareos-webui/public. This should have been done by the package, but alas, we must do it ourselves.

Edit /usr/local/etc/apache24/httpd.conf, adding this line at the bottom of the file.

Include /usr/local/etc/bareos-webui/apache-bareos-webui.conf

Copy and change the ownership of /usr/local/etc/bareos-webui/bareos-dir.d/console/admin.conf and /usr/local/etc/bareos-webui/bareos-dir.d/profile/webui-admin.conf.

cp -p /usr/local/etc/bareos-webui/bareos-dir.d/console/admin.conf /usr/local/etc/bareos/bareos-dir.d/console/admin.conf
chown root:bareos /usr/local/etc/bareos/bareos-dir.d/console/admin.conf
cp -p /usr/local/etc/bareos-webui/bareos-dir.d/profile/webui-admin.conf /usr/local/etc/bareos/bareos-dir.d/profile/webui-admin.conf
chown root:bareos /usr/local/etc/bareos/bareos-dir.d/profile/webui-admin.conf

Edit the password in /usr/local/etc/bareos/bareos-dir.d/console/admin.conf to something more sensible.

Restart Apache and reload the director.

# service apache24 restart
# bconsole
Connecting to Director localhost:9101
1000 OK: bareos-dir Version: 17.2.7 (16 Jul 2018)
Enter a period to cancel a command.
*reload
reloaded
*q
#

Try accessing the web user interface at http://localhost/bareos-webui/.


For production usage, I’ve been contemplating this setup and these definitions. Constructive comments are welcome.

  • There will be exactly one director, named bareos-dir and registered in DNS as bareos-dir.FQDN.
  • There will be several file daemons/clients, named clientname-fd and registered in DNS as clientname.FQDN.
  • There may be several storage daemons, named bareos-sdNN-sd and registered in DNS as bareos-sdNN.FQDN.
  • B2D storage will be one per storage daemon and should be named bareos-sdNN-b2d.
  • B2D storage will be placed on a dedicated ZFS pool per storage daemon.
  • B2D storage will utilise as much disk space as possible, observing ZFS’ rule of 20 % free space.
  • At least one storage daemon will have one or more LTO-7 tape devices, and these devices should be named bareos-sdNN-LTO-7-saN.
  • The user bareos will be a member of the group operator to access the tape devices on the storage daemons.
  • Backup jobs will be a combination of B2D and LTO-7.
  • In the future I hope all B2D storage can be pooled together.
  • The same goes for the LTO-7 tape devices.
  • For now, every B2D storage and every LTO-7 tape device must have its own pool.
  • Some jobs/jobdefs must override the pool specified higher up in the hierarchy.
  • The pool names will be used for autolabelling and must thus be somewhat short, yet clearly distinguishable.
  • B2D pools will be named B2D-sdNN.
  • LTO-7 pools will be named LTO-7-sdNN-saN.
  • Each pool will have its own counter for labelling its volumes.
  • The counters for B2D pools will be named Counter_B2D_sdNN.
  • The counters for LTO-7 pools will be named Counter_LTO_7_sdNN_saN.
  • B2D volumes will use ${Pool}-${Counter_B2D_sdNN+:p/4/0/r} as the label format.
  • LTO-7 volumes will be pre-labeled using the names from their physical label.
  • An alternate label format when autolabelling the LTO-7 volumes is ${Pool}-${Counter_LTO_7_sdNN_saN+:p/4/0/r}.
  • Job priorities should be as follows:
    • 10, 11, …, 19 for B2D backup jobs
    • 20, 21, …, 29 for LTO-7 backup jobs
    • 30, 31, …, 39 for catalog backup jobs
    • 40, 41, …, 49 for verify jobs
  • Each Monday the next tape is loaded in the tape device and Bareos is told to mount the tape.
  • Each tape will carry a whole week of backups.
  • If each full backup amounts to no more than 856 GB, then one LTO-7 tape (∼6000 GB) a week will be sufficient.
  • Otherwise, the schedule must be changed to permit incremental backups, Tuesdays to Sundays.
  • Ideally there would be a tape changer/library with a capacity of 48+ tapes, but the cost alone prohibits such devices. What a shame.
  • A total of 110 tapes should be sufficient for a two year horizon. This actually gives us 2 years and 5-6 weeks.
  • Keeping some extra LTO-7 tapes on hand is common sense in case of emergencies.
  • Using ZFS snapshots will be a huge benefit on the FreeBSD servers. Can this simply be done by creating a snapshot by way of RunBeforeJob, listing the snapshot directory instead of the live filesystem, and removing the snapshot by way of RunAfterJob? Bacula Enterprise has a ZFS snapshot add-on. Maybe I should switch to Bacula.
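A hedged sketch of that RunBeforeJob/RunAfterJob idea, assuming a hypothetical dataset tank/home mounted on /home; the job's FileSet would then list /home/.zfs/snapshot/bareos instead of /home. Everything below is an assumption, not a tested recipe.

```
Job {
  Name = "backup-clientNN-fd-B2D-snapshot"
  JobDefs = "JobDefs-B2D-FreeBSD"
  Client = clientNN-fd
  # Take a fixed-name snapshot on the client before the job runs, and
  # drop it again afterwards (dataset name tank/home is hypothetical).
  Client Run Before Job = "/sbin/zfs snapshot tank/home@bareos"
  Client Run After Job  = "/sbin/zfs destroy tank/home@bareos"
}
```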

The director bareos-dir is defined in bareos-dir as

Director {
  Name = bareos-dir
  Auditing = yes
  Maximum Concurrent Jobs = 20
  Messages = Daemon
  Password = "somethinglongandrandom1"
  QueryFile = "/usr/local/lib/bareos/scripts/query.sql"
}

Local console access is given by

Director {
  Name = bareos-dir
  address = localhost
  Password = "somethinglongandrandom1"
}

The storage device bareos-sd01-LTO-7-sa0 is defined in bareos-dir as

Storage {
  Name = "bareos-sd01-LTO-7-sa0"
  Password = "somethinglongandrandom2"
  Address = bareos-sd01.FQDN
  Device = "bareos-sd01-LTO-7-sa0"
  Media Type = "LTO-7"
}

The storage device bareos-sd01-b2d is defined in bareos-dir as

Storage {
  Name = "bareos-sd01-b2d"
  Password = "somethinglongandrandom2"
  Address = bareos-sd01.FQDN
  Device = "bareos-sd01-b2d"
  Media Type = File
}

The storage device bareos-sd02-b2d is defined in bareos-dir as

Storage {
  Name = "bareos-sd02-b2d"
  Password = "somethinglongandrandom3"
  Address = bareos-sd02.FQDN
  Device = "bareos-sd02-b2d"
  Media Type = File
}

The counter Counter_B2D_sd01 is defined in bareos-dir as

Counter {
  Name = "Counter_B2D_sd01"
  Catalog = MyCatalog
  Minimum = 1
  Maximum = 9999
}

The counter Counter_B2D_sd02 is defined in bareos-dir as

Counter {
  Name = "Counter_B2D_sd02"
  Catalog = MyCatalog
  Minimum = 1
  Maximum = 9999
}

The counter Counter_LTO_7_sd01_sa0 is defined in bareos-dir as

Counter {
  Name = "Counter_LTO_7_sd01_sa0"
  Catalog = MyCatalog
  Minimum = 1
  Maximum = 9999
}

The pool B2D-sd01 is defined in bareos-dir as

Pool {
  Name = "B2D-sd01"
  Storage = bareos-sd01-b2d
  File Retention = 3 years
  Job Retention = 3 years
  Label Format = "${Pool}-${Counter_B2D_sd01+:p/4/0/r}"
  Maximum Volume Bytes = 100G
  Maximum Volumes = 400 # Assumes a total capacity of 40.0 TiB
  Volume Retention = 3 years
}
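To illustrate what the Label Format should expand to, the counter is incremented and zero-padded to four digits. The printf call below only mimics Bareos’s ${Counter_B2D_sd01+:p/4/0/r} expansion; it is an analogue, not the real mechanism.

```shell
# Mimic the label format: pool name plus the counter, padded to 4 digits.
pool="B2D-sd01"
counter=7
printf '%s-%04d\n' "$pool" "$counter"   # → B2D-sd01-0007
```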

The pool LTO-7-sd01-sa0 is defined in bareos-dir as

Pool {
  Name = "LTO-7-sd01-sa0"
  Storage = "bareos-sd01-LTO-7-sa0"
  File Retention = 2 years
  Job Retention = 2 years
  #Label Format = "${Pool}-${Counter_LTO_7_sd01_sa0+:p/4/0/r}" # Alternative label format
  Maximum Volumes = 110
  Volume Retention = 2 years
  Volume Use Duration = 7 days
}

The pool B2D-sd02 is defined in bareos-dir as

Pool {
  Name = "B2D-sd02"
  Storage = bareos-sd02-b2d
  File Retention = 3 years
  Job Retention = 3 years
  Label Format = "${Pool}-${Counter_B2D_sd02+:p/4/0/r}"
  Maximum Volume Bytes = 100G
  Maximum Volumes = 400 # Assumes a total capacity of 40.0 TiB
  Volume Retention = 3 years
}

The storage daemon bareos-sd01-sd is defined in bareos-sd01-sd as

Storage {
  Name = bareos-sd01-sd
  Maximum Concurrent Jobs = 20
}

The storage daemon bareos-sd01-sd must grant access to the director bareos-dir.

Director {
  Name = bareos-dir
  Password = "somethinglongandrandom2"
}

The bareos-sd01-LTO-7-sa0 storage device is defined in bareos-sd01-sd as

Device {
  Name = "bareos-sd01-LTO-7-sa0"
  Enabled = yes
  Media Type = "LTO-7"
  Alert Command = "sh -c 'smartctl -H -l error %c'"
  Archive Device = /dev/nsa0
  AutomaticMount = yes
  AlwaysOpen = yes
  Backward Space Record = no
  BSF At EOM = yes
  Fast Forward Space File = no
  Hardware End of Medium = no
  Label Media = no
  Offline On Unmount = no
  Random Access = no
  RemovableMedia = yes
  Spool Directory = /var/spool/bareos
  Two EOF = yes
}

The bareos-sd01-b2d storage device is defined in bareos-sd01-sd as

Device {
  Name = "bareos-sd01-b2d"
  Enabled = yes
  AlwaysOpen = no
  Archive Device = /var/spool/bareos
  AutomaticMount = yes
  Label Media = yes
  Media Type = File
  Random Access = yes
  RemovableMedia = no
}

The storage daemon bareos-sd02-sd is defined in bareos-sd02-sd as

Storage {
  Name = bareos-sd02-sd
  Maximum Concurrent Jobs = 20
}

The storage daemon bareos-sd02-sd must grant access to the director bareos-dir.

Director {
  Name = bareos-dir
  Password = "somethinglongandrandom3"
}

The bareos-sd02-b2d storage device is defined in bareos-sd02-sd as

Device {
  Name = "bareos-sd02-b2d"
  Enabled = yes
  Media Type = File
  Archive Device = /var/spool/bareos
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

The Schedule-B2D schedule is defined in bareos-dir as

Schedule {
  Name = "Schedule-B2D"
  Description = "This schedule runs a full backup every day at 6:00 and then it runs incremental backups on the hour between 8:00 and 21:00."
  Enabled = yes
  Run = Level=Full        at  6:00
  Run = Level=Incremental at  8:00
  Run = Level=Incremental at  9:00
  Run = Level=Incremental at 10:00
  Run = Level=Incremental at 11:00
  Run = Level=Incremental at 12:00
  Run = Level=Incremental at 13:00
  Run = Level=Incremental at 14:00
  Run = Level=Incremental at 15:00
  Run = Level=Incremental at 16:00
  Run = Level=Incremental at 17:00
  Run = Level=Incremental at 18:00
  Run = Level=Incremental at 19:00
  Run = Level=Incremental at 20:00
  Run = Level=Incremental at 21:00
}

Maybe we should consider incremental backups every half hour.

Schedule {
  Name = "Schedule-B2D"
  Description = "This schedule runs a full backup every day at 6:00 and then it runs incremental backups every half hour between 8:00 and 21:30."
  Enabled = yes
  Run = Level=Full        at  6:00
  Run = Level=Incremental at  8:00
  Run = Level=Incremental at  8:30
  Run = Level=Incremental at  9:00
  Run = Level=Incremental at  9:30
  Run = Level=Incremental at 10:00
  Run = Level=Incremental at 10:30
  Run = Level=Incremental at 11:00
  Run = Level=Incremental at 11:30
  Run = Level=Incremental at 12:00
  Run = Level=Incremental at 12:30
  Run = Level=Incremental at 13:00
  Run = Level=Incremental at 13:30
  Run = Level=Incremental at 14:00
  Run = Level=Incremental at 14:30
  Run = Level=Incremental at 15:00
  Run = Level=Incremental at 15:30
  Run = Level=Incremental at 16:00
  Run = Level=Incremental at 16:30
  Run = Level=Incremental at 17:00
  Run = Level=Incremental at 17:30
  Run = Level=Incremental at 18:00
  Run = Level=Incremental at 18:30
  Run = Level=Incremental at 19:00
  Run = Level=Incremental at 19:30
  Run = Level=Incremental at 20:00
  Run = Level=Incremental at 20:30
  Run = Level=Incremental at 21:00
  Run = Level=Incremental at 21:30
}

The Schedule-LTO-7 schedule is defined in bareos-dir as

Schedule {
  Name = "Schedule-LTO-7"
  Description = "This schedule runs a full backup every day at 22:00."
  Enabled = yes
  Run = Level=Full at 22:00
}

We may at some point run out of tape. We would then switch to a different schedule.

Schedule {
  Name = "Schedule-LTO-7"
  Description = "This schedule runs a full backup every Monday at 22:00 and incremental backups Tuesdays to Sundays at 22:00."
  Enabled = yes
  Run = Level=Full        mon     at 22:00
  Run = Level=Incremental tue-sun at 22:00
}

When we can’t replace the tape in an orderly fashion, it’s better to disable the LTO-7 schedule and reload the director.

The Schedule-MyCatalog schedule is defined in bareos-dir as

Schedule {
  Name = "Schedule-MyCatalog"
  Description = "This schedule runs a full backup every day at 01:00."
  Enabled = yes
  Run = Level=Full at 01:00
}

These are the two FileSets as defined in bareos-dir.

FileSet {
  Name = "FileSet-FreeBSD"

  Include {
    Options {
      Exclude = yes
      FS Type = ufs
      FS Type = zfs
      One FS = no
      Signature = MD5

      WildDir = ".snap"
      WildDir = ".zfs"
    }

    Exclude Dir Containing = .nobackup

    File = /boot/loader.conf
    File = /etc
    File = /home
    File = /root
    File = /usr/local/certs
    File = /usr/local/etc
    File = /usr/local/var
    File = /usr/local/www
    File = /var/backups
    File = /var/db/pkg
    File = /var/db/ports
    File = /var/log
    File = /var/mail
    File = /var/spool/clientmqueue
    File = /var/spool/mqueue
  }
}
FileSet {
  Name = "FileSet-Windows"
  Enable VSS = yes

  Include {
    Options {
      Drive Type = fixed
      Exclude = yes
      FS Type = ntfs
      IgnoreCase = yes
      Signature = MD5

      WildDir  = "[A-Z]:/RECYCLER"
      WildDir  = "[A-Z]:/$RECYCLE.BIN"
      WildDir  = "[A-Z]:/System Volume Information"
      WildFile = "[A-Z]:/pagefile.sys"
    }

    Exclude Dir Containing = .nobackup

    File = /
  }
}
FileSet {
  Name = "FileSet-MyCatalog"
  Include {
    Options {
      signature = MD5
    }
    File = "/var/db/bareos/bareos.sql"
    File = "/usr/local/etc/bareos"
  }
}

The gory details of the jobs can be placed in fourteen JobDefs defined in bareos-dir.

JobDefs {
  Name = "JobDefs-Default"
  Type = Backup
  Accurate = yes
  Messages = Standard
  Priority = 10
  Write Bootstrap = "/var/db/bareos/%c-%n-%i.bsr"
}
JobDefs {
  Name = "JobDefs-B2D"
  JobDefs = "JobDefs-Default"
  Pool = "B2D-sd01"
  Schedule = "Schedule-B2D"
}
JobDefs {
  Name = "JobDefs-LTO-7"
  JobDefs = "JobDefs-Default"
  Pool = "LTO-7-sd01-sa0"
  Priority = 20
  Schedule = "Schedule-LTO-7"
  Spool Data = yes
}
JobDefs {
  Name = "JobDefs-B2D-FreeBSD"
  JobDefs = "JobDefs-B2D"
  FileSet = "FileSet-FreeBSD"
}
JobDefs {
  Name = "JobDefs-B2D-Windows-Server"
  JobDefs = "JobDefs-B2D"
  FileSet = "FileSet-Windows"
}
JobDefs {
  Name = "JobDefs-LTO-7-FreeBSD"
  JobDefs = "JobDefs-LTO-7"
  FileSet = "FileSet-FreeBSD"
}
JobDefs {
  Name = "JobDefs-LTO-7-Windows-Server"
  JobDefs = "JobDefs-LTO-7"
  FileSet = "FileSet-Windows"
}
JobDefs {
  Name = "JobDefs-MyCatalog"
  JobDefs = "JobDefs-Default"
  Client = bareos-dir
  FileSet = "FileSet-MyCatalog"
  Level = Full
  RunBeforeJob = "/usr/local/lib/bareos/scripts/make_catalog_backup.pl MyCatalog"
  RunAfterJob  = "/usr/local/lib/bareos/scripts/delete_catalog_backup"
  Schedule = "Schedule-MyCatalog"
  Write Bootstrap = "|/usr/local/bin/bsmtp -h localhost -f \"\(Bareos\) \" -s \"Bootstrap for Job %j\" root@localhost"
}
JobDefs {
  Name = "JobDefs-MyCatalog-LTO-7"
  JobDefs = "JobDefs-MyCatalog"
  Pool = "LTO-7-sd01-sa0"
  Priority = 30
  Spool Data = yes
}
JobDefs {
  Name = "JobDefs-MyCatalog-B2D"
  JobDefs = "JobDefs-MyCatalog"
  Pool = "B2D-sd01"
  Priority = 31
}
JobDefs {
  Name = "JobDefs-B2D-FreeBSD-verify"
  JobDefs = "JobDefs-B2D-FreeBSD"
  Type = Verify
  Level = VolumeToCatalog
  Priority = 40
}
JobDefs {
  Name = "JobDefs-B2D-Windows-Server-verify"
  JobDefs = "JobDefs-B2D-Windows-Server"
  Type = Verify
  Level = VolumeToCatalog
  Priority = 40
}
JobDefs {
  Name = "JobDefs-LTO-7-FreeBSD-verify"
  JobDefs = "JobDefs-LTO-7-FreeBSD"
  Type = Verify
  Level = VolumeToCatalog
  Priority = 41
}
JobDefs {
  Name = "JobDefs-LTO-7-Windows-Server-verify"
  JobDefs = "JobDefs-LTO-7-Windows-Server"
  Type = Verify
  Level = VolumeToCatalog
  Priority = 41
}

Two imaginary clients named client01-fd and client02-fd are defined in bareos-dir as

Client {
  Name = client01-fd
  Address = client01.FQDN
  Password = "somethinglongandrandom4"
}
Client {
  Name = client02-fd
  Address = client02.FQDN
  Password = "somethinglongandrandom5"
}

These clients define themselves as

Client {
  Name = client01-fd
  Maximum concurrent Jobs = 20
}
Client {
  Name = client02-fd
  Maximum concurrent Jobs = 20
}

The clients must allow the director access.

Director {
  Name = bareos-dir
  Password = "somethinglongandrandom[45]"
}

Here are eight sample jobs, four jobs for each of the two imaginary clients named client01-fd and client02-fd.

Job {
  Name = "backup-client01-fd-B2D"
  JobDefs = "JobDefs-B2D-FreeBSD"
  Client = client01-fd
}

Job {
  Name = "backup-client01-fd-LTO-7"
  JobDefs = "JobDefs-LTO-7-FreeBSD"
  Client = client01-fd
}

Job {
  Name = "backup-client01-fd-B2D-verify"
  JobDefs = "JobDefs-B2D-FreeBSD-verify"
  Client = client01-fd
}

Job {
  Name = "backup-client01-fd-LTO-7-verify"
  JobDefs = "JobDefs-LTO-7-FreeBSD-verify"
  Client = client01-fd
}
Job {
  Name = "backup-client02-fd-B2D"
  JobDefs = "JobDefs-B2D-Windows-Server"
  Client = client02-fd
}

Job {
  Name = "backup-client02-fd-LTO-7"
  JobDefs = "JobDefs-LTO-7-Windows-Server"
  Client = client02-fd
}

Job {
  Name = "backup-client02-fd-B2D-verify"
  JobDefs = "JobDefs-B2D-Windows-Server-verify"
  Client = client02-fd
}

Job {
  Name = "backup-client02-fd-LTO-7-verify"
  JobDefs = "JobDefs-LTO-7-Windows-Server-verify"
  Client = client02-fd
}

If we have two or more storage daemons, they should store their backups on each other.

Client {
  Name = bareos-sd01-fd
  Address = bareos-sd01.FQDN
  Password = "somethinglongandrandom6"
}
Job {
  Name = "backup-bareos-sd01-fd-B2D"
  JobDefs = "JobDefs-B2D-FreeBSD"
  Client = bareos-sd01-fd
  Pool = "B2D-sd02"
}

Job {
  Name = "backup-bareos-sd01-fd-LTO-7"
  JobDefs = "JobDefs-LTO-7-FreeBSD"
  Client = bareos-sd01-fd
}

Job {
  Name = "backup-bareos-sd01-fd-B2D-verify"
  JobDefs = "JobDefs-B2D-FreeBSD-verify"
  Client = bareos-sd01-fd
  Pool = "B2D-sd02"
}

Job {
  Name = "backup-bareos-sd01-fd-LTO-7-verify"
  JobDefs = "JobDefs-LTO-7-FreeBSD-verify"
  Client = bareos-sd01-fd
}
Client {
  Name = bareos-sd02-fd
  Address = bareos-sd02.FQDN
  Password = "somethinglongandrandom7"
}
Job {
  Name = "backup-bareos-sd02-fd-B2D"
  JobDefs = "JobDefs-B2D-FreeBSD"
  Client = bareos-sd02-fd
}

Job {
  Name = "backup-bareos-sd02-fd-LTO-7"
  JobDefs = "JobDefs-LTO-7-FreeBSD"
  Client = bareos-sd02-fd
}

Job {
  Name = "backup-bareos-sd02-fd-B2D-verify"
  JobDefs = "JobDefs-B2D-FreeBSD-verify"
  Client = bareos-sd02-fd
}

Job {
  Name = "backup-bareos-sd02-fd-LTO-7-verify"
  JobDefs = "JobDefs-LTO-7-FreeBSD-verify"
  Client = bareos-sd02-fd
}
Job {
  Name = "backup-MyCatalog-LTO-7"
  JobDefs = "JobDefs-MyCatalog-LTO-7"
}

Job {
  Name = "backup-MyCatalog-B2D-sd01"
  JobDefs = "JobDefs-MyCatalog-B2D"
}

#Job {
#  Name = "backup-MyCatalog-B2D-sd02"
#  JobDefs = "JobDefs-MyCatalog-B2D"
#  Pool = "B2D-sd02"
#  Priority = 32
#}
