Bacula 1: the open source backup software

Learn how to build a 100% free, reliable open source backup system with Bacula Community Software
3.9 (8 ratings)
125 students enrolled
$45
Take This Course
  • Lectures 63
  • Contents Video: 7.5 hours
    Other: 11 mins
  • Skill Level All Levels
  • Languages English
  • Includes Lifetime access
    30 day money back guarantee!
    Available on iOS and Android
    Certificate of Completion


About This Course

Published 8/2014 English

Course Description

The course presents the theory and practice of backups, covering installation, recovery, backup-specific tools, disaster recovery and many other important aspects of safe and reliable backups. Real-world data recording and retrieval situations are simulated.

The course is very practical: 10% theory / 90% hands-on, and all the commands are available in the training documentation.

Some topics:

* Types, myths, topologies and backup strategies (GFS).

* Bacula Key Features and Architecture

* Bacula Server Installation and Configuration on Linux

* Installing and Configuring the Bacula Client (Linux and Windows)

* Data Compression

* Configuration and Operation of disk storage, magnetic tape drives and autochangers

* Bacula Commands

* Full and partial data restoration

* Graphical Interfaces

* Bacula Server Disaster Recovery

* Run Before and Run After backup scripts

* Backup of specific applications: virtual machines, databases, mail servers, etc.

* Volume Copy and Migration

* File Deduplication

* Exercises

Everyone who works with backup or needs corporate data protection should take this training in order to learn more about the inner workings of this wonderful open source backup tool. It's also great for IT university students and teachers, since Bacula's code can be studied and modified.

What are the requirements?

  • 2 virtual or physical Linux machines for the Bacula Director and Client (at least one Debian 7.6 recommended) with 10 GB of free disk space
  • 1 virtual or physical Windows machine (a server edition is recommended) with 10 GB of free disk space

What am I going to get from this course?

  • Be ready to fully plan, install, configure and operate a Bacula backup system
  • Understand the most-used backup strategies
  • Be able to configure backups for several specific applications (databases, virtual machines etc.)

What is the target audience?

  • System Administrators
  • IT Technicians
  • IT Managers


Curriculum

Section 1: Theoretical Introduction
10:46

And on the first day, God made... Backup! =)

My website: http://www.bacula.us

Bacula full Manual: http://www.bacula.us/?page_id=1880

Be my friend at Facebook: https://www.facebook.com/heitor.faria

Remark 1: If you need professional help with Bacula, I'm looking for a permanent job in the USA or Europe. Please recommend me. =)

Remark 2: If you need in-company Bacula training or implementation anywhere in the world, send an email to heitor@bacula.com.br

05:56

Why having backup software is important, and will remain so for a long time.

13:54

Backup concepts for everyone. Notice that my English gets better over the next lessons!

05:12

Can you distinguish differential from incremental backups? I bet you won't forget. =)
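One way to keep the distinction straight is a typical Bacula Schedule. This is a hedged sketch (the name and times are illustrative): the Full copies everything, the Differential captures everything changed since the last Full, and the Incrementals capture only what changed since the last backup of any level:

```
Schedule {
  Name = "MonthlyCycle"                    # illustrative name
  Run = Full 1st sun at 23:05              # everything
  Run = Differential 2nd-5th sun at 23:05  # changes since the last Full
  Run = Incremental mon-sat at 23:05       # changes since the last backup of any level
}
```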

04:50

General recommendations so you don't get your blood sucked. Muahhhhh

09:42

A worldwide champion strategy for using less backup storage while maximizing backup data retention!

07:45

Was it good for you? For me it was great! Buy my course and be as passionate about Bacula as I am!

Theoretical Section
5 questions
Section 2: Installing Bacula
12:32

*These days it is more advisable to install from source code (Section 7) and take advantage of the latest Bacula features. Complementary guide: http://bacula.us/compilation/

Just for the record, I'm using Debian 7.6. You can perform the same installation on Red Hat-based systems using yum; the package names are basically the same.

If you still don't have a virtual machine, you can download a VirtualBox one from the complementary material.

20:34

The first time we never forget. =)

Installation
3 questions
Section 3: Bacula Configuration Overview
17:31

This is the main Bacula configuration file. It manages most of your backup system behavior. Just pay attention to its structure and the most important features.

bacula-dir.conf: FileSet, Schedule, Client, Storage
18:03
bacula-dir.conf: Catalog, Messages, Pool, APPLY CHANGES
14:05
16:02

Examples of how to work with disk backups.

WARNING! For different Virtual Disk Autochanger devices you must use a different Media Type for every different Archive Device (mount point).

bacula-fd.conf
05:11
03:37

The Bacula bconsole and BAT (GUI) desktop consoles can be installed on the Bacula Director, but it's much more elegant and secure to have them on your administrator workstation (Linux or Windows), so you don't have to SSH or open a remote terminal to your server.
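For reference, pointing a workstation bconsole at a remote Director only requires a Director resource in the local bconsole.conf. A sketch, with a hypothetical name, address and password:

```
# bconsole.conf on the administrator workstation (all values are examples)
Director {
  Name = debian-dir
  DIRport = 9101
  address = 192.168.0.120        # the Bacula Director host
  Password = "director_password" # must match the Director's console password
}
```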

Section 4: Installing Linux Clients
Installation
08:45
Backup Job Configuration and Running
08:46
Restore Job Running
02:45
Section 5: Installing Windows Clients
Installation
12:06
Backup Job Configuration and Running
04:37
Restore Job Running
02:34
12:16

This is important for Windows disaster recovery purposes if you don't do another type of OS backup (e.g. a full virtual machine image backup).

Section 6: Director Core Configurations
07:01

"The tape said it can fit 400 GB, but it hardly holds 300 GB! HP is stealing from me." - Silly boy.

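Part of that kind of gap is unit arithmetic: tape vendors quote decimal gigabytes (10^9 bytes), often with an assumed hardware-compression ratio on top, while most tools report binary GiB (2^30 bytes). A quick check of the decimal-vs-binary part alone:

```shell
# "400 GB" as marketed (decimal bytes) expressed in binary GiB
echo $(( 400 * 1000 * 1000 * 1000 / (1024 * 1024 * 1024) ))   # prints 372
```

So roughly 28 "GB" vanish before compression ratios or tape formatting overhead even enter the picture.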
11:33

This is the real deal. Remember that Volume Retention only starts counting once the volume is used or full.

18:24

Daily, Weekly and Monthly pool examples. But if you are smart, you can build any strategy you want with my punk rock lessons.

1. Some companies also use Yearly backups (one per year) if they need to keep backups for years.

2. And don't forget: don't mix volumes from different storages in the same pool!
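A hedged sketch of how the Daily/Weekly/Monthly split might look as Pool resources in bacula-dir.conf (names and retention values are illustrative; each pool gets its own volumes):

```
Pool {
  Name = Daily
  Pool Type = Backup
  Volume Retention = 7 days     # illustrative
  AutoPrune = yes
  Recycle = yes
}
Pool {
  Name = Weekly
  Pool Type = Backup
  Volume Retention = 35 days    # illustrative
  AutoPrune = yes
  Recycle = yes
}
Pool {
  Name = Monthly
  Pool Type = Backup
  Volume Retention = 365 days   # illustrative
  AutoPrune = yes
  Recycle = yes
}
```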

Volume Lifecycle: Schedule (GFS)
09:42
03:48

This option can be nasty: the vampire may be too hungry for blood!

05:52

File Retention: retention of file indexes in the Bacula database. Without that information, you can't restore just a few files from a whole backup job.

Job Retention: retention of the job index in the Bacula database. Without that, the only way to restore a job is to rebuild its indexes in the Bacula database using the bscan command, which can take some time and be painful.
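Both retentions live in the Client resource of bacula-dir.conf. A sketch with illustrative values (File Retention should not exceed Job Retention):

```
Client {
  Name = bacula-fd               # illustrative values throughout
  Address = 192.168.0.50
  Password = "client_password"
  File Retention = 60 days       # file indexes: fine-grained restores
  Job Retention = 180 days       # job indexes: whole-job restores
  AutoPrune = yes                # prune expired records automatically
  # (other required directives omitted)
}
```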

04:41

Are you a forever alone guy? Bacula will send you emails every night! =)

04:22

1. Edit postfix configuration file:

vi /etc/postfix/main.cf

2. Remove any existing occurrences of the following lines:

smtp_sasl_auth_enable = yes

and

relayhost =

3. Add the following lines:

smtp_sasl_auth_enable = yes

smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

smtp_sasl_security_options = noanonymous

relayhost = [smtp.gmail.com]:587

smtp_use_tls=yes

4. Save and exit.

5. Create and edit a new file:

vi /etc/postfix/sasl_passwd

6. Add this line, replacing the mail address and password with your mail credential:

[smtp.gmail.com]:587 user@gmail.com:mail_password

7. Run the commands:

postmap /etc/postfix/sasl_passwd

postfix reload

8. Run a job and wait for the email. You can check mail-sending error messages in /var/log/mail.log

Section 7: Bacula 7.x Compilation Install / Upgrade
03:04

*My website always has more up-to-date steps for deploying Bacula from source code than the ones below: http://bacula.us/compilation

===========================================================================

Direct link to the Bacula SourceForge project files: http://sourceforge.net/projects/bacula/files/

And/or just use the commands:

cd /usr/src

wget http://downloads.sourceforge.net/project/bacula/bacula/7.0.5/bacula-7.0.5.tar.gz

tar -xzvf bacula-7.0.5.tar.gz

cd ./bacula-7.0.5

02:39

Install:

apt-get install build-essential pkg-config libmysql++-dev libssl-dev libreadline6-dev

03:50

./configure --with-mysql --with-readline=/usr/include/readline --disable-conio --bindir=/usr/bin --sbindir=/usr/sbin --with-scriptdir=/etc/bacula/scripts --with-working-dir=/var/lib/bacula

01:47

apt-get remove bacula-director-mysql bacula-sd-mysql bacula-fd bacula-console

02:47

1. Just make a minor adjustment to the Makefile (only for 32-bit Debian):

vi /usr/src/bacula-7.0.5/src/cats/Makefile

2. Change the following line's value from empty (only for 32-bit Debian):

MYSQL_LIBS =

to:

MYSQL_LIBS = /usr/lib/i386-linux-gnu/libmysqlclient.a

3. Save and exit.

4. Proceed with compilation and installation (make -j8 speeds up compilation by using up to 8 processors in parallel; make install-autostart installs the scripts that automatically start the Bacula daemons at boot time):

make -j8 && make install && make install-autostart

5. Restart all Bacula daemons:

/etc/init.d/bacula-fd restart

/etc/init.d/bacula-sd restart

/etc/init.d/bacula-dir restart

6. Finish with a little permission fix:

chown -R bacula /var/lib/bacula/

7. Done! Rock with your new 7.0.x Bacula.

Section 8: Bacula Service Disaster Recovery
06:58

1. Disable the Bacula Backup Catalog job's RunAfter script (which normally deletes the dump), so you always have a dump of your database handy.

2. Run a Bacula Backup Catalog Job.

3. You can simulate a disaster by deleting your bacula database in the mysql (or psql) console:

mysql -u root -p(password)

drop database bacula;

4. To restore your dump, go to your database dump directory:

cd /var/lib/bacula

5. And then, in the mysql console (e.g.):

mysql -u root -p(password)

create database bacula;

use bacula;

\. bacula.sql

6. Your database should be restored.

06:38

1. You can simulate a disaster using the delete command on bconsole to delete some volumes.

2. bscan example (disk):

bscan -s -m -c /etc/bacula/bacula-sd.conf -v -V "daily-0|daily-1" /backup -u root -P (db_password)

3. bscan example (tape):

bscan -s -m -c /etc/bacula/bacula-sd.conf -v -V daily-0 /dev/nst0 -u root -P (db_password)

06:05

1. bls to list contents from a volume:

bls /backup/daily-1

2. bextract restoring everything from a volume to /tmp:

bextract -c /etc/bacula/bacula-sd.conf -V daily-0 /backup /tmp

3. Creating a specific list of files to restore (e. g.: Bacula Director essential files):

vi /tmp/list

Insert:

/etc/bacula/*.conf

*bacula.sql

4. bextract to restore only listed files on /tmp/list:

bextract -i /tmp/list -c /etc/bacula/bacula-sd.conf -V daily-1 /backup /tmp

Section 9: Bacula Core Commands
09:15

Status is one of the most important Bacula commands. You can use it to gather information about running or terminated backups in which the director, storage or client is involved.

You MUST also use status every time you configure a new Bacula client or storage, in order to test the connection between the director and the new daemon (network, password etc.). If everything is OK, you should receive the client or storage name, version and information about the jobs run.
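A sketch of typical status invocations from bconsole (the client and storage names are examples; use the ones from your own configuration):

```
# Inside bconsole
status dir                  # director: scheduled, running and terminated jobs
status client=bacula-fd     # tests the connection to a File Daemon
status storage=File1        # tests the connection to a Storage Daemon
```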

disable, enable, estimate
06:05
list, llist
07:02
label, label barcodes
04:37
restore
09:29
reload, status
05:55
show, update
05:06
shell redirection, @output, @input
05:06
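As a sketch of what these directives make possible, here is a console script (the output path is hypothetical) that could be fed to bconsole via @input or plain shell redirection:

```
@output /tmp/clients.txt
list clients
@output
```

Saved as a file, it could be run non-interactively with, e.g., bconsole < commands.txt.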
Section 10: Amazon S3 Off-site Mounted Backup (not recommended)
03:04

================================================================================

This is not the fastest way to do remote backups, because every mounting protocol has limitations. It's better to rent an Amazon EC2 machine (e.g.) and install a bacula-sd there for off-site backups.

Always try to use the newest versions of the software in this chapter.

================================================================================

CENTOS:

1. Remove any existing packages of the same software, since we are compiling them:

yum remove fuse fuse-s3fs

2. Install dependencies:

yum install gcc libstdc++-devel gcc-c++ curl-devel libxml2-devel openssl-devel mailcap

DEBIAN:

1. Remove any existing packages of the same software, since we are compiling them:

sudo apt-get remove fuse

2. Install dependencies:

sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev mime-support

================================================================================

*Source: http://tecadmin.net/mount-s3-bucket-centosrhel-ubuntu-using-s3fs/

02:00

1. Download, extract, configure and make:

cd /usr/src/

wget http://downloads.sourceforge.net/project/fuse/fuse-2.X/2.9.3/fuse-2.9.3.tar.gz

tar xzf fuse-2.9.3.tar.gz

cd fuse-2.9.3

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

./configure --prefix=/usr/local

make && make install

vim /etc/ld.so.conf

2. Add those lines:

/usr/local/lib
include ld.so.conf.d/*.conf

3. Load module:

ldconfig

modprobe fuse

01:15

1. Download, extract, configure and make (PLEASE USE THE MOST RECENT VERSION AVAILABLE FOR S3FS):

cd /usr/src/

wget https://s3fs.googlecode.com/files/s3fs-1.77.tar.gz

tar xzf s3fs-1.77.tar.gz

cd s3fs-1.77

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

./configure --prefix=/usr/local

make && make install

02:23

echo AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY > ~/.passwd-s3fs

chmod 600 ~/.passwd-s3fs

04:48

1. bacula-sd.conf sample:

Device {
Name = S3
Media Type = File
Archive Device = /mnt/bucket
LabelMedia = yes; # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
}

2. bacula-dir.conf sample:

Storage {
Name = StorageAmazon
# Do not use "localhost" here
Address = 192.168.0.120 # N.B. Use a fully qualified name here
SDPort = 9103
Password = storage_password
Device = S3
Media Type = File

}

04:20

1. Create paths:

mkdir /tmp/cache

mkdir /mnt/bucket

chmod 777 /tmp/cache /mnt/bucket

2. Mounting command (the mount point must match the Archive Device and the directory created above):

s3fs -o url=http://s3-us-west-2.amazonaws.com baculaheitor -o use_cache=/tmp/cache /mnt/bucket

Where baculaheitor is the bucket name and the URL is the address of your bucket (it can be fetched from the bucket's properties in the AWS Console).

3. It's a very good idea to test and debug by adding the -d -f options to the mounting command the first time.

4. To unmount:

fusermount -u /mnt/bucket

Section 11: 2nd Remote Storage Daemon for Off-Site Backups
07:19

I'll deploy a second Storage Daemon on a remote hosting provider's Linux box (Debian).

1. Deps:

apt-get install libmysql++-dev build-essential

(or postgresql-server-dev-9.3 if your Director is running on PostgreSQL)

2. Get the code:

cd /usr/src

wget http://downloads.sourceforge.net/project/bacula/bacula/7.0.5/bacula-7.0.5.tar.gz

tar -xzvf bacula-7.0.5.tar.gz

cd ./bacula-7.0.5

3. Configure:

./configure --enable-build-dird=no --enable-build-stored=yes --bindir=/usr/bin --sbindir=/usr/sbin --with-scriptdir=/etc/bacula/scripts --with-working-dir=/var/lib/bacula --with-logdir=/var/log --enable-smartalloc

4. Make & Install:

make -j8 && make install && make install-autostart

10:30

1. Edit bacula-sd.conf:

Director {

Name = debian-dir # change to your local director specifications

Password = "storage_password"

}


Autochanger {

Name = FileChgr1

Device = FileChgr1-Dev1, FileChgr1-Dev2

Changer Command = ""

Changer Device = /dev/null

}

Device {

Name = FileChgr1-Dev1

Media Type = File1

Archive Device = /mnt/backup

LabelMedia = yes; # lets Bacula label unlabeled media

Random Access = Yes;

AutomaticMount = yes; # when device opened, read it

RemovableMedia = no;

AlwaysOpen = no;

Maximum Concurrent Jobs = 5

}

Device {

Name = FileChgr1-Dev2

Media Type = File1

Archive Device = /mnt/backup

LabelMedia = yes; # lets Bacula label unlabeled media

Random Access = Yes;

AutomaticMount = yes; # when device opened, read it

RemovableMedia = no;

AlwaysOpen = no;

Maximum Concurrent Jobs = 5

}

2. On Shell:

mkdir /mnt/backup

chown bacula /mnt/backup

3. On local bacula-dir.conf:

Storage {

Name = Off-Site-Chicago-VPS

Address = 172.245.35.120

SDPort = 9103

Password = "storage_password"

Device = FileChgr1

Media Type = File1

Maximum Concurrent Jobs = 10

Autochanger = yes

}

03:33

1. Off-site pool sample (abridged):

Pool {

Name = Off-site-pool

Pool Type = Backup

Recycle = yes # Bacula can automatically recycle Volumes

AutoPrune = yes # Prune expired volumes

...

Label Format = "Vol-" # Auto label

Storage = Off-Site-Chicago-VPS

}

2. Add to your schedule as desired:

Schedule {

Name = "WeeklyCycle"

Run = Full Pool=Off-site-pool 1st sun at 23:05

}

3. Backup will be stored on the off-site storage daemon.

4. You can also use Copy or Migrate Jobs (topic ahead) to store backups on your off-site storage daemon.

Section 12: Copy and Migrate Jobs
03:52

Don't forget to create the new Archive Device directory and give Bacula permissions on it, if applicable:

mkdir /mnt/backup3

chown bacula /mnt/backup3

03:31

Just an example:

Pool { # Copy Source Pool

Name = File

Pool Type = Backup

Recycle = yes # Bacula can automatically recycle Volumes

AutoPrune = yes # Prune expired volumes

Volume Retention = 365 days # one year

Maximum Volume Bytes = 50G # Limit Volume size to something reasonable

Maximum Volumes = 100 # Limit number of Volumes in Pool

Label Format = "Vol-" # Auto label

Next Pool = Copy # Copy Destination Pool

Storage = File1 # Storage Source Pool

}

Pool {

Name = Copy

Pool Type = Backup

Recycle = yes # Bacula can automatically recycle Volumes

AutoPrune = yes # Prune expired volumes

Volume Retention = 365 days # one year

Maximum Volume Bytes = 50G # Limit Volume size to something reasonable

Maximum Volumes = 100 # Limit number of Volumes in Pool

Label Format = "${Pool}-${NumVols}" # Auto label

Storage = File2

}

08:09

Job {

Name = "BackupCopy"

JobDefs = "DefaultJob"

Type = Copy # or Migrate

Pool = File # Source Pool

Storage = File2 # Destination Storage

Selection Type = PoolUncopiedJobs

}

Migration and Copy Job Resource Directives*

The following directives can appear in a Director's Job resource, and they are used to define a Migration job.

Pool = Pool-name

The Pool specified in the Migration control Job is not a new directive for the Job resource, but it is particularly important because it determines what Pool will be examined for finding JobIds to migrate. The exception to this is when Selection Type = SQLQuery: although a Pool directive must still be specified, no Pool is used unless you specifically include it in the SQL query. Note that, in any case, the Pool resource defined by the Pool directive must contain a Next Pool = ... directive to define the Pool to which the data will be migrated.

Type = Migrate

Migrate is a new type that defines the job being run as a Migration Job. A Migration Job is a sort of control job and does not have any Files associated with it; in that sense it is more or less like an Admin job. Migration jobs simply check to see if there is anything to migrate, then possibly start and control new Backup jobs to migrate the data from the specified Pool to another Pool. Note that any original JobId that is migrated will be marked as having been migrated, and the original JobId can no longer be used for restores; all restores will be done from the new migrated Job.

Type = Copy

Copy is a new type that defines the job that is run as being a Copy Job. A Copy Job is a sort of control job and does not have any Files associated with it, and in that sense they are more or less like an Admin job. Copy jobs simply check to see if there is anything to Copy then possibly start and control new Backup jobs to copy the data from the specified Pool to another Pool. Note that when a copy is made, the original JobIds are left unchanged. The new copies can not be used for restoration unless you specifically choose them by JobId. If you subsequently delete a JobId that has a copy, the copy will be automatically upgraded to a Backup rather than a Copy, and it will subsequently be used for restoration.

Selection Type = Selection-type-keyword

The Selection-type-keyword determines how the migration job will go about selecting what JobIds to migrate. In most cases, it is used in conjunction with a Selection Pattern to give you fine control over exactly what JobIds are selected. The possible values for Selection-type-keyword are:

SmallestVolume

This selection keyword selects the volume with the fewest bytes from the Pool to be migrated. The Pool to be migrated is the Pool defined in the Migration Job resource. The migration control job will then start and run one migration backup job for each of the Jobs found on this Volume. The Selection Pattern, if specified, is not used.

OldestVolume

This selection keyword selects the volume with the oldest last write time in the Pool to be migrated. The Pool to be migrated is the Pool defined in the Migration Job resource. The migration control job will then start and run one migration backup job for each of the Jobs found on this Volume. The Selection Pattern, if specified, is not used.

Client

The Client selection type, first selects all the Clients that have been backed up in the Pool specified by the Migration Job resource, then it applies the Selection Pattern (defined below) as a regular expression to the list of Client names, giving a filtered Client name list. All jobs that were backed up for those filtered (regexed) Clients will be migrated. The migration control job will then start and run one migration backup job for each of the JobIds found for those filtered Clients.

Volume

The Volume selection type, first selects all the Volumes that have been backed up in the Pool specified by the Migration Job resource, then it applies the Selection Pattern (defined below) as a regular expression to the list of Volume names, giving a filtered Volume list. All JobIds that were backed up for those filtered (regexed) Volumes will be migrated. The migration control job will then start and run one migration backup job for each of the JobIds found on those filtered Volumes.

Job

The Job selection type first selects all the Jobs (as defined by the Name directive in a Job resource) that have been backed up in the Pool specified by the Migration Job resource, then applies the Selection Pattern (defined below) as a regular expression to the list of Job names, giving a filtered Job name list. All JobIds that were run for those filtered (regexed) Job names will be migrated. Note that for a given Job name there can be many jobs (JobIds) that ran. The migration control job will then start and run one migration backup job for each of the Jobs found.

SQLQuery

The SQLQuery selection type uses the Selection Pattern as an SQL query to obtain the JobIds to be migrated. The Selection Pattern must be a valid SELECT SQL statement for your SQL engine, and it must return the JobId as the first field of the SELECT.

PoolOccupancy

This selection type will cause the Migration job to compute the total size of the specified pool for all Media Types combined. If it exceeds the Migration High Bytes defined in the Pool, the Migration job will migrate all JobIds, beginning with the oldest Volume in the pool (determined by Last Write time), until the Pool bytes drop below the Migration Low Bytes defined in the Pool. This calculation should be considered rather approximate, because it is made once by the Migration job before migration begins, and thus does not take into account additional data written into the Pool during the migration. In addition, the calculation of the total Pool byte size is based on the Volume bytes saved in the Volume (Media) database entries, while the bytes calculated for Migration are based on the values stored in the Job records of the Jobs to be migrated; the latter do not include the Storage daemon overhead that is in the total Pool size. As a consequence, the migration will normally migrate more bytes than strictly necessary.

PoolTime

The PoolTime selection type will cause the Migration job to look at the time each JobId has been in the Pool since the job ended. All Jobs in the Pool longer than the time specified on Migration Time directive in the Pool resource will be migrated.

PoolUncopiedJobs

This selection type, which copies all jobs from one pool to another pool that were not copied before, is available only for Copy jobs.

Selection Pattern = Quoted-string

The Selection Patterns permitted for each Selection-type-keyword are described above.

For the OldestVolume and SmallestVolume, this Selection pattern is not used (ignored).

For the Client, Volume, and Job keywords, this pattern must be a valid regular expression that will filter the appropriate item names found in the Pool.

For the SQLQuery keyword, this pattern must be a valid SELECT SQL statement that returns JobIds.

Purge Migration Job = yes/no

This directive may be added to the Migration Job definition in the Director configuration file to purge the migrated job at the end of a migration.

Migration Pool Resource Directives

The following directives can appear in a Director's Pool resource, and they are used to define a Migration job.

Migration Time = time-specification

If a PoolTime migration is done, the time specified here in seconds (time modifiers are permitted - e.g. hours, ...) will be used. If the previous Backup Job or Jobs selected have been in the Pool longer than the specified PoolTime, then they will be migrated.

Migration High Bytes = byte-specification

This directive specifies the number of bytes in the Pool which will trigger a migration if a PoolOccupancy migration selection type has been specified. The fact that the Pool usage goes above this level does not automatically trigger a migration job. However, if a migration job runs and has the PoolOccupancy selection type set, the Migration High Bytes will be applied. Bacula does not currently restrict a pool to have only a single Media Type, so you must keep in mind that if you mix Media Types in a Pool, the results may not be what you want, as the Pool count of all bytes will be for all Media Types combined.

Migration Low Bytes = byte-specification

This directive specifies the number of bytes in the Pool which will stop a migration if a PoolOccupancy migration selection type has been specified and triggered by more than Migration High Bytes being in the pool. In other words, once a migration job is started with PoolOccupancy migration selection and it determines that there are more than Migration High Bytes, the migration job will continue to run jobs until the number of bytes in the Pool drop to or below Migration Low Bytes.

Next Pool = pool-specification

The Next Pool directive specifies the pool to which Jobs will be migrated. This directive is required to define the Pool into which the data will be migrated. Without this directive, the migration job will terminate in error.

Storage = storage-specification

The Storage directive specifies what Storage resource will be used for all Jobs that use this Pool. It takes precedence over any other Storage specifications that may have been given, such as in the Schedule Run directive or in the Job resource. We highly recommend that you define the Storage resource to be used in the Pool rather than elsewhere (job, schedule run, ...).

*Source: http://www.bacula.com.br/manual/Migration_Copy.html

Section 13: Tape Libraries Configuration
09:20

For tests, a nice Tape Library Emulator - mhvtl:

http://www.bacula.com.br/instalar-emulador-de-robos-de-fita-mhvtl/?lang=en

Printing new Barcode Labels to your tapes:

http://www.bacula.com.br/capitulo-9-capitulo-x-storage-em-fitas-magneticas/?lang=en


11:09

[Remark] If you want Bacula to spread writing across more than one tape, insert the following directive into all the drive stanzas; more than x simultaneous jobs will then force Bacula to load a new tape in the next drive:

Maximum Concurrent Jobs = x

bacula-sd sample:

Autochanger {

Name = RoboIBM

Device = Drive-1

Device = Drive-2

Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"

Changer Device = /dev/tape/by-id/scsi-SSTK_L700_XYZZY_A

}

Device {

Name = Drive-1 #

Drive Index = 0

Media Type = LTO

Archive Device = /dev/tape/by-id/scsi-350223344ab000100-nst

AutomaticMount = yes; # when device opened, read it

AlwaysOpen = yes;

RemovableMedia = yes;

RandomAccess = no;

AutoChanger = yes

}

Device {

Name = Drive-2 #

Drive Index = 1

Media Type = LTO

Archive Device = /dev/tape/by-id/scsi-350223344ab000200-nst

AutomaticMount = yes; # when device opened, read it

AlwaysOpen = yes;

RemovableMedia = yes;

RandomAccess = no;

AutoChanger = yes

}

08:48

bacula-dir sample:

Storage {

Name = MeuRobo

Address = 192.168.1.101 # N.B. Use a fully qualified name here

SDPort = 9103

Password = "3rcZXX2Sbjm9k/zMwN/v4+2VESdYjdmQwisX0fdb3Fpi"

Device = RoboIBM

Media Type = LTO

Maximum Concurrent Jobs = 10 # run up to 10 jobs at the same time

Autochanger = yes

}

Section 14: Bacula TLS Communication Encryption
Documentation
8 pages
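The TLS documentation in this section boils down to a small set of directives repeated in each daemon's resource. A hedged sketch with hypothetical certificate paths:

```
# In the Director resource (and mirrored in each FD/SD resource
# whose connections should be encrypted); paths are examples
TLS Enable = yes
TLS Require = yes
TLS CA Certificate File = /etc/bacula/certs/ca.pem
TLS Certificate = /etc/bacula/certs/bacula.pem
TLS Key = /etc/bacula/certs/bacula.key
```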
Section 15: Backup Data encryption
Documentation
3 pages
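Backup data (PKI) encryption, by contrast, is configured in the File Daemon rather than per connection. A hedged sketch with hypothetical key paths:

```
FileDaemon {
  Name = bacula-fd
  ...
  PKI Signatures = yes                 # sign backup data
  PKI Encryption = yes                 # encrypt backup data
  PKI Keypair = "/etc/bacula/fd-keypair.pem"        # this client's key (example path)
  PKI Master Key = "/etc/bacula/master-public.pem"  # emergency recovery key (example path)
}
```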


Instructor Biography

Heitor Medrado De Faria, Bacula Backup Software Book Author and Teacher

Author of the books "Bacula: Open Source Backup Tool" and "Open Licenses & Fundamental Rights" (both in Portuguese). Bacula training instructor at Udemy. Law graduate, with extension degrees in IT Service Management and Project Management. Works at SERPRO, a Brazilian governmental company, and for Neocode Software (Canada). Founder of the Brazilian Bacula community. Has plenty of experience as a server/backup systems administrator (Windows, Linux, NetWare, directory services) and as an IT/project manager. ITIL-F, TOEIC (Golden Certificate) and LPIC-III certified professional.
