Affa


Automated Remote Disk Archiver for SME 7 Server

Maintainer

Michael Weinberger

Affa was contributed on Thu Apr 05, 2007


Description

The main purpose of this Affa package is to turn an SME 7 server into a dedicated backup box in a few minutes. Affa backs up as many SME servers as you like, or any other servers that have sshd running and rsync installed. Once configured, Affa runs reliably and unattended and sends warning messages in case of errors.

All backup archives are full backups. Affa makes use of the hardlink technique, so a new full backup only needs disk space for the differences plus the filesystem overhead for the hardlinks and directories (typically 2-3%).

Affa is based on the rsync program and supports the rsync --compress option. This allows you to run backups over the internet or a VPN. A typical setup is one or more Affa backup servers placed in different locations, which back up the production server(s) over the VPN.

A special feature is the rise option, which allows you to rise the backup server to become your production server from a backup archive in case of a total loss of your production server. The rise is executed within an extremely short time, even with huge amounts of data. The rise feature uses hardlinks and therefore does not use up additional disk space.

Affa is a command line tool for system administrators and is intentionally designed without a GUI. It can therefore be managed efficiently on the console and over slow internet connections.

Affa features at a glance

  • Makes full backups on every scheduled run
  • Keeps a configurable number of scheduled, daily, weekly, monthly and yearly full backup archives
  • Using rsync with optional compression for low traffic allows backups over the internet/VPN
  • Uses hardlink technique, i.e. physical disk space only needed for the differences between two full backups
  • Backup jobs are started by the cron daemon
  • Backs up the default e-smith directories/files when the property SMEServer is set to yes
  • Additional directories/files can be included
  • Directories/files can be excluded from the backup
  • Non-SME Linux servers can be backed up by setting the SMEServer property to no and using an include list
  • Configurable nice level for rsync processes on the backup and source server
  • Optional run of custom programs before and after a job run (e.g. running tape backup)
  • Checks the disk space left after a job run with warning levels strict, normal or risky
  • Extensive checking of failure conditions
  • Sends failure messages to a configurable list of email addresses
  • Sends a warning message if the backup server runs out of disk space
  • Installs an optional watchdog on the source server in case the backup server fails
  • Watchdog sends a warning if an expected backup did not run
  • Watchdog sends a daily reminder message, if the error continues unchecked
  • Option to display current status of all jobs showing times of last and next run, size and disk usage
  • Status can be mailed on a daily, weekly or monthly schedule
  • Option to display all existing archives of a job showing date, size, number of files and disk usage
  • Option to send the public key to the source server
  • Option to rise the backup server to a production server from a backup. For SME 7 only.
  • The rise feature does not physically move data and therefore is extremely fast and needs (almost) no disk space
  • Rise option can be run remotely as the ethernet drivers of the backup server are preserved
  • Compares installed RPMs on source with backup server. Sends warning message, if not in sync
  • Undo rise option to restore the backup server
  • Configurable via an e-smith style db, with one record for each job and a default record for all jobs
  • Logs in /var/log/affa with optional debug switch for more verbosity
  • Log files are rotated weekly, with 5 logs kept

Download and Installation

Download the smeserver-affa package from one of the SME Server contrib mirrors.
Download the perl-Filesys-DiskSpace package from DAG or from one of the mirrors above.
Install the RPMs.


Quick start example

You have an SME 7 production server with hostname 'prodbox' and IP 10.200.48.1.
Set up a second SME 7 box as your backup server with hostname 'affabox' and IP 10.200.48.2.

  1. log into the 'affabox' and install the packages as described above.
  2. copy the config helper script sample
    # cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl
  3. edit /root/prodbox-job.pl and set (a sketch of the resulting script follows this list)
    my $jobname='prodbox';
    and
    'remoteHostName'=>'10.200.48.1',
  4. write the configuration (this makes the database entries and sets up the cronjobs)
    # /root/prodbox-job.pl
  5. generate the DSA keys and send the public key to the 'prodbox'
    # affa --send-keys prodbox
  6. run the job manually
    # affa --run prodbox
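
For orientation, the property part of the edited /root/prodbox-job.pl might look roughly like the sketch below. Only $jobname and remoteHostName come from this quick start; the shipped jobconfig-sample.pl contains further properties plus the code that writes them into the affa database, which is not shown here:
    my $jobname='prodbox';
    ...
    'remoteHostName'=>'10.200.48.1',
    ...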

Configuration

The configuration is stored in an e-smith style database. Use the db command to configure Affa. The job name is the record key, with the record type 'job'.
To set up a new job with the name 'prodbox' enter:
# db affa set prodbox job
then set the properties
# db affa setprop prodbox remoteHostName 192.168.1.1
# db affa setprop prodbox TimeSchedule '0030,0730,1130,1330,1730,2030'
# db affa setprop prodbox Description 'My Production Server'
# db affa setprop prodbox status enabled
and so on...

Alternatively you can use a script as described above in the 'Quick start' chapter.

To verify your work, type:
# db affa show prodbox
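The output should then look roughly like this (a sketch based on the properties set above; the exact list and order of properties may differ):
prodbox=job
    Description=My Production Server
    TimeSchedule=0030,0730,1130,1330,1730,2030
    remoteHostName=192.168.1.1
    status=enabled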

Finally set up the cronjobs:
# affa --make-cronjobs


Job configuration properties

Each job record supports the following properties; the expected value is given in parentheses:

  • remoteHostName (FQHN or IP): the source host to back up
  • TimeSchedule (HHMM,HHMM,...): times do not need to be ordered; at least one time is mandatory
  • Description (text string): free-text description of the job
  • scheduledKeep (integer >= 2): how many of the scheduled backups should be kept
  • dailyKeep, weeklyKeep, monthlyKeep, yearlyKeep (integer >= 1): how many of the daily, weekly, monthly or yearly backups should be kept
  • SMEServer (yes or no): when set to yes, the default e-smith directories are automatically included and the properties RPMCheck and Watchdog can be used
  • Include[0], Include[1], ... (full path): additional files or directories to include
  • Exclude[0], Exclude[1], ... (full path): files or directories to exclude from the backup
  • RPMCheck (yes or no): compares the packages installed on the source host with this Affa backup host and sends a message with a diff list if they are not in sync. This check is useful if you want to keep the option to rise the backup server to a production server from a backup.
  • DiskSpaceWarn (strict, normal, risky or none): runs a disk space check after a job has completed. With level 'strict' a warning message is sent if the available space is less than the size of the just completed backup. With level 'normal'/'risky' the message is sent if less than 50%/10% of the backup size is still available.
  • localNice (-19...+19): run the local rsync process niced
  • remoteNice (-19...+19): run the rsync process on the source niced
  • Watchdog (yes or no): when a job is started, Affa installs a watchdog script on the source in /etc/cron.d/, which sends a warning message if the next scheduled job (taken from the TimeSchedule property plus 10 minutes) did not run. This guarantees that you are notified even in case of an Affa server outage. The watchdog script sends a daily reminder message if the error continues. The next job run replaces the watchdog script with a new trigger time.
  • sshPort (service port): default is 22. When sshd on the source host or your firewall listens on a non-standard port, set the port here.
  • ConnectionCheckTimeout (seconds): before rsync is started on the remote source host, Affa checks the ssh connection and exits with an error after the configured time if the host does not respond.
  • rsyncTimeout (seconds): rsync exits if no data is transferred for the configured time. This avoids hanging indefinitely in case of a network error.
  • rsyncCompress (yes or no): compress the transferred data. May be useful with slow internet connections. Increases CPU load on the source and backup host.
  • EmailAddresses (name@domain.com,name@domain.com,...): comma-separated list of mail addresses the messages should be sent to. Note: by default Affa only sends messages on errors, never on success (see property chattyOnSuccess).
  • chattyOnSuccess (integer >= 0): when set to a value greater than 0, Affa sends a message on a successfully completed job run and decrements the chattyOnSuccess value. When the value has reached zero, Affa falls back to the default and only sends messages on errors.
  • AutomountDevice, AutomountPoint (full path): device and mount point of a backup device (e.g. a USB disk). The device is automounted before a job starts and unmounted after job completion. With both properties empty no automount is done.
  • preJobCommand, postJobCommand (full path): programs (local on the Affa server) to be executed before/after a job run. The job name and type (scheduled, daily etc.) are passed as arguments to the program. The exit code is additionally passed to the post job command program. See /usr/lib/affa/ for sample Perl scripts.
  • RootDir (full path): where to store the backup archives. Do not use /home/e-smith or /root, as these are included in the backup and the rise option would then not work! Recommended: /var/affa
  • Debug (yes or no): set to yes to increase log verbosity
  • status (enabled or disabled): when set to disabled, no cron entries are made. You can still run a job manually.
  • rsync--inplace (yes or no): set to no if the rsync version on the source does not support this option (like rsync on SME 6)
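
For example, to include an extra directory in the backup and exclude a subdirectory of it, you could set indexed Include/Exclude properties with db setprop (the paths are only illustrative; quoting the key keeps the shell from interpreting the brackets):
# db affa setprop prodbox 'Include[0]' /opt/mydata
# db affa setprop prodbox 'Exclude[0]' /opt/mydata/tmp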


Default configuration properties

All properties can be set as defaults in the DefaultAffaConfig record. This is useful, when you set up many similar jobs.
Example: You want to set the property 'localNice' to 19 for all jobs. Then run
# db affa setprop DefaultAffaConfig localNice 19
and don't set this property for the jobs.
Properties set in a job record override the defaults.

The special property 'sendStatus' is only applicable to the DefaultAffaConfig record. It controls the status report sent by mail and can be set to the values none, daily, weekly or monthly. To set up a weekly status report run:
# db affa setprop DefaultAffaConfig sendStatus weekly
then set up the cronjob:
# affa --make-cronjobs


Global disable

All jobs can be disabled by setting the AffaGlobalDisable record to 'yes'.
# db affa set AffaGlobalDisable yes
# affa --make-cronjobs

To re-enable, run:
# db affa set AffaGlobalDisable no
# affa --make-cronjobs


Usage and command line options

affa --run JOB
Starts a job run. Usually done by the cronjob.

affa --make-cronjobs
Configures the cronjobs as scheduled in the job records.

affa --send-keys JOB
Sends the public key to the host 'remoteHostName' as configured in the record of job JOB. Generates the DSA key, if not already done.

affa --send-keys --host=TARGETHOST [--port=PORT]
Sends the public key to the TARGETHOST. TARGETHOST is a FQHN or an IP address. Give PORT, if sshd on the TARGETHOST listens on a port other than the standard port 22. Generates the DSA key, if not already done.

affa --full-restore JOB [ARCHIVE]
Does a full restore from the backup ARCHIVE to the remote source server as defined in the JOB record. If ARCHIVE is not given, the archive 'scheduled.0' is used as the default. The full restore reconstructs the server as it was at the time of the backup. After the restore the source host reboots.

affa --rise JOB [ARCHIVE]
Runs a full restore on the Affa server itself (!) from the backup ARCHIVE of job JOB. In other words: after completion, the Affa box reboots as a clone of the source server. Ensure that the source server has been powered off before you reboot the Affa box, otherwise the network interface will not come up. This is important when you run --rise remotely. The --rise feature only works with SME 7 servers.

affa --undo-rise
This feature reverts a risen Affa box to a backup server. After a reboot, all configured jobs will work again.

affa --list-archives [--csv] JOB
Displays a table of all present archives of job JOB with date, number of files, size and disk usage. See the chapter 'Restore' for an output example. With --csv, the output is in machine-readable colon-separated format.

affa --status [--csv]
Displays a table of all configured jobs with their enabled status, time of last and next run, size, disk usage and the number of scheduled (s), daily (d), weekly (w), monthly (m) and yearly (y) archives. The 'Last' column shows 'failed' if a job did not run in the last 24 h. For disabled jobs 'Last' therefore always shows 'failed' after 24 h; to see the date and time of the last run of those jobs use the --list-archives option. The 'Next' column shows the time when the next run will be started, if 'Enabled' is 'yes'. The 'Disk usage' column shows the usage of the partition on which the RootDir of the job is located. If all jobs are located in the same RootDir, identical disk usage is shown for all jobs. If RootDir is /var/affa, the usage of the SME server system partition is shown.

Affa version 0.4.2 on backup.mydomain.de (10.204.48.2)
+------------+---------+--------+-------+--------+------------+----------------+
| Job        | Enabled | Last   | Next  |   Size | Disk usage | N of s,d,w,m,y |
+------------+---------+--------+-------+--------+------------+----------------+
| bookkeep   | yes     | 00:02  | 23:30 |  7.9GB |  491GB/37% | 2,7,4,1,0      |
| crm        | no      | failed | 07:20 |   47MB |  491GB/37% | 7,7,4,1,0      |
| fespdc     | yes     | 18:46  | 21:45 |   34GB |  491GB/37% | 6,7,4,1,0      |
| helpdesk   | yes     | 19:40  | 07:40 |   68MB |  491GB/37% | 7,7,4,1,0      |
| imageserv  | yes     | 23:08  | 23:00 |   17GB |  491GB/37% | 2,7,1,0,0      |
| intraweb   | yes     | 19:31  | 23:30 |  1.4GB |  491GB/37% | 7,7,4,1,0      |
| pdcaus2    | yes     | 12:15  | 23:00 |  4.9GB |  491GB/37% | 2,7,4,1,0      |
| persoff    | yes     | running (pid 13229)     |  491GB/37% | 2,7,4,1,0      |
| primmail   | yes     | 19:09  | 23:00 |   43GB |  491GB/37% | 7,7,4,1,0      |
| rayofhope  | yes     | 23:01  | 22:30 |   19GB |  491GB/37% | 2,7,4,0,0      |
| sozserv    | yes     | 22:30  | 22:30 |  8.0GB |  491GB/37% | 2,7,4,1,0      |
+------------+---------+--------+-------+--------+------------+----------------+

With --csv, the output is in machine readable colon separated format.


affa --send-status
Sends the status table to the email addresses configured in the 'DefaultAffaConfig' record. Used by the cronjob 'affa-status'.

affa --mailtest JOB
Sends a test email to the email addresses configured in the JOB record. Use this to verify that your mail processing is functional.
Note: By default Affa only sends messages on errors, never on success (see property chattyOnSuccess).

affa --cleanup JOB
After you have lowered a keep value, e.g. scheduledKeep, archives with higher indices will no longer be shifted and would otherwise exist forever. This option finds these archives and deletes them.
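For example, assuming the job 'prodbox' from the examples above and an illustrative new keep value of 4:
# db affa setprop prodbox scheduledKeep 4
# affa --cleanup prodbox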

affa --rename-job JOB NEWNAME
Renames the job JOB to NEWNAME including all database records and archive directories.

affa --move-archive JOB NEWROOTDIR
Moves the archive directory of job JOB to NEWROOTDIR and adjusts the RootDir property. NEWROOTDIR must be a full path starting with a slash. As moving across filesystems (e.g. from an external USB drive to the local disk) is not possible, Affa uses a copy command in this case and deletes the source directory afterwards. Depending on the archive size, copying across filesystems can take a long time.

affa --delete-job JOB
Irreversibly deletes a job including all archives, configuration and report databases.

Note: Options can be abbreviated to uniqueness, e.g. --mak is equal to --make-cronjobs

Example setups

[Todo] Standard

Dedicated Affa server to backup all production servers
...

[Todo] Local Affa server plus a Affa server in remote location

  1. Standard setup
    ...
  2. Chained setup
    ...

Backup single ibays

Suppose you want to back up the ibays 'staff1' and 'staff2' on your production server with WAN IP 82.123.1.1 to an Affa server in a different location over the internet every night at 2:30am.

  1. log into the Affa server and install the packages as described above.
  2. copy the config helper script sample
    # cp /usr/lib/affa/jobconfig-sample.pl /root/ibay-staff-job.pl
  3. edit /root/ibay-staff-job.pl and set
    my $jobname='ibay-staff';
    and
    'remoteHostName'=>'82.123.1.1',
    'TimeSchedule'=>'0230',
    'SMEServer'=>'no',
    'Include[0]'=>'/home/e-smith/files/ibays/staff1',
    'Include[1]'=>'/home/e-smith/files/ibays/staff2',
  4. write the configuration
    # /root/ibay-staff-job.pl
  5. send the public key to the production server
    # affa --send-keys ibay-staff
  6. check next morning
    # affa --list-archives ibay-staff
    # affa --status
    # ls /var/affa/ibay-staff

[Todo] Two production servers backup each other

...

Use Affa to backup to a NFS-mounted NAS or a local attached USB drive

You want to back up your SME 7 production server with hostname 'prodbox' and IP 10.200.48.1 to a mounted filesystem instead of setting up a dedicated Affa box.

Setup NAS

You have a FreeNAS box with IP 10.200.48.2 up and running with NFS service enabled for your network 10.200.48.0/22. The RAID array is mounted to /mnt/affashare.

  1. log into the 'prodbox' and install the NFS packages
    # yum --enablerepo=base install nfs-utils
    You don't need to signal the post-upgrade or reboot events.
  2. mount the NFS share
    # mkdir -p /mnt/affadevice
    # mount 10.200.48.2:/mnt/affashare /mnt/affadevice
Alternatively setup a USB drive
  1. log into the 'prodbox'
  2. Connect a USB hard disk to the USB bus. Now you must determine which device name the kernel has assigned to the drive. View /var/log/messages and search for 'Initializing USB Mass Storage driver'. A few lines below you'll find the name of the device. In this example it is sdc. Replace /dev/sdc with your device in the following instructions.
  3. Use the fdisk program to create a linux partition
    # fdisk /dev/sdc
    You'll most likely find an existing vfat DOS partition, which you have to delete first. In the following we assume that you have created a single partition /dev/sdc1.
  4. Now format the drive with an ext3 filesystem
    # mkfs.ext3 /dev/sdc1
  5. Make the mount point
    # mkdir -p /mnt/affadevice
  6. Add the following line to /etc/fstab
    /dev/sdc1 /mnt/affadevice ext3 defaults 0 0
  7. Mount the drive
    # mount /mnt/affadevice
  8. Crosscheck your work using the df command
    # df
Setup Affa

You want to run backups at 11:30 h, 15:30 h and 19:30 h, and you want to keep the last 3 scheduled backups, 7 daily, 5 weekly, 12 monthly and 1 yearly backup.

  1. log into the 'prodbox' and install the Affa packages as described above.
  2. copy the config helper script sample
    # cp /usr/lib/affa/jobconfig-sample.pl /root/prodbox-job.pl
  3. edit /root/prodbox-job.pl and set
    my $jobname='prodbox';
    and
    'remoteHostName'=>'localhost',
    'TimeSchedule'=>'1130,1530,1930',
    'scheduledKeep'=>3,
    'dailyKeep'=>7,
    'weeklyKeep'=>5,
    'monthlyKeep'=>12,
    'yearlyKeep'=>1,
    'RootDir'=>'/mnt/affadevice',
    Review the other properties and change them to your needs.
  4. write the configuration
    # /root/prodbox-job.pl
  5. run the job manually
    # affa --run prodbox
Limitations

With this kind of setup you cannot use the Affa rise feature, as it requires the backup archive to be located on the same filesystem as the server installation. The rise option uses hardlinks, which do not work across filesystems.

Automount

Having the backup archives permanently mounted is always a risk. This can be minimized by using the automount feature, so that the external filesystem is only mounted during a job run.

In the NAS example set
'AutomountDevice'=>'10.200.48.2:/mnt/affashare',
'AutomountPoint'=>'/mnt/affadevice',
and skip step 2.

In the USB drive example set
'AutomountDevice'=>'/dev/sdc1',
'AutomountPoint'=>'/mnt/affadevice',
and skip steps 5 to 8.

The mount point will be automatically created, if it does not exist.
For access to the archive directory, you need to mount it manually.
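For example, in the NAS setup above you could inspect the archives of the job 'prodbox' (expected under RootDir/jobname) like this, and unmount the share afterwards:
# mount 10.200.48.2:/mnt/affashare /mnt/affadevice
# ls /mnt/affadevice/prodbox
# umount /mnt/affadevice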

Restore

Restore single files or directories

Example 1: It's May 9th, 20:00 h, when user 'briedlin' asks you to restore the messages of his mailbox 'orders', which he inadvertently deleted today at 14:00 h.

1. First check which backup archives are available. The job name of this server's backup is 'prodserv'.

# affa --list-archives prodserv

Affa version 0.4.2 on affa1.mydomain.de (10.204.48.2)
+-----------------------------------------------------------------------------+
| Job: prodserv                                                               |
| Description: File- and Mailserver Frankfurt 2                               |
| Directory: /var/affa/prodserv/                                              |
| Hostname: 10.204.48.1                                                       |
+-----------------------+----------------+--------------+--------+------------+
| Date                  | Archive        |        Files |   Size | Disk usage |
+-----------------------+----------------+--------------+--------+------------+
| Sun 2007 Apr 01 04:06 | monthly.0      |       410510 |   40GB |  407GB/31% |
+-----------------------+----------------+--------------+--------+------------+
| Sun 2007 Apr 08 04:05 | weekly.3       |       410670 |   40GB |  408GB/31% |
| Sun 2007 Apr 15 04:06 | weekly.2       |       410595 |   40GB |  429GB/32% |
| Sun 2007 Apr 22 04:07 | weekly.1       |       415987 |   40GB |  514GB/39% |
| Sun 2007 Apr 29 04:07 | weekly.0       |       428916 |   41GB |  554GB/42% |
+-----------------------+----------------+--------------+--------+------------+
| Wed 2007 May 02 04:07 | daily.6        |       434362 |   41GB |  562GB/43% |
| Thu 2007 May 03 04:07 | daily.5        |       430567 |   42GB |  563GB/43% |
| Fri 2007 May 04 04:08 | daily.4        |       433874 |   42GB |  562GB/43% |
| Sat 2007 May 05 04:08 | daily.3        |       435321 |   42GB |  550GB/42% |
| Sun 2007 May 06 04:08 | daily.2        |       435977 |   42GB |  534GB/41% |
| Mon 2007 May 07 04:05 | daily.1        |       435952 |   42GB |  522GB/40% |
| Tue 2007 May 08 04:08 | daily.0        |       434987 |   42GB |  526GB/40% |
+-----------------------+----------------+--------------+--------+------------+
| Wed 2007 May 09 23:08 | scheduled.6    |       435783 |   42GB |  517GB/39% |
| Wed 2007 May 09 04:08 | scheduled.5    |       436505 |   42GB |  517GB/39% |
| Wed 2007 May 09 07:04 | scheduled.4    |       436531 |   42GB |  517GB/39% |
| Wed 2007 May 09 10:09 | scheduled.3    |       436097 |   42GB |  518GB/39% |
| Wed 2007 May 09 13:13 | scheduled.2    |       436447 |   42GB |  518GB/39% |
| Wed 2007 May 09 16:19 | scheduled.1    |       436684 |   43GB |  518GB/39% |
| Wed 2007 May 09 19:10 | scheduled.0    |       437318 |   43GB |  519GB/39% |
+-----------------------+----------------+--------------+--------+------------+

2. Choose the scheduled.2 archive, which was created less than an hour before the accident.

3. Restore the mailbox 'orders' using the rsync command:

# export RDIR=/home/e-smith/users/briedlin/Maildir/.orders/
Note the leading slash!

# rsync -av --numeric-ids -e /usr/bin/ssh /var/affa/prodserv/scheduled.2/$RDIR 10.204.48.1:$RDIR


Example 2: A user has deleted the file orderform.pdf from the ibay 'docs' and asks you to restore it.

1. You searched and found the latest version of this file in archive weekly.1

2. Copy it back to the server:

# export RFILE=/home/e-smith/files/ibays/docs/files/orderform.pdf

# scp /var/affa/prodserv/weekly.1/$RFILE 10.204.48.1:$RFILE

Full restore

To run a full restore of user and configuration data run on the Affa server
# affa --full-restore <JOB> [<ARCHIVE>]
This rsyncs the data from the backup ARCHIVE back to the 'remoteHostName' defined in the configuration of the job JOB.

Example:
You have backed up your production server 'prodsrv' as job 'prodbox'. To restore from the latest backup run
# affa --full-restore prodbox

To restore from the older archive daily.2 run
# affa --full-restore prodbox daily.2

Important note: A full restore reconstructs the server as it was at the time of the backup. That means files created or configuration changes made after that backup run will be lost. After the restore is done, the restored server reboots automatically.

Moving a SME 7 server installation to a new hardware using the Affa rise feature

The following example describes a method to move a production server to a new hardware with a minimized down time.

You have an SME 7 production server with hostname 'prodbox' running and want to move it to new hardware.

  1. Connect the new hardware to your local network, install SME Server and all the contrib packages you have installed on the 'prodbox'. You can assign any unused IP address and hostname, as these are only used temporarily. Ensure that both servers have the identical SME version installed. Run 'yum update' on both servers, if necessary.
  2. Install and configure Affa (on the new hardware) as described in the examples above and configure a job 'prodbox' to back up your 'prodbox' using the 'jobconfig-sample.pl' file.
    It is important to set these properties:
    'SMEServer'=>'yes',
    'RPMCheck'=>'yes',
    'status'=>'disabled',
  3. Now run the job 'prodbox' manually, while your users can still work on the server 'prodbox'
    # affa --run prodbox
    The run may take a long time, depending on the size of the backup.
  4. When the run has finished, check the file /var/affa/prodbox/rpms-missing.txt. As the filename indicates, all RPMs installed on 'prodbox' but not on this Affa server are listed there. Install the missing RPMs.
  5. Ask all your users to log off. To ensure that no data will be modified from now on, you may want to stop a couple of services on the 'prodbox', e.g. qpsmtpd, qmail, crond (because of fetchmail), smb, atalk, httpd. Don't stop mysqld, as this service is required by mysqldump in the pre-backup event.
  6. Run the job again
    # affa --run prodbox
    This run should complete quickly, as only the differences compared to the last run are backed up.
  7. When this final run has finished, power down the 'prodbox' (old hardware) and rise the Affa box (new hardware) to a 'prodbox' clone.
    # affa --rise prodbox
  8. Reboot the server. Your users can now log on again.

With this method you should be able to move even a typical 50 GB server to new hardware with a downtime of less than 20 minutes. The rise time does not really depend on the total file size, but on the number of files and directories.


Files

/etc/e-smith/events/actions/affa-make-cronjobs
/etc/e-smith/events/post-upgrade/S90affa-make-cronjobs
/etc/e-smith/templates/etc/cron.d/affa-status/00run
/etc/e-smith/templates/etc/cron.d/affa/00jobs
/etc/logrotate.d/affa
/sbin/e-smith/affa
/sbin/e-smith/affa-rpmlist.sh
/usr/lib/affa/jobconfig-sample.pl
/usr/lib/affa/postJobCommand-sample.pl
/usr/lib/affa/preJobCommand-sample.pl
/usr/lib/affa/watchdog.template
/usr/man/man1/affa.1.gz

Additional information

Backing up an old SME 6 server

To back up an SME 6 server, set the property 'rsync--inplace' to 'no' and install the perl-TimeDate package on the SME 6 box. The perl-TimeDate package is needed by the watchdog script running on the SME 6 server. Use the RPM from DAG: perl-TimeDate-1.16-0.rh73.dag.noarch.rpm
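
A short sketch of the two steps, assuming a job named 'sme6box' (the job name is only an example) and the DAG RPM already downloaded to the SME 6 box:

On the Affa server:
# db affa setprop sme6box rsync--inplace no

On the SME 6 box:
# rpm -Uvh perl-TimeDate-1.16-0.rh73.dag.noarch.rpm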

Performance

It is hard to predict how much time a backup job needs to complete. It depends on the number of files, the total file size, the file changes since the last run, the network speed and not least on the CPU power, disk speed and RAM of the source and backup server. The following measured values give you an idea of what to expect.

  • Backup server: 2x3.2GHz Xeon, 2 GB RAM, 1.5 TB RAID6
    Source server: 2x3.2GHz Dual Core Xeon, 4 GB RAM, RAID5, SME 7.1
    Data on source server: Intranet Web Server + MySQL, 1.4 GB, 12,000 files
    Transferred data: 300 MB, 16 files
    Connection: Internet 2 Mbit, compression: yes
    Affa run time: 2 minutes

  • Backup server: 2x3.2GHz Xeon, 2 GB RAM, 1.5 TB RAID6
    Source server: 2x3.2GHz Dual Core Xeon, 4 GB RAM, RAID5
    Data on source server: SME 7.1 Mailserver, 43 GB, 410,000 files
    Transferred data: 140 MB, 2,700 files
    Connection: Internet 2 Mbit, compression: yes
    Affa run time: 10 minutes

  • Backup server: 2x2GHz Dual Core Xeon 5130, 6 GB RAM, 1 TB RAID 5
    Source server: 2x2GHz Dual Core Xeon 5130, 6 GB RAM, 1 TB RAID 5
    Data on source server: SME 7.1 File- and Mailserver, 125 GB, 98,000 files
    Transferred data: 3.2 GB, 3,000 files
    Connection: Gbit LAN, compression: no
    Affa run time: 25 minutes

  • Backup server: 2x2GHz Dual Core Xeon 5130, 6 GB RAM, 874 GB RAID 5
    Source server: 2x2GHz Dual Core Xeon 5130, 6 GB RAM, 1 TB RAID 5
    Data on source server: SME 7.1 File- and Mailserver, 125 GB, 98,000 files
    Transferred data: 3.5 GB, 2,000 files
    Connection: Internet 2 Mbit, compression: yes
    Affa run time: 17 minutes

  • Backup server: 2x800MHz Pentium 3, 1 GB RAM, 300 GB RAID1
    Source server: 2x2.8GHz Xeon, 1 GB RAM, 140 GB RAID5
    Data on source server: SME 7.1 File- and Mailserver, 39 GB, 370,000 files
    Transferred data: 12 GB, 4,000 files
    Connection: 100Mbit LAN, compression: no
    Affa run time: 52 minutes

  • Backup server: 1xP4 2.4GHz, 256 MB RAM, SME 7.1
    Source server: 2xP4 1GHz, 1 GB RAM, SME 6
    Data on source server: 7.4 GB, 134,790 files
    Transferred data: 7.4 GB, 134,790 files
    Connection: 100Mbit LAN, compression: no
    Affa run time: 35 minutes

Note: The last action of a job run is to remove the oldest backup, e.g. if archive scheduled.11 exists and you have set the scheduledKeep property to 12, it must be deleted. This can take quite a long time, which increases the total job execution time.

Changelog

* Tue May 29 2007 Michael Weinberger
  Version 0.4.3
- Minor bugfixes:
  calculation of RootDirFilesystemUsage in .AFFA-REPORT
  improved error handling with rsync status and df in DiskspaceWarn()


* Tue May 29 2007 Michael Weinberger
- Version 0.4.2
  Infinite loop in execPostJobCommand() if command could not be executed:
  Don't call execPostJobCommand() in affaErrorExit() if err==115
- RPMCheck property was ignored
- added Property chattyOnSuccess
- modified jobconfig-sample.pl to preserve 'doneDaily','doneWeekly','doneMonthly','doneYearly' and 'chattyOnSuccess'
- write error codes of affaErrorExit() to log


* Thu May 24 2007 Michael Weinberger
- Version 0.4.1
- fixed bug in disk usage calculation

* Mon May 21 2007 Michael Weinberger
- Version 0.4.0
- added option --rename-job
- changed syntax of --send-keys (!)
- added option --move-archive


* Sun May 20 2007 Michael Weinberger
- Version 0.3.3
- rebuild cronjobs after rise/undorise
- ignore job of own backup, when creating cronjobs (job appears after a rise run)


* Sun May 20 2007 Michael Weinberger
- Version 0.3.2
- man: added sshPort property
- chdir /tmp to avoid cwd warnings when the cwd disappears while running rise or undo rise
- bugfix: undorise() did not find its own backup archive. Was searching for a wrong name


* Wed May 16 2007 Michael Weinberger
- Version 0.3.1 minor bugfixes
- checkCrossFS() did not work (used in --rise) 
- jobconfig-sample.pl: deleting record before setting props
- Perl errors with --status before a job run


* Thu May 10 2007 Michael Weinberger
- Version 0.3.0
- man page completed
- mark archives with indices > keep setting with '*' in --list-archives output
- Option --delete-job
- Option --cleanup
- added --job=JOB alternative to --send-keys


* Wed May 09 2007 Michael Weinberger
- Version 0.2.0
- added --mailtest option


* Tue May 08 2007 Michael Weinberger
- Version 0.1.5
- improved --status output
- removed options --report and --send-report
- added option --show-archives
- added --csv for status and show-archives output in CSV format
- added property 'sshPort'


* Mon May 07 2007 Michael Weinberger
- Version 0.1.4
- don't install the remote watchdog, when remotehost is eq localhost
- improved check for remoteHostName eq localhost using DNS
- ssh -o PasswordAuthentication=no in checkConnection()
- added --full-restore
- missing check for HOSTNAME argument in --send-keys added
  improved error check
- prevent run of --rise of localhost from own backup


* Mon Apr 30 2007 Michael Weinberger
- Version 0.1.3
  modified 'use constant' syntax in watchdog script for compatibility with perl 5.6 on SME6


* Fri Apr 27 2007 Michael Weinberger
- Version 0.1.2
  Bugfix: Preserve of ethernet driver setting with --rise did not work.
  Also preserve NIC bonding.


* Mon Apr 23 2007 Michael Weinberger
- Version 0.1.1
  scheduledKeep must be>=2 for --link-dest
  set scheduledKeep to 2 if <2
- get lastrun date from affa-report rather than from report file
- added auto mount function
- added AutomountDevice and AutomountPoint to jobconfig-sample.pl


* Wed Apr 18 2007 Michael Weinberger
- Version 0.0.8
  don't die if report db does not exist


* Wed Apr 18 2007 Michael Weinberger
- Version 0.0.7
  run checkConnection() only for scheduled backups
  added Size and Disk usage information to --status


* Thu Apr 12 2007 Michael Weinberger
- Version 0.0.5
  fixed calculation of lastrun-now


* Thu Apr 12 2007 Michael Weinberger
- Version 0.0.4
- added --send-status plus templates 
- fixed format error of times in affa --status
- show 'failed', if lastrun is older than 1 day in affa --status
- fixed typo. default status=disabled (was disable)


* Fri Apr 06 2007 Michael Weinberger
- Version 0.0.3
- watchdog reminder was not deleted on source
- wrong version mismatch list in rpm compare


* Thu Apr 05 2007 Michael Weinberger
- added 'rsync--inplace' property


* Mon Apr 02 2007 Michael Weinberger
- initial release

Source RPM

Affa SRPM


Acronym

Affa stands for Automatische Festplatten Fernarchivierung (German for "automatic remote hard disk archiving").