Replace Failed Disk on NetApp FAS

After physically replacing a failed disk, the new disk might not be automatically assigned to the controller that owned the old one. Whether that happens depends on the value of the disk.auto_assign option, which you can check with:

options disk.auto_assign

Even if that option is set to on, the disk might still remain unassigned, in which case you will see a message about unassigned disks at the end of the output of the command:

disk show

You can see which disks are unassigned with:

disk show -n

To assign a disk to a controller, SSH to that controller and do:

disk assign XX.YY.ZZ

... where XX.YY.ZZ is the name of the disk, as obtained from disk show -n. Example output:

FAS> disk assign 01.23.45
Fri May 13 00:00:02 [FAS:diskown.changingOwner:info]: changing ownership for disk 01.23.45 (S/N ABCDEF) from unowned (ID 1234567890) to FAS (ID 0987654321)

Clone a remote repository with GitPython

Example code:

{% highlight python %}
import git

remote = ''
local = '/home/marios/Tests/gitpython'

git.Repo.clone_from(remote, local)
{% endhighlight %}


mysqlhotcopy is a Perl script for backing up MySQL tables stored in either the MyISAM or ARCHIVE engines. It works very fast because it doesn't dump the contents of the tables. Instead it takes advantage of the fact that MyISAM tables are contained in separate files, and simply locks the tables and copies the flat files.

Example Setup

  1. Create a user in MySQL for running mysqlhotcopy. The user will need to be granted the SELECT, RELOAD and LOCK TABLES privileges on the databases that will be backed up. In this example, I want all databases to be backed up:

    mysql> CREATE USER `mysqlhotcopy`;
    mysql> GRANT SELECT, RELOAD, LOCK TABLES ON *.* TO `mysqlhotcopy`@`localhost`;
  2. Create a daily cron job to back up every database, except for information_schema, which is a dynamic schema created by the MySQL server itself and does not exist as files on the filesystem. An example script is:

    for database in $(mysql --user mysqlhotcopy           \
                            --batch                       \
                            --skip-column-names           \
                            --execute 'SHOW DATABASES;' | \
                      grep -v '^information_schema$')
    do
        mysqlhotcopy "$database" /root/mysql-dumps/ --allowold --keepold --user=mysqlhotcopy
    done

    This script maintains one previous copy of each database: the --allowold and --keepold options make mysqlhotcopy rename the existing backup directory, appending the suffix _old to its name, instead of aborting or deleting it.
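The rotation that --allowold --keepold gives you can be sketched with plain directory operations. This is an illustration of the behaviour only, not mysqlhotcopy's actual code, and the directory names are made up:

```python
import os
import shutil
import tempfile

backup_root = tempfile.mkdtemp()
current = os.path.join(backup_root, 'mydb')
previous = os.path.join(backup_root, 'mydb_old')

os.mkdir(current)  # pretend this is yesterday's copy

# On the next run: an existing copy is tolerated (--allowold), renamed
# with an _old suffix, and kept around afterwards (--keepold).
if os.path.isdir(current):
    if os.path.isdir(previous):
        shutil.rmtree(previous)  # only one previous copy is retained
    os.rename(current, previous)
os.mkdir(current)  # the fresh copy is written here

print(sorted(os.listdir(backup_root)))  # ['mydb', 'mydb_old']
```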


This is the nicest guide for streaming replication that I've found: Zero to PostgreSQL streaming replication in 10 mins. It is written for Ubuntu, but only a couple of steps need to be done differently on CentOS, which I have noted in the comments of that article. This other guide is not as clear (for my taste), but it includes some commands to verify that replication is running: How To Setup PostgreSQL Replication On CentOS.


Random tips for BackupPC:

  1. On your client systems (those that will be backed up by BackupPC), rotate your logs (whether compressed or not) with dates in the filenames, instead of appending suffixes such as .1, .2, .3, etc. With dated names, BackupPC will ignore old logs on new runs, since each rotated file keeps the same name and the same checksum. With numbered names, every rotation renames every old log, so BackupPC will transfer them all again. This configuration is achieved by setting the dateext parameter in the logrotate configuration file, which on CentOS 6 is /etc/logrotate.conf by default.

  2. If mlocate is installed on the BackupPC system, you should exclude the backup directory from being indexed by the nightly run of updatedb, otherwise /var/lib/mlocate/mlocate.db will become enormous. To exclude the backup directory, edit /etc/updatedb.conf and append the directory path to the end of the line for the PRUNEPATHS variable.
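For example, the PRUNEPATHS line in /etc/updatedb.conf could end up looking like this. The existing paths are the typical CentOS defaults, and /var/lib/BackupPC is the stock BackupPC top-level directory; both are assumptions, so adjust them to your installation:

    PRUNEPATHS = "/afs /media /net /sfs /tmp /udev /var/spool/cups /var/lib/BackupPC"

After editing the file, the exclusion takes effect on the next run of updatedb.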

Hello, I'm Marios Zindilis and this is my website. Opinions are my own. You can also find me on LinkedIn and GitHub.

Unless otherwise specified, content is licensed under CC0.