Cacti not graphing, snmpwalk works

I recently ran into an issue while upgrading Cacti. I was able to snmpwalk with my credentials and get a response, but when I entered those same credentials while adding a device to Cacti, I got an SNMP error.

Turns out Cacti is very sensitive to the version of PHP you are using and whether you are using SNMP v2 or v3.

I upgraded PHP to version 5.3.x and all was right with the world for the newer version of Cacti, 0.8.7g.

Note: A colleague of mine noticed that Cacti 0.8.7a didn’t work well with PHP 5.3; he was able to get it working with PHP 5.2.x.
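For reference, these are the kinds of snmpwalk checks I used to confirm the credentials worked outside of Cacti. The hostname, community string, and v3 credentials below are placeholders; substitute your own, and match the auth/priv protocols to what the agent is configured for.

```shell
# SNMP v2c sanity check ("public" and the hostname are placeholders)
snmpwalk -v 2c -c public monitored-host.example.com system

# SNMP v3 sanity check at the authPriv security level
snmpwalk -v 3 -u snmpuser -l authPriv -a SHA -A 'authpass' \
         -x AES -X 'privpass' monitored-host.example.com system
```

If these return the system subtree but Cacti still reports an SNMP error with the same values, the problem is on the Cacti/PHP side, which is what pointed me at the PHP version.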


Update multiple svn checkouts

I recently set up a test environment that had more than 30 svn checkouts serving as websites. Since it was a test server, it wasn’t updated as frequently as production. I created this simple script to update all of the directories, some of which were not svn checkouts.

for i in `find . -mindepth 1 -maxdepth 3 -type d | grep -v '\.svn'`; do svn up "$i"; done

scponly on RHEL5


  1. Downloaded scponly from here:
  2. Copy it over to the server: scp scponly-YYYYMMDD.tgz username@serverName:~/
  3. On the SFTP server, untar the tarball in /usr/local and build it:
    1. cd /usr/local
    2. sudo tar xzf ~/scponly-YYYYMMDD.tgz
    3. cd scponly-YYYYMMDD
    4. ./configure --enable-chrooted-binary
    5. make
    6. sudo make install
  4. This will create the necessary files for scponly under /usr/local
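The build steps above, collected into one shell session (scponly-YYYYMMDD.tgz is the dated tarball copied to your home directory in step 2):

```shell
# Build and install scponly under /usr/local
cd /usr/local
sudo tar xzf ~/scponly-YYYYMMDD.tgz
cd scponly-YYYYMMDD
./configure --enable-chrooted-binary   # also builds scponlyc, the chrooted variant
make
sudo make install
```

The --enable-chrooted-binary flag is what produces scponlyc, which is needed for the chrooted SFTP users set up in the next section.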

Add SFTP chrooted user

  1. I downloaded the script from here: and modified it for our environment.
  2. Run the script. This will create the user if one doesn’t exist, build the chroot directory structure, and create a writable directory the user can upload files to or pull files from.
  3. Add SFTP user to sshd_config AllowUsers, restart SSHD
  4. Test with an SFTP client. NOTE: You will not be able to test with SSH!!
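Since the script itself isn’t reproduced here, a minimal sketch of what such a setup script does. The user name, chroot path, and upload directory are assumptions for illustration; scponly ships a setup_chroot.sh helper that populates the chroot with the binaries and libraries it needs.

```shell
#!/bin/sh
# Hypothetical sketch of an scponly chroot-user setup. Run as root.
# The "//" in the home directory marks the chroot boundary for scponlyc.
USERNAME="$1"
CHROOT="/chroot_homes/$USERNAME"

# Create the user with scponlyc (the chrooted binary) as the login shell
useradd -d "$CHROOT//home/$USERNAME" -s /usr/local/sbin/scponlyc "$USERNAME"

# Skeleton of the chroot tree, with a writable upload directory
mkdir -p "$CHROOT/home/$USERNAME/upload"
chown root:root "$CHROOT"
chown "$USERNAME" "$CHROOT/home/$USERNAME/upload"
```

Everything above the upload directory stays root-owned so the user can only write inside the directory intended for transfers.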

Convert MySQL Slave to Master

Found this interesting the other day. Our production master SQL server was having connection issues.  So we needed to cut over from the master to the slave to try to resolve the issues.

What we did

# Bring down the IP address on the master

# Shut off the mysql service on the master

# Bring up the IP address on the slave

This was all that was REQUIRED to get MySQL connections pointing to the slave.

What was WRONG with this approach became clear when we wanted to cut back over once the issues with the master had been resolved. The problem turned out to be an Ethernet cable; once the cable was replaced, we were able to verify the issue no longer existed.

What we SHOULD have done

# Log into the MySQL slave and run STOP SLAVE; RESET SLAVE;

# Add log-bin to the slave’s /etc/my.cnf file

# Stop the IP interface on the master (optionally the MySQL service)

# Add the IP address to the slave system

# Restart the MySQL service on the slave server. It should now be running as the master.
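A sketch of those promotion steps as commands. The interface alias (eth0:0), the IP address, and the service name are assumptions for illustration; they will vary by environment, though on RHEL5 it would look roughly like this:

```shell
# On the slave: stop replication and discard the old relay log state
mysql -u root -p -e "STOP SLAVE; RESET SLAVE;"

# Add log-bin under [mysqld] in the slave's /etc/my.cnf so the promoted
# server writes a binary log a future slave can replicate from:
#   [mysqld]
#   log-bin=mysql-bin

# On the old master: take down the service IP, optionally stop MySQL
sudo ifconfig eth0:0 down
sudo service mysqld stop

# On the slave: bring up the master's IP, then restart MySQL
sudo ifconfig eth0:0 192.0.2.10 netmask 255.255.255.0 up
sudo service mysqld restart
```

RESET SLAVE clears the replication coordinates so the promoted server no longer tries to pull from the broken master, and enabling log-bin is what makes the catch-up replication described below possible.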

The reason this is a better solution: once the broken system is fixed, we can set it up as the slave, let the binary log replicate and catch it up to the new master, and then cut back over to the fixed system as the master using the same steps as above.