The following information was taken from: http://mysqlha.blogspot.com/2008/07/what-exactly-does-flush-tables-with.html
FLUSH TABLES WITH READ LOCK can do wonders. But you should understand what it does to avoid problems. The manual describes it with this:
Closes all open tables and locks all tables for all databases with a read lock until you explicitly release the lock by executing
UNLOCK TABLES. This is a very convenient way to get backups if you have a filesystem such as Veritas that can take snapshots in time.
FLUSH TABLES WITH READ LOCK acquires a global read lock and not table locks, so it is not subject to the same behavior as
LOCK TABLES and UNLOCK TABLES with respect to table locking and implicit commits.
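A snapshot backup with this statement typically looks like the following sketch. The SHOW MASTER STATUS step is optional (it records binlog coordinates if the backup will seed a replica), and the snapshot itself is taken from another shell while the lock is held:

```sql
-- Session 1: quiesce writes and hold the global read lock
FLUSH TABLES WITH READ LOCK;
-- Optionally record binlog coordinates for replication or point-in-time recovery
SHOW MASTER STATUS;
-- ... take the filesystem snapshot from another shell while the lock is held ...
-- Release the lock once the snapshot completes
UNLOCK TABLES;
```

Note that the lock is held by the session that issued the FLUSH; disconnecting that session releases it.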
The implementation does this:
1. set the global read lock – after this step, INSERT/UPDATE/DELETE/REPLACE/ALTER statements cannot run
2. close open tables – this step will block until all statements started previously have finished
3. set a flag to block commits
If long-running statements were started prior to step 2, the FLUSH TABLES WITH READ LOCK command blocks until they complete. This is a bad state to get stuck in: the server is read-only at this point, and this statement is frequently run on a primary. It may be a good idea to kill long-running statements before running the FLUSH command, or when it gets stuck. Note that commits can still be done until step 3 finishes.
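Finding and killing such statements can be done with something like the following; the connection id 12345 is a placeholder taken from the Id column of the process list output:

```sql
-- List running statements along with their elapsed time (Time column)
SHOW FULL PROCESSLIST;
-- Kill a long-running statement by its connection id (12345 is a placeholder)
KILL QUERY 12345;
```

KILL QUERY terminates only the running statement; plain KILL also closes the connection.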
Here is the code (abridged, from the FLUSH handling in sql_parse.cc):
if (lock_global_read_lock(thd))
  return 1;                                  // Killed
result = close_cached_tables(thd, (options & REFRESH_FAST) ? 0 : 1, tables);
if (make_global_read_lock_block_commit(thd)) // Killed
{ /* Don't leave things in a half-locked state */
  unlock_global_read_lock(thd);
  return 1; }