For my notes which do a better job of covering SQL in general, see my SQL Notes. SQLite is always recommended over MySQL if it is at all plausible; that said, sometimes you need MySQL. These notes focus more on MySQL's access and connection features than on basic SQL.

Database Files IRL

Where are the files that contain the data on the file system? On at least one CentOS box, a good place to look is here.


To be sure, ask the server config directly.

:->[][/var/lib]$ grep datadir /etc/my.cnf


I think .opt files are settings diffs (i.e. not the data). Tables tend to have sets of files like this:

Table 1. MyISAM

*.MYD	table's data
*.MYI	table's indexes
*.frm	table's format


For InnoDB, look for all tables in one file in the data directory, usually named ibdata1 or something like that.


To bind the server to any address rather than just the local address, set this in /etc/mysql/my.cnf (Gentoo’s location anyway).

bind-address =

(Worked on mariadb but should be the same.) The normal MySQL/MariaDB port is 3306. If for some god-forsaken reason you need to change it, adjust this in the same place as before.

port = 3306

Database Operations

To show current active databases:

mysql> show databases;

To find out which database is currently selected (blank if none are selected):

mysql> select database();

To set up all the right files (case sensitive) to spawn a brand-new database:

mysql> create database DATABASENAME;

To activate a particular database:

mysql> use DATABASENAME;

Need to get rid of one?

mysql> drop database DATABASENAME;
This predictably gets rid of everything and anything in that database. Don’t pull this trigger unless you mean it.

To rename a database, well, it’s kind of a pain in the ass. Some say it can’t be done. Others think it kind of can be done by making a new database with the desired name and dropping the previous one. Some people think you can simply rename the data files if they are MyISAM but not InnoDB. It does look like you’ll need enough drive room to have double the space of the database to be renamed. Very annoying. You could try dumping it to bz2, dropping the original, and then restoring to the new name.

Command Line Queries

mysql -u dbuser -p${MYPASS} -h \
      -e "SELECT somefield FROM mytable;" \
      mydatabase > /tmp/output_report

Note that entering passwords in the clear on the command line is not ideal. You can minimize exposure by keeping them out of your history. Start by putting this in your ~/.bashrc.

export HISTCONTROL=ignoredups:ignorespace

This allows you to type a leading space so that the command does not get recorded in the ~/.bash_history file. That allows something like this.

$  P=some_pass    # <-- Note leading space.
$ pv bigdata.sql | mysql -u myuser --password=${P} -h localhost bigdata

Note the leading space. Another good way, which also handles bystanders, is to use the Bash read builtin.

$ read -s P
<silent typing not shown>

Note that to supply a password on the command line you can't use -p with a space; you must use --password=<passwd> (or -p<passwd> with no space between). Also note that logging in with a password over the wild internet may be a terrible idea. Better to SSH in first if possible.

For huge operations such as dumps or restorations, the pv pipe viewer program is almost essential, especially if there are problems.

Dump And Restore

To make a dump of your database that can be used to restore it later and/or elsewhere use:

mysqldump -u ${USER} -p ${DATABASENAME} > file.sql

Or how about something sensible like:

mysqldump -u ${USER} -p ${DATABASENAME} | gzip > file.sql.gz

If you have a bunch of SQL that you want applied to a MySQL database, do the following:

mysql -u ${USER} -p ${DATABASENAME} < file.sql

Note that -p prompts for the password. Also, USER may be different to MySQL than your shell user; often you want to just use the MySQL root account.

To output specific data from your database from a command line or script do something like:

mysql -u ${USER} -p -e 'SELECT * FROM atable;' thebigdb

Over SSH? Sure…

mysqldump -u root -p'xxxxxxxx' TheDB | ssh 'cat - > /archive/TheDB-20160122.sql'

Or better.

mysqldump -uroot -pxxxxxxxx ${MYDBNAME} | gzip | \
ssh -l root "cat - > /data/${MYDBNAME}-$(date +%Y%m%d)-sqldump.gz"

(Double quotes so that ${MYDBNAME} and $(date) expand on the local side.)

Here is an example of doing a complete database clone from one server to another. Start by reading in the passwords for local and remote.

$ read -s Pl
$ read -s Pr
$ mysql -u root --password=${Pl} -h localhost -e 'CREATE DATABASE the_dev;'
$ mysqldump -u xed --password=${Pr} -h the_dev \
  | mysql -u root --password=${Pl} -h localhost the_dev

If you don’t want the (required on the command line) password going into the history file, use unset HISTFILE.

If you have a huge amount of data that is pouring in continuously, it can be quite tricky to make sensible backups. One way is to do some kind of query referencing a field that contains dates (dump everything newer than the last back up). But sometimes that’s not even possible or manageable. In that case it might be best to set up a master/slave replication. Here’s a good description of that process.

User Management

MySQL keeps its own database (literally stored in a native MySQL database) of users, passwords, and permissions.

To have a look at the users currently established, change to the system database:

mysql> use mysql
mysql> select user,host,password from user where user="xed";
+------+------+-------------------------------------------+
| user | host | password                                  |
+------+------+-------------------------------------------+
| xed  | %    | *42693AD34DACEBADA55BFAA2EAF4FCEBADA55BB8 |
+------+------+-------------------------------------------+

Change Password

Looks like there are two kinds of password hashes MySQL could be using: the old kind (16 characters) and the newer kind (41 characters, starting with a *). Which kind gets made is controlled by a parameter called old_passwords which lives in /etc/my.cnf. If you change this, then you have to make sure all your passwords are "long". You can make this transition safer by using this session variable: SET SESSION old_passwords=0; That will create new-style passwords before making major system-wide commitments that could lock you out upon a restart.
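For what it's worth, the newer 41-character hash is just a double SHA-1 of the password, prefixed with a *. A minimal sketch using the standard hashlib module (the example value is the widely published hash of the literal string "password"):

```python
import hashlib

def mysql41_hash(password):
    """MySQL 4.1+ ("long") password hash: '*' + SHA1(SHA1(password)) in upper hex."""
    stage1 = hashlib.sha1(password.encode("utf-8")).digest()
    return "*" + hashlib.sha1(stage1).hexdigest().upper()

print(mysql41_hash("password"))
# -> *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19  (41 chars)
```

Handy for checking whether a row in the user table is old-style (16 chars) or new-style (41 chars) at a glance.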

Here's one way (a direct UPDATE to the user table doesn't take effect until you flush privileges):

mysql> UPDATE user SET password=PASSWORD('secret') WHERE user='xed';
mysql> FLUSH PRIVILEGES;

Or this might be a smoother way:

mysql> SET PASSWORD FOR xed@localhost=PASSWORD('newpass');

Create A New User

mysql> CREATE USER 'xed'@'localhost' IDENTIFIED BY 'some_pass';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'xed'@'localhost' WITH GRANT OPTION;
mysql> CREATE USER 'xed'@'%' IDENTIFIED BY 'some_pass';

These look pretty much the same, but both are needed; otherwise user xed connecting locally will end up invoking the anonymous user on localhost.
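Since the 'localhost' and '%' variants have to be created in tandem, it can help to generate the statements from one place so they never drift apart. A tiny sketch (user and password values are placeholders):

```python
def create_user_sql(user, password, hosts=("localhost", "%")):
    """Emit a matching CREATE USER statement for each host variant."""
    return ["CREATE USER '%s'@'%s' IDENTIFIED BY '%s';" % (user, h, password)
            for h in hosts]

for stmt in create_user_sql("xed", "some_pass"):
    print(stmt)
```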

mysql> CREATE USER 'admin'@'localhost';
mysql> GRANT RELOAD,PROCESS ON *.* TO 'admin'@'localhost';

To open permissions to the database:

mysql> GRANT ALL ON database.* TO your_mysql_name;

Delete User

To delete a user from the MySQL access list:

mysql> DROP USER 'retiredguy'@'localhost';

Neater USER Table

Here’s a nice way to show the USER table so there isn’t a bazillion fields making it unreadable and useless:

SELECT password, host, user,
CONCAT(Select_priv, Create_tmp_table_priv, Create_view_priv, Show_view_priv, Lock_tables_priv) AS view,
CONCAT(Insert_priv, Update_priv, Delete_priv, Create_priv, Drop_priv) AS edit,
CONCAT(Grant_priv, References_priv, Index_priv, Alter_priv) AS meta,
CONCAT(Create_routine_priv, Alter_routine_priv, Execute_priv) AS exec,
CONCAT(Repl_slave_priv, Repl_client_priv) AS rep,
CONCAT(Super_priv, Shutdown_priv, Process_priv, File_priv, Show_db_priv, Reload_priv) AS admin
FROM user ORDER BY user, host;

You can also set privileges by database (see the mysql.db table) and even finer-grained than that (see mysql.{tables_priv, columns_priv, procs_priv}).

SELECT Host, Db, User,
CONCAT(Select_priv, Create_view_priv, Show_view_priv) AS sel,
CONCAT(Insert_priv, Create_priv, Update_priv, Alter_priv, Lock_tables_priv) AS edit,
CONCAT(Delete_priv, Drop_priv) AS del,
CONCAT(Grant_priv, References_priv, Index_priv, Create_tmp_table_priv) AS misc,
CONCAT(Create_routine_priv, Alter_routine_priv, Execute_priv, Event_priv, Trigger_priv) AS exec
FROM mysql.db ORDER by User,Host;

Opening Access To Other Hosts

Check what your user situation is as described above. Find the explicit host that is allowed (usually the one you’re on) and replace it with a %.

mysql> select user,host,password from user;
| user | host                | password                                  |
| root | localhost           | *DB4C847E46thehashishere64003A18CEFB61685 |
| root | | *DB4C847E46thehashishere64003A18CEFB61685 |
| root |           | *DB4C847E46thehashishere64003A18CEFB61685 |
| root | ::1                 | *DB4C847E46thehashishere64003A18CEFB61685 |
|      | localhost           |                                           |
|      | |                                           |
6 rows in set (0.00 sec)

mysql> update user set host='%' where user='root' and host='';

Table Operations

To find out the names of the tables of the active database:

mysql> show tables;
To prepare a new structure for a database table (also case sensitive):

mysql> CREATE TABLE tablename (field1 VARCHAR(20), field2 INT(5),
    -> field3 VARCHAR(20));
mysql> CREATE TABLE point (pid INT auto_increment not null primary key,
    -> x float(15,8), y float(15,8), z float(15,8));

To find out the structure of a specific table (field structure etc):

mysql> DESCRIBE tablename;

To load a prepared database table with data from a tab/newline delimited text file:

mysql> LOAD DATA LOCAL INFILE "data.txt" INTO TABLE tablename;

To insert a single new record into a database table:

mysql> INSERT INTO tablename (field1, field2) VALUES ('avalue', 42);

Deleting tables:

mysql> DROP TABLE testtable;


Python

There are several different Python modules which can interact with MySQL databases.

Note that in general when working with any database, Python likes to conform to the Python Database API. This API has been defined to encourage similarity between the Python modules that are used to access databases. This is a good place to start to ensure that designs support a very smooth migration from one database engine to another.
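As a sketch of that common API shape, here is the connect/cursor/execute/fetchall cycle using the standard library's sqlite3 module as a stand-in, since it follows the same DB-API; MySQLdb differs mainly in the connect() arguments and in using %s rather than ? placeholders:

```python
import sqlite3  # stdlib DB-API 2.0 module, used here so the example is self-contained

connection = sqlite3.connect(":memory:")   # MySQLdb.connect(host=..., user=..., ...) instead
cursor = connection.cursor()
cursor.execute("CREATE TABLE keywords (pid INTEGER NOT NULL, word TEXT)")
cursor.execute("INSERT INTO keywords VALUES (?, ?)", (1, "mysql"))
connection.commit()                        # same commit-on-the-connection step MySQL needs
cursor.execute("SELECT word FROM keywords WHERE pid = ?", (1,))
print(cursor.fetchall())                   # -> [('mysql',)]
```

Code written against this shape migrates between database engines with few changes.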


MySQL has a basic type for timestamps; you just need to send it a string in the correct format. That format is shown here, with the current time being rendered into a suitable string.

import datetime
now="%Y-%m-%d %H:%M:%S")
s= 'INSERT INTO places VALUES ("49","AK","%s","Alaska");' % now

Note the interesting quotes around the %s. This is usually not needed in most Python work, but since this SQL syntax needs quotes for MySQL, they are needed here.
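A quick way to convince yourself the format string is right, without a database handy, is to round-trip it through strptime (plain standard library, nothing MySQL-specific):

```python
import datetime

FMT = "%Y-%m-%d %H:%M:%S"   # the string form MySQL DATETIME/TIMESTAMP columns accept

stamp = datetime.datetime(2016, 1, 22, 13, 5, 0).strftime(FMT)
print(stamp)                # -> 2016-01-22 13:05:00
# strptime parses it back to the identical moment:
assert datetime.datetime.strptime(stamp, FMT) == datetime.datetime(2016, 1, 22, 13, 5, 0)
```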

import MySQLdb

MySQLdb is a C extension module that has a reputation of being difficult to compile, especially if you're on a Mac. In truth it seems to be a wrapper for a lower-level module called _mysql. Full documentation seems to be here.

Here’s an example of creating a completely new database.

import os

def test_setup_db(dbname):
    p= "RUSureUWantThisHere?"
    dbcheck= os.popen('mysqlcheck -u thedbuser --password='+p+' '
                      +dbname+' 2> /dev/null').read()
    if 'OK' in dbcheck:
        print '!! Database Exists'
    else: # Going to create a database with correct tables, etc.
        os.system('mysqladmin -u thedbuser --password='+p+' create '+dbname)

    # Check connection and created db tables.
    import MySQLdb
    connection= MySQLdb.connect(host= "",
                                user= "thedbuser",
                                passwd= p,
                                db= dbname)
    cursor= connection.cursor()
    cursor.arraysize= 50
        cursor.execute('CREATE TABLE keywords ('
                     + 'pid MEDIUMINT UNSIGNED NOT NULL,'
                     + 'word VARCHAR(255)'
                     + ')')
    except MySQLdb.OperationalError:
        print "!! Problem creating table 'keywords', maybe already exists."
        raise SystemExit
    cursor.execute('SHOW TABLES')

    print "* Tables created:"
    for column_1 in cursor.fetchall():
        print "-- Values:", column_1

Here’s an example (from my web-based photo management software, Pixiepoo) that shows how records are added.

import MySQLdb

pw= "somepass" # Obviously INSECURE.
p= {'host':'', 'user':'dbuser', # user/db values here are placeholders
    'passwd':pw, 'db':'mydatabase'}

import datetime
now="%Y-%m-%d %H:%M:%S")
s= 'INSERT INTO sbdData VALUES ("a0k","654321","%s","something_new");' % now

connection= MySQLdb.connect(**p)
cursor= connection.cursor()
cursor.execute(s)
connection.commit()  # <= Important or nothing changes. Connection method.

Here’s an example that queries to find a certain kind of record.

def find_rec_by_filename(self, fn):
    """Try to find records that have an original filename that
    contains the input (search) string. This is useful for finding
    out where in the db files just deposited went."""
    import MySQLdb
    connection= MySQLdb.connect(host= "",
                user= "dbuser", passwd= p, db= self.dbname)
    cursor= connection.cursor()
    cursor.arraysize= 50
    sql= 'SELECT pid FROM photo WHERE orig_filename LIKE "%'+fn+'%";'
        print sql
        cursor.execute( sql )
        tup_of_tuples= cursor.fetchall()
        return map( lambda x:x[0], tup_of_tuples)
    except MySQLdb.MySQLError:
        print "Content Type: text/plain\n\n"
        print "!! Problem finding record by filename."
        print sql
        raise SystemExit

Note that just sending a raw string to cursor.execute() may not work out. If you are supplying new data to insert, or even a value to compare, it is better to do something like

SQL= """SELECT spam, eggs, sausage FROM breakfast WHERE price < %s"""
cursor.execute( SQL , (max_price,))

By letting the execute function deal with placing the values (supplied by the second argument tuple), the library can do the necessary conversions and protections (especially helpful for blobs and SQL injection problems).
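Here is a self-contained illustration of that, again using sqlite3 as a DB-API stand-in (sqlite3 uses ? where MySQLdb uses %s). The LIKE pattern from the earlier filename search is passed as a value, not pasted into the SQL text, so a filename containing quotes cannot break or hijack the statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE photo (pid INTEGER, orig_filename TEXT)")
cur.execute("INSERT INTO photo VALUES (1, 'IMG_1234.jpg')")

fn = "IMG_1234"   # search string; the library quotes/escapes it for us
cur.execute("SELECT pid FROM photo WHERE orig_filename LIKE ?", ("%" + fn + "%",))
print([row[0] for row in cur.fetchall()])   # -> [1]
```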

import mysql

I believe the mysql module (MySQL Connector/Python, imported as mysql.connector) is written in pure Python, which allows maximal portability to anything that can run Python. Performance may be worse; it seems it can be half as fast as the C extension. Choose wisely.

Excellent, perhaps even official Python/MySQL examples.


Troubleshooting

I got the "The table mytable is full" error. The first thing to check is which storage engine the table is using:

show table status from mydatabase;

or more specifically to the problem table:

show table status from mydatabase where Name="mytable";
show table status from mydatabase like "mytable";

Look for "MyISAM" or "InnoDB" in the "Engine" field. If the type is MyISAM then look for the "Create_options" field. In my case it said "max_rows=132036 avg_row_length=5782".

Figure out about how big the row length should be with the existing records:

select avg(octet_length(myblobfield)) from mytable;

In my case I had a blob that averaged about 2100 bytes plus about 6 bytes for its ID field. I wanted to be able to hold 100,000,000 records, so I used the following:

ALTER TABLE mytable MAX_ROWS=100000000 AVG_ROW_LENGTH=2500;
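The arithmetic behind those two numbers, as a sanity check (MyISAM uses MAX_ROWS and AVG_ROW_LENGTH to size the data file's row pointers):

```python
max_rows = 100000000        # desired capacity: 100 million records
avg_row_length = 2500       # measured ~2100-byte blob + ~6-byte ID, rounded up for overhead
capacity = max_rows * avg_row_length
print(capacity)             # -> 250000000000 bytes, roughly 233 GiB of headroom
```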


While importing I got this.

ERROR 1153 (08S01) at line 1134:
Got a packet bigger than 'max_allowed_packet' bytes

I solved it by first going to a MySQL console and running these commands.

SET GLOBAL net_buffer_length=1000000;
SET GLOBAL max_allowed_packet=1000000000;

Note that you can find the current values with this.

SHOW GLOBAL VARIABLES LIKE "net_buffer_length";
SHOW GLOBAL VARIABLES LIKE "max_allowed_packet";

For these my defaults were 16384 and 1048576 respectively.

Then add this option to the client command.

mysql --max_allowed_packet=100M
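MySQL size options take K/M/G suffixes as powers of 1024, so it can be handy to check what a given setting really means against the numeric values SHOW GLOBAL VARIABLES reports. A tiny converter sketch (the helper name is made up):

```python
def to_bytes(size):
    """Convert a MySQL-style size string like '100M' to bytes (1024-based suffixes)."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    suffix = size[-1].upper()
    if suffix in units:
        return int(size[:-1]) * units[suffix]
    return int(size)          # plain numbers pass through unchanged

print(to_bytes("100M"))       # -> 104857600
```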