
MYSQL ADMIN GUIDE

MYSQL ADMINISTRATION Page 1


INDEX

1. MYSQL HISTORY .............................. 3-4
2. MYSQL ARCHITECTURE ......................... 5-9
3. MYSQL STORAGE ENGINES .................... 10-18
4. TYPES OF LOGS IN MYSQL ................... 19-20
5. TYPES OF BUFFERS ......................... 21-25
6. MYSQL SERVER ADMINISTRATION .............. 26-30
7. SECURITY IN MYSQL ........................ 31-39
8. USER ADMINISTRATION IN MYSQL ............. 40-46
9. BACKUPS AND RECOVERY ..................... 47-62
10. LOCKING IN MYSQL ........................ 63-65



1. MYSQL HISTORY

MySQL history goes back to 1979 when Monty Widenius, working for a small company
called TcX, created a reporting tool written in BASIC that ran on a 4 MHz computer with 16 KB
RAM. Over time, the tool was rewritten in C and ported to run on Unix. It was still just a low-
level storage engine with a reporting front end. The tool was known by the name of Unireg.
Working under the adverse conditions of little computational resources, and perhaps building on
his God-given talent, Monty developed a habit and ability to write very efficient code naturally.
He also developed, or perhaps was gifted from the start, with an unusually acute vision of what
needed to be done to the code to make it useful in future development—without knowing in
advance much detail about what that future development would be. In addition to the above, with
TcX being a very small company and Monty being one of the owners, he had a lot of say in what
happened to his code. While there are perhaps a good number of programmers out there with
Monty’s talent and ability, for a number of reasons, few get to carry their code around for more
than 20 years. Monty did. Monty’s work, talents, and ownership of the code provided a
foundation upon which the Miracle of MySQL could be built. Sometime in the 1990s, TcX
customers began to push for an SQL interface to their data. Several possibilities were considered.
One was to load it into a commercial database. Monty was not satisfied with the speed. He tried
borrowing mSQL code for the SQL part and integrating it with his low-level storage engine. That
did not work well, either. Then came the classic move of a talented, driven programmer: “I’ve
had enough of those tools that somebody else wrote that don’t work! I’m writing my own!” Thus
in May of 1996 MySQL version 1.0 was released to a limited group, followed by a public release
in October 1996 of version 3.11.1. The initial public release provided only a binary distribution
for Solaris. A month later, the source and the Linux binary were released. In the next two years,
MySQL was ported to a number of other operating systems as the feature set gradually increased.

MySQL was originally released under a special license that allowed commercial use to
those who were not redistributing it with their software. Special licenses were available for sale
to those who wanted to bundle it with their product. Additionally, commercial support was also
being sold. This provided TcX with some revenue to justify the further development of MySQL,
although the purpose of its original creation had already been fulfilled. During this period
MySQL progressed to version 3.22. It supported a decent subset of the SQL language, had an
optimizer a lot more sophisticated than one would expect could possibly be written by one
person, was extremely fast, and was very stable. Numerous APIs were contributed, so one could
write a client in pretty much any existing programming language. However, it still lacked
support for transactions, subqueries, foreign keys, stored procedures, and views. The locking
happened only at a table level, which in some cases could slow it down to a grinding halt. Some
programmers unable to get around its limitations still considered it a toy, while others were more
than happy to dump their Oracle or SQL Server in favor of MySQL, and deal with the limitations
in their code in exchange for improvement in performance and licensing cost savings. Around
1999–2000 a separate company named MySQL AB was established. It hired several developers
and established a partnership with Sleepycat to provide an SQL interface for the Berkeley DB
data files. Since Berkeley DB had transaction capabilities, this would give MySQL support for



transactions, which it previously lacked. After some changes in the code in preparation for
integrating Berkeley DB, version 3.23 was released.

Although the MySQL developers could never work out all the quirks of the Berkeley DB
interface and the Berkeley DB tables were never stable, the effort was not wasted. As a result,
MySQL source became equipped with hooks to add any type of storage engine, including a
transactional one. By April of 2000, with some encouragement and sponsorship from Slashdot,
master-slave replication capability was added. The old nontransactional storage engine, ISAM,
was reworked and released as MyISAM. Among a number of improvements, full-text search
capabilities were now supported. A short-lived partnership with NuSphere to add Gemini, a
transactional engine with row-level locking, ended in a lawsuit toward the end of 2001.
However, around the same time, Heikki Tuuri approached MySQL AB with a proposal to
integrate his own storage engine, InnoDB, which was also capable of transactions and row-level
locking.



2. MySQL Architecture

For the most part, MySQL architecture defies a formal definition or specification. When most of
the code was originally written, it was not done to be a part of some great system in the future,
but rather to solve some very specific problems. However, it was written so well and with
enough insight that it reached the point where there were enough quality pieces to assemble a
database server. I make an attempt in this section to identify the core modules in the system.
However, let me add a disclaimer that this is only an attempt to formalize what exists.
MySQL developers rarely think in those terms. Rather, they tend to think of files,
directories, classes, structures, and functions. It is much more common to hear “This happens in
mi_open( )” than to hear “This happens on the MyISAM storage engine level.” MySQL
developers know the code so well that they are able to think conceptually on the level of
functions, structures, and classes. They will probably find the abstractions in this section rather
useless. However, it would be helpful to a person used to thinking in terms of modules and
managers. With regard to MySQL, I use the term “module” rather loosely. Unlike what one
would typically call a module, in many cases it is not something you can easily pull out and
replace with another implementation. The code from one module might be spread across several
files, and you often find the code from several different modules in the same file. This is
particularly true of the older code. The newer code tends to fit into the pattern of modules better.
So in our definition, a module is a piece of code that logically belongs together in some way, and
performs a certain critical function in the server.

One can identify the following modules in the server:


• Server Initialization Module

• Connection Manager

• Thread Manager

• Connection Thread

• User Authentication Module

• Access Control Module

• Parser

• Command Dispatcher

• Query Cache Module

• Optimizer



• Table Manager

• Table Modification Modules

• Table Maintenance Module

• Status Reporting Module

• Abstracted Storage Engine Interface (Table Handler)

• Storage Engine Implementations (MyISAM, InnoDB, MEMORY, Berkeley DB)

• Logging Module

• Replication Master Module

• Replication Slave Module

• Client/Server Protocol API

• Low-Level Network I/O API

• Core API

Interaction of the Core Modules :


When the server is started on the command line, the Initialization Module takes control. It parses
the configuration file and the command-line arguments, allocates global memory buffers,
initializes global variables and structures, loads the access control tables, and performs a number
of other initialization tasks. Once the initialization job is complete, the Initialization Module
passes control to the Connection Manager, which starts listening for connections from clients in
a loop. When a client connects to the database server, the Connection Manager performs a
number of low-level network protocol tasks and then passes control to the Thread Manager,
which in turn supplies a thread to handle the connection (which from now on will be referred to
as the Connection Thread). The Connection Thread might be created anew, or retrieved from the
thread cache and called to active duty. Once the Connection Thread receives control, it first
invokes the User Authentication Module. The credentials of the connecting user are verified, and
the client may now issue requests. The Connection Thread passes the request data to the
Command Dispatcher. Some requests, known in the MySQL code terminology as commands,
can be accommodated by the Command Dispatcher directly, while more complex ones need to
be redirected to another module. A typical command may request the server to run a query,
change the active database, report the status, send a continuous dump of the replication updates,
close the connection, or perform some other operation. In MySQL server terminology, there are
two types of client requests: a query and a command. A query is anything that has to go through
the parser. A command is a request that can be executed without the need to invoke the parser.
We will use the term query in the context of MySQL internals. Thus, not only a SELECT but
also a DELETE or INSERT in our terminology would be called a query. What we would call a
query is sometimes called an SQL statement. If full query logging is enabled, the Command
Dispatcher will ask the Logging Module to log the query or the command to the plain-text log
prior to the dispatch. Thus in the full logging configuration all queries will be logged, even the
ones that are not syntactically correct and will never be executed, immediately returning an error.
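The command/query split described above can be sketched with a toy dispatcher. All of the handler names below are illustrative; MySQL's real dispatcher (dispatch_command() in the server source) is far more involved.

```python
# Toy sketch of the Command Dispatcher's command/query split.
# Handler names are illustrative; COM_PING and COM_STATISTICS are
# real protocol command names, but these implementations are not MySQL's.

def handle_ping(_):
    return "pong"

def handle_statistics(_):
    return "uptime: 42"

# Simple requests ("commands") are resolved by the dispatcher directly.
COMMAND_HANDLERS = {
    "COM_PING": handle_ping,
    "COM_STATISTICS": handle_statistics,
}

def parse_and_execute(sql):
    # Stand-in for the Parser/Optimizer path that every query must take.
    return f"executed: {sql}"

def dispatch(request_type, payload):
    if request_type in COMMAND_HANDLERS:
        # A command: executed without invoking the parser.
        return COMMAND_HANDLERS[request_type](payload)
    # A query: anything that has to go through the parser, so in this
    # terminology SELECT, INSERT, and DELETE are all "queries".
    return parse_and_execute(payload)

print(dispatch("COM_PING", None))              # pong
print(dispatch("COM_QUERY", "DELETE FROM t"))  # executed: DELETE FROM t
```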

The Command Dispatcher forwards queries to the Parser through the Query Cache
Module. The Query Cache Module checks whether the query is of the type that can be cached,
and if there exists a previously computed cached result that is still valid. In the case of a hit, the
execution is short-circuited at this point, the cached result is returned to the user, and the
Connection Thread receives control and is now ready to process another command. If the Query
Cache Module reports a miss, the query goes to the Parser, which will make a decision on how to
transfer control based on the query type. One can identify the following modules that could
continue from that point: the Optimizer, the Table Modification Module, the Table Maintenance
Module, the Replication Module, and the Status Reporting Module. Select queries are forwarded
to the Optimizer; updates, inserts, deletes, and table-creation and schema-altering queries go to
the respective Table Modification Modules; queries that check, repair, update key statistics, or
defragment the table go to the Table Maintenance module; queries related to replication go to the
Replication Module; and status requests go to the Status Reporting Module. There also exist a
number of Table Modification Modules: Delete Module, Create Module, Update Module, Insert
Module, and Alter Module. At this point, each of the modules that will receive control from the
Parser passes the list of tables involved in the query to the Access Control Module and then,
upon success, to the Table Manager, which opens the tables and acquires the necessary locks.
Now the table operation module is ready to proceed with its specific task and will issue a number
of requests to the Abstracted Storage Engine Module for low-level operations such as inserting
or updating a record, retrieving the records based on a key value, or performing an operation on
the table level, such as repairing it or updating the index statistics. The Abstracted Storage
Engine Module will automatically translate the calls to the corresponding methods of the specific
Storage Engine Module via object polymorphism. In other words, when dealing with a Storage
Engine object, the caller thinks it is dealing with an abstracted one, when in fact the object is of a
more specific type: it is the Storage Engine object corresponding to the given table type. The
interface methods are virtual, which creates the effect of transparency. The correct method
will be called, and the caller does not need to be aware of the exact object type of the
Storage Engine object.
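The virtual-dispatch behaviour described above can be sketched as follows. This is a simplified analogy in Python; MySQL implements the abstracted interface in C++ (the handler class), and these class and method names are not MySQL's.

```python
# Simplified analogy of the Abstracted Storage Engine Interface.
# The caller programs against the abstract class; the object's actual
# type decides which engine-specific method runs.
from abc import ABC, abstractmethod

class StorageEngine(ABC):
    """The abstracted interface the table operation modules call."""

    @abstractmethod
    def write_row(self, row):
        ...

class MyISAMEngine(StorageEngine):
    def write_row(self, row):
        return f"MyISAM stored {row!r} (no transaction)"

class InnoDBEngine(StorageEngine):
    def write_row(self, row):
        return f"InnoDB stored {row!r} inside a transaction"

def insert(engine: StorageEngine, row):
    # The caller thinks it is dealing with an abstracted engine;
    # polymorphism routes the call to the correct implementation.
    return engine.write_row(row)

print(insert(MyISAMEngine(), (1, "a")))
print(insert(InnoDBEngine(), (1, "a")))
```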



Detailed Look at the Core Modules
Connection Manager: The Connection Manager listens for incoming connections from clients,
and dispatches the requests to the Thread Manager.

Connection Thread: The Connection Thread is the heart of client request processing; it handles
all communication with the client once the connection is established.



Thread Manager: The Thread Manager is responsible for keeping track of threads and for
making sure a thread is allocated to handle the connection from a client.

User Authentication Module: The User Authentication Module authenticates the connecting
user and initializes the structures and variables containing the information on his level of
privileges.

Command Dispatcher: The Command Dispatcher is responsible for directing requests to the
lower-level modules that will know how to resolve them.

Parser: The Parser is responsible for parsing queries and generating a parse tree.

Query Cache Module : The Query Cache Module caches query results, and tries to short-circuit
the execution of queries by delivering the cached result whenever possible.
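A toy model of this short-circuiting behaviour is shown below. The names are hypothetical; the real query cache is byte-exact about query text and applies many more cacheability rules (privileges, non-deterministic functions, and so on).

```python
# Toy query cache: results are keyed by the exact query text and
# invalidated whenever a table they depend on is modified.
# Illustration only, not MySQL's actual implementation.

class QueryCache:
    def __init__(self):
        self._results = {}  # query text -> (tables used, result)

    def lookup(self, query):
        hit = self._results.get(query)
        return hit[1] if hit else None

    def store(self, query, tables, result):
        # Only SELECT-type queries are candidates for caching.
        if query.lstrip().upper().startswith("SELECT"):
            self._results[query] = (frozenset(tables), result)

    def invalidate(self, table):
        # Any cached result that used the modified table becomes stale.
        self._results = {q: v for q, v in self._results.items()
                         if table not in v[0]}

cache = QueryCache()
cache.store("SELECT * FROM t1", ["t1"], [(1, "a")])
print(cache.lookup("SELECT * FROM t1"))  # hit: the cached result
cache.invalidate("t1")                   # e.g. after an INSERT into t1
print(cache.lookup("SELECT * FROM t1"))  # miss: None
```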

Optimizer : The Optimizer is responsible for creating the best strategy to answer the query, and
executing it to deliver the result to the client.

Table Modification Modules : This collection of modules is responsible for operations such as
creating, deleting, renaming, dropping, updating, or inserting into a table.

Table Maintenance Module : The Table Maintenance Module is responsible for table
maintenance operations such as check, repair, back up, restore, optimize and analyze. This
module comes into the picture when a table is corrupted.

Status Reporting Module : The Status Reporting Module is responsible for answering queries
about server configuration settings, performance tracking variables, table structure information,
replication progress, condition of the table cache, and other things.

Replication Master Module : The Replication Master Module is responsible for the replication
functionality on the master. The most common operation for this module is to deliver a continuous
feed of replication log events to the slave upon request.

Replication Slave Module : The Replication Slave Module is responsible for the replication
functionality of the slave. The role of the slave is to retrieve updates from the master, and apply
them on the slave.

Table Manager : The Table Manager is responsible for creating, reading, and modifying the
table definition files (.frm extension), maintaining a cache of table descriptors called table cache,
and managing table-level locks.



3. TYPES OF ENGINES IN MYSQL

Data in MySQL is stored in files (or memory) using a variety of different techniques. Each of
these techniques employs different storage mechanisms, indexing facilities, locking levels and
ultimately provide a range of different functions and capabilities. By choosing a different
technique you can gain additional speed or functionality benefits that will improve the overall
functionality of your application.

For example, if you work with a large amount of temporary data, you may want to make use of
the MEMORY storage engine, which stores all of the table data in memory. Alternatively, you
may want a database that supports transactions (to ensure data resilience).

Each of these different techniques and suites of functionality within the MySQL system is
referred to as a storage engine (also known as a table type). By default, MySQL comes with a
number of different storage engines pre-configured and enabled in the MySQL server. You can
select the storage engine to use on a server, database and even table basis, providing you with the
maximum amount of flexibility when it comes to choosing how your information is stored, how
it is indexed and what combination of performance and functionality you want to use with your
data.

There are a number of ways you can specify the storage engine to use. The simplest method, if
you have a preference for an engine type that fits most of your database needs, is to set the
default engine type within the MySQL configuration file (using the storage_engine option) or
when starting the database server.
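For example, a my.cnf fragment along these lines sets the default engine. Treat this as a sketch: the exact option spelling has varied across MySQL versions (storage_engine vs. default-storage-engine), so check the manual for your release.

```ini
# Sketch only: option names vary by MySQL version.
[mysqld]
default-storage-engine = InnoDB
```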

MyISAM

The MyISAM engine is the default engine in most MySQL installations and is a derivative of the
original ISAM engine type supported in the early versions of the MySQL system. The engine
provides the best combination of performance and functionality, although it lacks transaction
capabilities (use the InnoDB or BDB engines) and uses table-level locking.

Unless you need transactions, there are few databases and applications that cannot effectively be
stored using the MyISAM engine. However, very high-performance applications where there are
large numbers of data inserts/updates compared to the number of reads can cause performance
problems for the MyISAM engine. It was originally designed with the idea that more than 90%
of the database access to a MyISAM table would be reads, rather than writes.

With table-level locking, a database with a high number of row inserts or updates becomes a
performance bottleneck as the table is locked while data is added. Luckily this limitation also
works well within the restrictions of a non-transactional database.



MyISAM Summary

Name MyISAM
Introduced v3.23
Default install Yes
Data limitations None
Index limitations 64 indexes per table (32 pre 4.1.2); Max 16 columns per index
Transaction support No
Locking level Table

MERGE


The MERGE engine type allows you to combine a number of identical tables into a single table.
You can then execute queries that return the results from multiple tables as if they were just one
table. Each table merged must have the same table definition.

The MERGE table is particularly effective if you are logging data directly or indirectly into a
MySQL database and create an individual table per day, week or month and want to be able to
produce aggregate queries from multiple tables. There are limitations to this, however: you can
only merge MyISAM tables, and the identical table definition restriction is strictly enforced.
Although this seems like a major issue, if you had used one of the other table types (for example
InnoDB) then the merge probably wouldn't be required.
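As an analogy, the effect of a MERGE table is to present several identically defined tables as one scan. The sketch below illustrates the idea in Python (purely conceptual; this is not how the engine is implemented):

```python
# Analogy for a MERGE table: several identically defined "tables"
# (here, one per month of log data) queried as if they were one.
from itertools import chain

log_2024_01 = [("2024-01-03", "GET /"), ("2024-01-17", "GET /a")]
log_2024_02 = [("2024-02-02", "GET /b")]

def merged(*tables):
    # A MERGE table presents the union of its underlying tables.
    return chain(*tables)

# An "aggregate query" that spans the merged months:
hits = sum(1 for _ in merged(log_2024_01, log_2024_02))
print(hits)  # 3
```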

MERGE Summary

Name MERGE
Introduced v3.23.25
Default install Yes
Data limitations Underlying tables must be MyISAM
Index limitations N/A
Transaction support No
Locking level Table



MEMORY

The MEMORY storage engine (previously known as the HEAP storage engine) stores all data in
memory; once the MySQL server has been shut down, any information stored in a MEMORY
database will have been lost. However, the format of the individual tables is kept and this enables
you to create temporary tables that can be used to store information for quick access without
having to recreate the tables each time the database server is started.

Long term use of the MEMORY storage engine is not generally a good idea, because the data
could so easily be lost. However, providing you have the RAM to support the databases you are
working on, use of MEMORY based tables is an efficient way of running complex queries on
large data sets and benefitting from the performance gains.

The best way to use MEMORY tables is to use a SELECT statement to select a larger data set
from your original, disk-based, tables and then sub-analyze that information for the specific
elements you want. I've used this technique in the past to extract a month's worth of web log data,
actually from tables using the ARCHIVE storage engine, and then run the queries on specific
URLs, sites and other focus points.
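The workflow is analogous to the sketch below, which uses Python's sqlite3 in-memory database purely as a stand-in to illustrate the pattern: copy a working set out of the big disk-based table, then sub-analyse the in-memory copy.

```python
# Illustration of the MEMORY-table workflow (the pattern, not the
# product): select a working set from the large disk table, load it
# into an in-memory table, then run detailed queries there.
import sqlite3

disk = sqlite3.connect(":memory:")  # stand-in for the disk-based server
disk.execute("CREATE TABLE weblog (url TEXT, month TEXT)")
disk.executemany("INSERT INTO weblog VALUES (?, ?)",
                 [("/a", "01"), ("/b", "01"), ("/a", "02")])

# Step 1: SELECT the month of interest into a "memory" working table.
mem = sqlite3.connect(":memory:")
rows = disk.execute("SELECT url FROM weblog WHERE month = '01'").fetchall()
mem.execute("CREATE TABLE working (url TEXT)")
mem.executemany("INSERT INTO working VALUES (?)", rows)

# Step 2: sub-analyse the working set for specific focus points.
(count,) = mem.execute(
    "SELECT COUNT(*) FROM working WHERE url = '/a'").fetchone()
print(count)  # 1
```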

MEMORY Summary

Name MEMORY (HEAP, deprecated)
Introduced v1.0 (only known as MEMORY since 4.1)
Default install Yes
Data limitations BLOB and TEXT types not supported
Index limitations None
Transaction support No
Locking level Table

EXAMPLE

The EXAMPLE engine is actually a programming example of a storage engine that can be used
as the basis for other engines within the MySQL system. It does not support data inserts and isn't
a practical engine for any form of database access. It is, however, a good guide to how to
develop your own storage engine, and is therefore an effective guide for programmers.

EXAMPLE Summary

Name EXAMPLE
Introduced v4.1.3
Default install No
Data limitations N/A
Index limitations N/A



Transaction support N/A
Locking level N/A

FEDERATED

The FEDERATED storage engine (added in MySQL 5.0.3) enables you to access data from a
remote MySQL database (other databases may be supported in the future) as if it were a local
database. In effect, the MySQL server acts as a proxy to the remote server, using the MySQL
client access library to connect to the remote host, execute queries and then reformat the data
into the localized format.

In essence, it is a way for a server, rather than a client, to access a remote database and can be an
effective way of combining data from multiple hosts or of copying specific data from remote
databases into local tables without the use of data exports and imports.

FEDERATED Summary

Name FEDERATED
Introduced v5.0
Default install No
Data limitations Limited by remote database
Index limitations N/A
Transaction support No
Locking level No

ARCHIVE

The ARCHIVE storage engine supports only the INSERT and SELECT statements, but does
support most of the MySQL field types. Information stored in an ARCHIVE storage engine table
is compressed and cannot be modified and so ARCHIVE tables are perfect for storing log data
(which you don't want to be able to change) or information that is no longer in active use (for
example, old invoicing or sales data).


While the information is stored very efficiently, care should be taken when accessing data stored in
the ARCHIVE tables. Because the information is compressed, selects have to read the entire
table, and that also means decompressing the information. This can obviously increase the time
taken to perform complex searches and retrievals. If you are performing a large number of
queries on the information in these tables it may be easier to temporarily copy your data to
another, uncompressed, data type such as MyISAM.



ARCHIVE Summary

Name ARCHIVE
Introduced v4.1.3
Default install No
Data limitations Data can only be inserted (no updates)
Index limitations N/A
Transaction support No
Locking level N/A

CSV

The CSV storage engine stores data not in a binary format, but in the form of a CSV (Comma
Separated Values) file. Because of this, there are limitations to the data stored. It is not an
efficient method for storing large volumes of data, or larger data types like BLOB, although such
types are supported. There is also no indexing. However, because the data is stored in the CSV
format it is exceedingly portable; these CSV files generated can easily be imported into many
different software packages, including Excel, Open Office and database systems like Access or
FileMaker.

In general, the CSV engine is impractical as a general database engine. It is, however, probably
the most effective and easiest method for data exchange. What makes it so convenient is that we
can use SELECT and INSERT statements to create the database, which in turn means that we
can easily produce CSV files based on queries of other data.

With some careful work, the CSV storage engine can also be used as an effective way of getting
information into MySQL. Here, you can create the tables first, shut down the MySQL server,
copy over CSV files that you have exported from Excel, Access or another database, and you can
then import the data and copy it over to MyISAM or InnoDB tables.
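Because a CSV table's data file is plain comma-separated text, any CSV-aware tool can produce or consume it. A sketch of the round trip follows; the layout here is generic CSV, not a byte-exact copy of MySQL's .CSV data file.

```python
# A CSV table's data file is ordinary comma-separated text, so the
# standard csv module can produce or consume it. The layout here is
# illustrative, not an exact copy of MySQL's on-disk files.
import csv, io

# "Export" some rows the way a spreadsheet or CSV table would hold them.
buf = io.StringIO()
csv.writer(buf).writerows([("1", "alice"), ("2", "bob")])

# "Import": read them back, ready for loading into a MyISAM/InnoDB table.
rows = list(csv.reader(io.StringIO(buf.getvalue())))
print(rows)  # [['1', 'alice'], ['2', 'bob']]
```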

CSV Summary

Name CSV
Introduced v4.1.4
Default install No
Data limitations None
Index limitations Indexing is not supported
Transaction support No
Locking level Table



BLACKHOLE

Strange though it may seem, the BLACKHOLE engine does not actually store any data.
Although you can create tables and indexes, all SQL statements that would add or update
information to the database are executed without actually writing any data. The database
structure is retained, however, and you can create any indexes on the (non-existent) information
that you want.

Although this seems like a futile exercise, it does allow you to test out database structures and
play with table definitions without actually creating any data. Even more useful, however, is that
SQL statements on BLACKHOLE databases are written to the binary log, and therefore are
replicated to slave databases.

You can use this functionality to update one or more slaves directly without writing any local
data. There are a number of potential uses for this functionality. One such use I have employed
in the past is to write log data to a BLACKHOLE table, which is then echoed to two slaves. Because
the write is instantaneous (there are no local disk files or indexes to update), I can maintain a
high logging rate, and rely on the binary logging and slave replication to distribute the data. An
extension of this, as suggested in the MySQL manual, is to use filtering to control the
distribution of records to the slaves.

BLACKHOLE Summary

Name BLACKHOLE
Introduced v4.1.11
Default install No
Data limitations No data is stored, but statements are written to the binary log (and therefore distributed to slave databases)
Index limitations N/A
Transaction support No
Locking level N/A

ISAM

The ISAM storage engine was the original engine type available with versions of MySQL up
until MySQL 3.23, when the MyISAM storage engine was introduced. ISAM has a number of
different limitations that make it impractical as a database engine. These include the storage
format, which is native to the platform (and therefore not portable between systems), a maximum
table size of just 4GB and limited text searching facilities. Indexes are also more limited.
MyISAM is supported on the same platforms as ISAM, and provides better compatibility,
portability and performance.



ISAM is included only for backwards compatibility; you certainly shouldn't use ISAM for new
databases. Use MyISAM instead.

ISAM Summary

Name ISAM
Introduced v1.0
Default install Yes
Data limitations Limited maximum database size (4GB)
Index limitations Maximum 16 indexes per table, 16 parts per key
Transaction support No
Locking level Table

Berkeley DB (BDB)

The Berkeley DB (or BDB) engine is based on the technology provided by the Berkeley DB
storage system developed by SleepyCat software. BDB is a hash based storage mechanism, and
the keys to the hash values are stored very efficiently. This makes the recovery of information--
especially when accessed directly using a unique key--incredibly quick, and by far the quickest of
the available database types. Recovering full records is even quicker if the data is short
enough to be stored with the unique key (i.e., under 1024 bytes long). BDB is also one of only
two types of storage engine that support transactions.


BDB is, however, limited in other ways. Although it uses page locking (locking only 8192 bytes
of a table, rather than the entire table, during an update), this can cause problems if you are
performing a large number of updates in the same page (for example, inserting many rows).
There is unfortunately no way round this. Sequential data access--for example, retrieving a large
quantity of rows matching non-indexed data--can be a lot slower because the data needs to be
scanned row by row.

Recovery of information with BDB tables can also be a problem. Data in BDB is stored in a
combination of the key index, the data file and binary data logs. A loss of data in any of these
sections, even just one of the data logs, can make the data in the database totally unrecoverable.

Where BDB shines therefore is in locations where you can access specific blocks of data by a
unique key that does not frequently change. I've successfully used BDB tables in the past to store
lookup information for data like categories or option lists where the small size and unique key
structure make it quick and easy to recover information that is not often changed from its initial
definition.



Berkeley DB (BDB) Summary

Name BDB
Introduced v3.23.34a
Default install No
Data limitations None
Index limitations Max 31 indexes per table, 16 columns per index;max key size 1024 bytes
Transaction support Yes
Locking level Page (8192 bytes)

InnoDB

The InnoDB Engine is provided by Innobase and supports all of the database functionality (and
more) of the MyISAM engine, and also adds full transaction capabilities (with full ACID (Atomicity,
Consistency, Isolation, and Durability) compliance) and row-level locking of data.

The key to the InnoDB system is a database, caching and indexing structure where both indexes
and data are cached in memory as well as being stored on disk. This enables very fast recovery,
and works even on very large data sets. By supporting row-level locking, you can add data to an
InnoDB table without the engine locking the table with each insert, and this speeds up both the
recovery and storage of information in the database.
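The difference in lock granularity can be sketched abstractly: under table-level locking, two writers touching different rows of the same table still conflict; under row-level locking they do not. The sketch below is purely conceptual, not InnoDB's actual lock manager.

```python
# Conceptual contrast of table-level vs row-level lock granularity.
# Not InnoDB's actual lock manager; lock IDs are (table, row) pairs.

def conflicts(held, wanted, granularity):
    if granularity == "table":
        # MyISAM-style: any two writes to the same table conflict.
        return held[0] == wanted[0]
    # InnoDB-style: writes conflict only on the same (table, row).
    return held == wanted

w1 = ("orders", 1)  # one writer updating row 1
w2 = ("orders", 2)  # a concurrent writer updating row 2

print(conflicts(w1, w2, "table"))  # True  -> second writer must wait
print(conflicts(w1, w2, "row"))    # False -> both proceed concurrently
```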

As with MyISAM, there are few data types that cannot effectively be stored in an InnoDB
database. In fact, there are no significant reasons why you shouldn't always use an InnoDB
database. The management overhead for InnoDB is slightly more onerous, and getting the
optimization right for the sizes of in-memory and on disk caches and database files can be
complex at first. However, it also means that you get more flexibility over these values and once
set, the performance benefits can easily outweigh the initial time spent. Alternatively, you can let
MySQL manage this automatically for you.

If you are willing (and able) to configure the InnoDB settings for your server, then I would
recommend that you spend the time to optimize your server configuration and then use the
InnoDB engine as the default.

InnoDB Summary

Name InnoDB
Introduced v3.23 (source only), v4.0 (source and binary)
Default install No
Data limitations None
Index limitations None
Transaction support Yes (ACID compliant)
Locking level Row



As you may have been able to conclude from the above summary of the different storage engines
available, there are few reasons not to use either the MyISAM or InnoDB engine types.
MyISAM will do in most situations, but if you have a high number of updates or inserts
compared to your searches and selects, then you will get better performance out of the InnoDB
engine.

But if both MyISAM and InnoDB are so great, why even consider using the other engine
types? Simply because they provide specific functionality that is not otherwise available. The
MERGE engine is an exceedingly effective way of querying data from multiple, identically
defined, tables. The MEMORY engine is the best way to perform a large number of complex
queries on data that would be inefficient to search on a disk based engine. The CSV engine is a
great way to export data that could be used in other applications. BDB is excellent for data that
has a unique key that is frequently accessed.
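To make the choice concrete, the engine is selected per table at creation time and can be changed later. The table and column names below are hypothetical examples:

```sql
-- Read-mostly data: MyISAM is usually sufficient
CREATE TABLE page_views (url VARCHAR(255), hits INT) ENGINE = MyISAM;

-- Heavily updated, transactional data: InnoDB
CREATE TABLE accounts (id INT PRIMARY KEY, balance DECIMAL(10,2)) ENGINE = InnoDB;

-- Scratch data held only in memory
CREATE TABLE session_cache (token CHAR(32), payload VARCHAR(255)) ENGINE = MEMORY;

-- Convert an existing table to another engine
ALTER TABLE page_views ENGINE = InnoDB;
```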



4. TYPES OF LOGS IN MYSQL

MySQL Server has several logs that can help you find out what activity is taking place.

Log Type Information Written to Log


Error log Problems encountered starting, running, or stopping mysqld
General query log Established client connections and statements received from clients
Binary log Statements that change data (also used for replication)
Relay log Data changes received from a replication master server
Slow query log Queries that took more than long_query_time seconds to execute

By default, no logs are enabled (except the error log on Windows). The following log-specific
sections provide information about the server options that enable logging.
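As a sketch, the logs above can be switched on from an option file; the option names below follow MySQL 5.1 (earlier servers use --log and --log-slow-queries instead of the *_log variables), and the file paths are illustrative:

```ini
[mysqld]
log-error           = /var/log/mysql/error.log
general_log         = 1
general_log_file    = /var/log/mysql/general.log
log-bin             = mysql-bin
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 2
```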

The Error Log


The error log contains information indicating when mysqld was started and stopped and also any
critical errors that occur while the server is running. If mysqld notices a table that needs to be
automatically checked or repaired, it writes a message to the error log.

On some operating systems, the error log contains a stack trace if mysqld dies. The trace can be
used to determine where mysqld died. See MySQL Internals: Porting to Other Systems.

In addition, on Windows, events and error messages are written to the Windows Event Log
within the Application log. Entries marked as Warning and Note are written to the Event Log,
but informational messages (such as information statements from individual storage engines) are
not copied to the Event Log. The log entries have a source of MySQL. You cannot disable
writing information to the Windows Event Log.

If you flush the logs using FLUSH LOGS or mysqladmin flush-logs and mysqld is writing the
error log to a file (for example, if it was started with the --log-error option), it renames the
current log file with the suffix -old, then creates a new empty log file. Be aware that a second
log-flushing operation thus causes the original error log file to be lost unless you save it under a
different name.

The General Query Log

The general query log is a general record of what mysqld is doing. The server writes information
to this log when clients connect or disconnect, and it logs each SQL statement received from
clients. The general query log can be very useful when you suspect an error in a client and want
to know exactly what the client sent to mysqld.



mysqld writes statements to the query log in the order that it receives them, which might differ
from the order in which they are executed. This logging order contrasts to the binary log, for
which statements are written after they are executed but before any locks are released. (Also, the
query log contains all statements, whereas the binary log does not contain statements that only
select data.)

The Binary Log


The binary log contains “events” that describe database changes such as table creation operations
or changes to table data. It also contains events for statements that potentially could have made
changes (for example, a DELETE which matched no rows). The binary log also contains
information about how long each statement took that updated data. The binary log has two
important purposes:

 For replication, the binary log on a master replication server provides a record of the data
changes to be sent to slave servers. The master server sends the events contained in its
binary log to its slaves, which execute those events to make the same data changes that
were made on the master. See Section 16.2, “Replication Implementation”.
 Certain data recovery operations require use of the binary log. After a backup has been
restored, the events in the binary log that were recorded after the backup was made are
re-executed. These events bring databases up to date from the point of the backup. See
Section 7.5, “Point-in-Time (Incremental) Recovery Using the Binary Log”.

The binary log has replaced the old update log, which is no longer available as of MySQL 5.0.
The binary log contains all information that is available in the update log in a more efficient
format and in a manner that is transaction-safe. If you are using transactions, you must use the
MySQL binary log for backups instead of the old update log.

The binary log is not used for statements such as SELECT or SHOW that do not modify data.

The Slow Query Log

The slow query log consists of SQL statements that took more than long_query_time seconds to
execute. The minimum and default values of long_query_time are 1 and 10, respectively.

The time to acquire the initial table locks is not counted as execution time. mysqld writes a
statement to the slow query log after it has been executed and after all locks have been released,
so log order might differ from execution order.



5. TYPES OF BUFFERS IN MYSQL

KEY BUFFER (OR ) KEY CACHE:


To minimize disk I/O, the MyISAM storage engine exploits a strategy that is used by many
database management systems. It employs a cache mechanism to keep the most frequently
accessed table blocks in memory:

 For index blocks, a special structure called the key cache (or key buffer) is maintained.
The structure contains a number of block buffers where the most-used index blocks are
placed.
 For data blocks, MySQL uses no special cache. Instead it relies on the native operating
system file system cache.

This section first describes the basic operation of the MyISAM key cache. Then it discusses
features that improve key cache performance and that enable you to better control cache
operation:

 Multiple sessions can access the cache concurrently.


 You can set up multiple key caches and assign table indexes to specific caches.

To control the size of the key cache, use the key_buffer_size system variable. If this variable is
set equal to zero, no key cache is used. The key cache also is not used if the key_buffer_size
value is too small to allocate the minimal number of block buffers (8).
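In SQL, the key cache is sized like any other system variable, and additional named caches can be created and assigned to particular indexes. The table name and sizes here are hypothetical:

```sql
-- Size the default key cache (bytes)
SET GLOBAL key_buffer_size = 64 * 1024 * 1024;

-- Create a second, named key cache and dedicate a table's indexes to it
SET GLOBAL hot_cache.key_buffer_size = 16 * 1024 * 1024;
CACHE INDEX orders IN hot_cache;

-- Optionally preload the indexes into their assigned cache
LOAD INDEX INTO CACHE orders;
```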

When the key cache is not operational, index files are accessed using only the native file system
buffering provided by the operating system. (In other words, table index blocks are accessed
using the same strategy as that employed for table data blocks.)

An index block is a contiguous unit of access to the MyISAM index files. Usually the size of an
index block is equal to the size of nodes of the index B-tree. (Indexes are represented on disk
using a B-tree data structure. Nodes at the bottom of the tree are leaf nodes. Nodes above the leaf
nodes are nonleaf nodes.)

All block buffers in a key cache structure are the same size. This size can be equal to, greater
than, or less than the size of a table index block. Usually one of these two values is a multiple of
the other.

When data from any table index block must be accessed, the server first checks whether it is
available in some block buffer of the key cache. If it is, the server accesses data in the key cache
rather than on disk. That is, it reads from the cache or writes into it rather than reading from or
writing to disk. Otherwise, the server chooses a cache block buffer containing a different table



index block (or blocks) and replaces the data there by a copy of required table index block. As
soon as the new index block is in the cache, the index data can be accessed.

If it happens that a block selected for replacement has been modified, the block is considered
“dirty.” In this case, prior to being replaced, its contents are flushed to the table index from
which it came.

Usually the server follows an LRU (Least Recently Used) strategy: When choosing a block for
replacement, it selects the least recently used index block. To make this choice easier, the key
cache module maintains all used blocks in a special list (LRU chain) ordered by time of use.
When a block is accessed, it is the most recently used and is placed at the end of the list. When
blocks need to be replaced, blocks at the beginning of the list are the least recently used and
become the first candidates for eviction.
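The LRU chain just described can be sketched in a few lines of Python. This is a toy model of the strategy (no dirty-block flushing, no concurrency), not MyISAM's actual implementation:

```python
from collections import OrderedDict

class KeyCache:
    """Toy LRU key cache: the least recently used block is evicted first."""

    def __init__(self, n_buffers):
        self.n_buffers = n_buffers
        self.blocks = OrderedDict()  # block_id -> block data; last = most recent

    def get(self, block_id, read_from_disk):
        if block_id in self.blocks:
            # Cache hit: move the block to the "most recently used" end.
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        if len(self.blocks) >= self.n_buffers:
            # Evict the least recently used block (a real cache would first
            # flush its contents back to the index file if it were dirty).
            self.blocks.popitem(last=False)
        self.blocks[block_id] = read_from_disk(block_id)
        return self.blocks[block_id]
```

For example, with two buffers, reading blocks 1, 2, 1, 3 evicts block 2, because block 1 was touched more recently.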

BUFFER CACHE:

InnoDB maintains a buffer pool for caching data and indexes in memory. InnoDB manages the
pool as a list, using a least recently used (LRU) algorithm incorporating a midpoint insertion
strategy. When room is needed to add a new block to the pool, InnoDB evicts the least recently
used block and adds the new block to the middle of the list. The midpoint insertion strategy in
effect causes the list to be treated as two sub lists:

 At the head, a sub list of “new” (or “young”) blocks that have been recently used.
 At the tail, a sub list of “old” blocks that are less recently used.

As a result of the algorithm, the new sub list contains blocks that are heavily used by queries.
The old sub list contains less-used blocks, and candidates for eviction are taken from this sub list.

The LRU algorithm operates as follows by default:

 3/8 of the buffer pool is devoted to the old sub list.


 The midpoint of the list is the boundary where the tail of the new sub list meets the head
of the old sub list.
 When InnoDB reads a block into the buffer pool, it initially inserts it at the midpoint (the
head of the old sub list). A block can be read in as a result of two types of read requests:
Because it is required (for example, to satisfy query execution), or as part of read-ahead
performed in anticipation that it will be required.
 The first access to a block in the old sub list makes it “young”, causing it to move to the
head of the buffer pool (the head of the new sub list). If the block was read in because it
was required, the first access occurs immediately and the block is made young. If the
block was read in due to read-ahead, the first access does not occur immediately (and
might not occur at all before the block is evicted).
 As long as no accesses occur for a block in the pool, it “ages” by moving toward the tail
of the list. Blocks in both the new and old sub lists age as other blocks are made new.
Blocks in the old sub list also age as blocks are inserted at the midpoint. Eventually, a



block that remains unused for long enough reaches the tail of the old sub list and is
evicted.
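The midpoint-insertion behavior described above can be sketched as a toy Python model. It ignores innodb_old_blocks_time and read-ahead, and is only meant to show why a one-time scan cannot flood the new sub list:

```python
from collections import deque

class MidpointLRU:
    """Toy midpoint-insertion LRU: a "new" sub list plus an "old" sub list."""

    def __init__(self, capacity, old_pct=37):
        self.capacity = capacity
        self.old_target = max(1, capacity * old_pct // 100)
        self.new = deque()  # index 0 = head of the new sub list
        self.old = deque()  # last index = tail of the old sub list (evicted first)

    def read(self, block):
        if block in self.new:
            return  # already young; the toy model leaves its position alone
        if block in self.old:
            # First access to an old block makes it young.
            self.old.remove(block)
            self.new.appendleft(block)
        else:
            if len(self.new) + len(self.old) >= self.capacity:
                self.old.pop()  # evict from the tail of the old sub list
            self.old.appendleft(block)  # insert at the midpoint
        self._age()

    def _age(self):
        # Let blocks age from the tail of "new" into the head of "old"
        # until the old sub list is back at its target share of the pool.
        while len(self.old) < self.old_target and self.new:
            self.old.appendleft(self.new.pop())
```

A scan that touches each block only once fills nothing but the old sub list, so a block previously made young survives the scan.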

In the default operation of the buffer pool, a block when read in is loaded at the midpoint and
then moved immediately to the head of the new sub list as soon as an access occurs. In the case
of a table scan (such as performed for a mysqldump operation), each block read by the scan
ends up moving to the head of the new sub list because multiple rows are accessed from each
block. This occurs even for a one-time scan, where the blocks are not otherwise used by other
queries. Blocks may also be loaded by the read-ahead background thread and then moved to the
head of the new sub list by a single access. These effects can be disadvantageous because they
push blocks that are in heavy use by other queries out of the new sub list to the old sub list where
they become subject to eviction.

Several InnoDB system variables control the size of the buffer pool and let you tune the LRU
algorithm:

 innodb_buffer_pool_size

Specifies the size of the buffer pool. If your buffer pool is small and you have sufficient
memory, making the pool larger can improve performance by reducing the amount of
disk I/O needed as queries access InnoDB tables.

 innodb_old_blocks_pct

Specifies the approximate percentage of the buffer pool that InnoDB uses for the old
block sub list. The range of values is 5 to 95. The default value is 37 (that is, 3/8 of the
pool).

 innodb_old_blocks_time

Specifies how long in milliseconds (ms) a block inserted into the old sub list must stay
there after its first access before it can be moved to the new sub list. The default value is
0: A block inserted into the old sub list moves immediately to the new sub list the first
time it is accessed, no matter how soon after insertion the access occurs. If the value is
greater than 0, blocks remain in the old sub list until an access occurs at least that many
ms after the first access. For example, a value of 1000 causes blocks to stay in the old sub
list for 1 second after the first access before they become eligible to move to the new sub
list.

innodb_old_blocks_pct and innodb_old_blocks_time are available as of MySQL 5.1.41, but only
for InnoDB Plugin, not the built-in version of InnoDB.

By setting innodb_old_blocks_time greater than 0, you can prevent one-time table scans from
flooding the new sub list with blocks used only for the scan. Rows in a block read in for a scan
are accessed rapidly many times in succession, but the block is unused after that. If
innodb_old_blocks_time is set to a value greater than the block scan time, the block is not moved



to the new sub list during the table scan. Instead, it remains in the old sub list and ages to the tail
of the list to be evicted quickly. This way, blocks used only for a one-time scan do not act to the
detriment of heavily used blocks in the new sub list.

innodb_old_blocks_time can be set at runtime, so you can change it temporarily while
performing operations such as table scans and dumps to prevent them from flooding the new sub
list:

SET GLOBAL innodb_old_blocks_time = 1000;
... perform queries that scan tables ...
SET GLOBAL innodb_old_blocks_time = 0;

This strategy does not apply if your intent is to fill the buffer pool with a table's content. For
example, you might perform a table or index scan at server startup or during benchmarking or
testing specifically to “warm up” the buffer pool. In this case, leaving innodb_old_blocks_time
set to 0 accomplishes the goal of loading the scanned blocks into the new sub list.

The output from the InnoDB Standard Monitor contains several fields in the BUFFER POOL
AND MEMORY section that pertain to operation of the buffer pool LRU algorithm:

 Old database pages: The number of pages in the old sub list of the buffer pool.
 Pages made young, not young: The number of old pages that were moved to the head of
the buffer pool (the new sub list), and the number of pages that have remained in the old
sub list without being made new.
 youngs/s, non-youngs/s: The number of accesses to old pages that have resulted in making
them young or not. This metric differs from that of the previous item in two ways. First, it
relates only to old pages. Second, it is based on the number of accesses to pages and not the
number of pages. (There can be multiple accesses to a given page, all of which are
counted.)
 young-making rate: Hits that cause blocks to move to the head of the buffer pool.
 not: Hits that do not cause blocks to move to the head of the buffer pool (due to the delay
not being met).

The young-making rate and not rate will not normally add up to the overall buffer pool hit rate.
Hits for blocks in the old sub list cause them to move to the new sub list, but hits to blocks in the
new sub list cause them to move to the head of the list only if they are a certain distance from the
head.

The preceding information from the Monitor can help you make LRU tuning decisions:

 If you see very low youngs/s values when you do not have large scans going on, that
indicates that you might need to either reduce the delay time or increase the percentage
of the buffer pool used for the old sub list. Increasing the percentage makes the old sub
list larger, so blocks in that sub list take longer to move to the tail and be evicted. This
increases the likelihood that they will be accessed again and be made young.
 If you do not see a lot of non-youngs/s when you are doing large table scans (and lots of
youngs/s), you will want to tune your delay value to be larger.



The MySQL Query Cache

The query cache stores the text of a SELECT statement together with the corresponding result
that was sent to the client. If an identical statement is received later, the server retrieves the
results from the query cache rather than parsing and executing the statement again. The query
cache is shared among sessions, so a result set generated by one client can be sent in response to
the same query issued by another client.

The query cache can be useful in an environment where you have tables that do not change very
often and for which the server receives many identical queries. This is a typical situation for
many Web servers that generate many dynamic pages based on database content.

The query cache does not return stale data. When tables are modified, any relevant entries in the
query cache are flushed.
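A sketch of enabling and inspecting the query cache from SQL; the sizes are illustrative and the table name is hypothetical:

```sql
-- Reserve memory for the cache and enable it (can also be set in my.cnf)
SET GLOBAL query_cache_size = 16 * 1024 * 1024;
SET GLOBAL query_cache_type = ON;

-- Bypass the cache for a single statement
SELECT SQL_NO_CACHE COUNT(*) FROM orders;

-- Check hit and insert counters
SHOW STATUS LIKE 'Qcache%';
```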

Note

The query cache does not work in an environment where you have multiple mysqld servers
updating the same MyISAM tables.

Note

As of MySQL 5.1.17, the query cache is used for prepared statements under the conditions
described in Section 8.6.3.1, “How the Query Cache Operates”. Before 5.1.17, the query cache is
not used for prepared statements.

Note

As of MySQL 5.1.63, the query cache is not supported for partitioned tables, and is
automatically disabled for queries involving partitioned tables. The query cache cannot be
enabled for such queries.



6. MYSQL SERVER ADMINISTRATION

MySQL Server (mysqld) is the main program that does most of the work in a MySQL
installation. This section provides an overview of MySQL Server and covers topics that deal
with administering a MySQL installation:

 Server configuration
 The server log files
 Security issues and user-account management
 Management of multiple servers on a single machine

mysqld is the MySQL server. The following discussion covers these MySQL server
configuration topics:

 Startup options that the server supports


 Server system variables
 Server status variables
 How to set the server SQL mode
 The server shutdown process

Server Command Options

When you start the mysqld server, you can specify program options in several ways. The most
common methods are to provide options in an option file or on the command line. However, in
most cases it is desirable to make sure that the server uses the same options each time it runs.
The best way to ensure this is to list them in an option file.

mysqld reads options from the [mysqld] and [server] groups. mysqld_safe reads options from
the [mysqld], [server], [mysqld_safe], and [safe_mysqld] groups. mysql.server reads options
from the [mysqld] and [mysql.server] groups.

An embedded MySQL server usually reads options from the [server], [embedded], and
[xxxxx_SERVER] groups, where xxxxx is the name of the application into which the server is
embedded.

mysqld accepts many command options. For a list, execute mysqld --help. Before MySQL
4.1.1, --help prints the full help message.



Server System Variables

The MySQL server maintains many system variables that indicate how it is configured. Each
system variable has a default value. System variables can be set at server startup using options on
the command line or in an option file. As of MySQL 4.0.3, most of them can be changed
dynamically while the server is running by means of the SET statement, which enables you to
modify operation of the server without having to stop and restart it. You can refer to system
variable values in expressions.

There are several ways to see the names and values of system variables:

 To see the values that a server will use based on its compiled-in defaults and any option
files that it reads, use this command

mysqld --verbose --help

 To see the values that a server will use based on its compiled-in defaults, ignoring the
settings in any option files, use this command

mysqld --no-defaults --verbose --help

 To see the current values used by a running server, use the SHOW VARIABLES
statement.

Using System Variables


Beginning with MySQL 4.0.3, the server maintains two kinds of system variables. Global
variables affect the overall operation of the server. Session variables affect its operation for
individual client connections. A given system variable can have both a global and a session
value. Global and session system variables are related as follows:

 When the server starts, it initializes all global variables to their default values. These
defaults can be changed by options specified on the command line or in an option file.
 The server also maintains a set of session variables for each client that connects. The
client's session variables are initialized at connect time using the current values of the
corresponding global variables. For example, the client's SQL mode is controlled by the
session sql_mode value, which is initialized when the client connects to the value of the
global sql_mode value.
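The global/session relationship can be seen directly with SET and SELECT; sort_buffer_size is just one convenient dynamic variable to experiment with:

```sql
-- Change the global default; only connections made after this inherit it
SET GLOBAL sort_buffer_size = 1000000;

-- Change the value for the current connection only
SET SESSION sort_buffer_size = 2000000;

-- Compare the two values
SELECT @@GLOBAL.sort_buffer_size, @@SESSION.sort_buffer_size;
```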



 The server maintains many status variables that provide information about its operation.
You can view these variables and their values by using the SHOW STATUS statement

mysql> SHOW STATUS;
+--------------------------+------------+
| Variable_name            | Value      |
+--------------------------+------------+
| Aborted_clients          | 0          |
| Aborted_connects         | 0          |
| Bytes_received           | 155372598  |
| Bytes_sent               | 1176560426 |
| Connections              | 30023      |
...

Server SQL Modes

The MySQL server can operate in different SQL modes, and (as of MySQL 4.1) can apply these
modes differentially for different clients. This capability enables each application to tailor the
server's operating mode to its own requirements.

Modes define what SQL syntax MySQL should support and what kind of data validation checks
it should perform. This makes it easier to use MySQL in different environments and to use
MySQL together with other database servers.

You can set the default SQL mode by starting mysqld with the --sql-mode="modes" option, or
by using sql-mode="modes" in my.cnf (Unix operating systems) or my.ini (Windows). modes is
a list of different modes separated by comma (“,”) characters. The default value is empty (no
modes set). The modes value also can be empty (--sql-mode="" on the command line, or
sql-mode="" in my.cnf on Unix systems or in my.ini on Windows) if you want to clear it explicitly.

Beginning with MySQL 4.1, you can change the SQL mode at runtime by using a SET
[GLOBAL|SESSION] sql_mode='modes' statement to set the sql_mode system value. Setting the
GLOBAL variable requires the SUPER privilege and affects the operation of all clients that
connect from that time on. Setting the SESSION variable affects only the current client. Any
client can change its own session sql_mode value at any time.

You can retrieve the current global or session sql_mode value with the following statements:

SELECT @@GLOBAL.sql_mode;
SELECT @@SESSION.sql_mode;
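For example, the mode can be tightened at runtime. The mode names below are standard MySQL modes, but which combination suits your application is a local decision:

```sql
-- Strict checking for the current connection only
SET SESSION sql_mode = 'STRICT_ALL_TABLES,NO_ZERO_DATE';

-- Stricter behavior for all clients that connect from now on (requires SUPER)
SET GLOBAL sql_mode = 'TRADITIONAL';
```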



The Shutdown Process

The server shutdown process takes place as follows:

1. The shutdown process is initiated.

Server shutdown can be initiated several ways. For example, a user with the
SHUTDOWN privilege can execute a mysqladmin shutdown command. mysqladmin
can be used on any platform supported by MySQL. Other operating system-specific
shutdown initiation methods are possible as well: The server shuts down on Unix when it
receives a SIGTERM signal. A server running as a service on Windows shuts down when
the services manager tells it to. (On Windows, a user with Administrator rights can also
shut down the server using NET STOP service_name, where service_name is the name of
the MySQL service. By default, this is MySQL.)

2. The server creates a shutdown thread if necessary.

Depending on how shutdown was initiated, the server might create a thread to handle the
shutdown process. If shutdown was requested by a client, a shutdown thread is created. If
shutdown is the result of receiving a SIGTERM signal, the signal thread might handle
shutdown itself, or it might create a separate thread to do so. If the server tries to create a
shutdown thread and cannot (for example, if memory is exhausted), it issues a diagnostic
message that appears in the error log:

Error: Can't create thread to kill server

3. The server stops accepting new connections.

To prevent new activity from being initiated during shutdown, the server stops accepting
new client connections. It does this by closing the network connections to which it
normally listens for connections: the TCP/IP port, the Unix socket file, the Windows
named pipe, and shared memory on Windows.

4. The server terminates current activity.

For each thread that is associated with a client connection, the connection to the client is
broken and the thread is marked as killed. Threads die when they notice that they are so
marked. Threads for idle connections die quickly. Threads that currently are processing
statements check their state periodically and take longer to die. For threads that have an
open transaction, the transaction is rolled back. Note that if a thread is updating a
nontransactional table, an operation such as a multiple-row UPDATE or INSERT may
leave the table partially updated, because the operation can terminate before completion.



If the server is a master replication server, threads associated with currently connected
slaves are treated like other client threads. That is, each one is marked as killed and exits
when it next checks its state.

If the server is a slave replication server, the I/O and SQL threads, if active, are stopped
before client threads are marked as killed. The SQL thread is permitted to finish its
current statement (to avoid causing replication problems), and then stops. If the SQL
thread was in the middle of a transaction at this point, the transaction is rolled back.

5. Storage engines are shut down or closed.

At this stage, the table cache is flushed and all open tables are closed.

Each storage engine performs any actions necessary for tables that it manages. For
example, MyISAM flushes any pending index writes for a table. InnoDB flushes its
buffer pool to disk, writes the current LSN to the tablespace, and terminates its own
internal threads.

6. The server exits.



7. MAINTAINING SECURITY IN MYSQL
When thinking about security within a MySQL installation, you should consider a wide range of
possible topics and how they affect the security of your MySQL server and related applications:

 General factors that affect security. These include choosing good passwords, not granting
unnecessary privileges to users, ensuring application security by preventing SQL
injections and data corruption, and others.
 Security of the installation itself. The data files, log files, and all the application files of
your installation should be protected to ensure that they are not readable or writable by
unauthorized parties.
 Access control and security within the database system itself, including the users granted
access to the databases, and the views and stored programs in use within the database.
 Network security of MySQL and your system. This security is related to the grants for
individual users, but you may also wish to restrict MySQL so that it is available only
locally on the MySQL server host, or to a limited set of other hosts.
 Ensure that you have adequate and appropriate backups of your database files,
configuration and log files. Also be sure that you have a recovery solution in place and
test that you are able to successfully recover the information from your backups.

When running MySQL, follow these guidelines:

 Do not ever give anyone (except MySQL root accounts) access to the user table in the
mysql database! This is critical.

Learn how the MySQL access privilege system works. Use the GRANT and REVOKE
statements to control access to MySQL. Do not grant more privileges than necessary. Never
grant privileges to all hosts.

Checklist:

o Try mysql -u root. If you are able to connect successfully to the server without being
asked for a password, anyone can connect to your MySQL server as the MySQL
root user with full privileges
o Use the SHOW GRANTS statement to check which accounts have access to what.
Then use the REVOKE statement to remove those privileges that are not necessary.
 Do not store clear text passwords in your database. If your computer becomes
compromised, the intruder can take the full list of passwords and use them.
 Invest in a firewall. This protects you from at least 50% of all types of exploits in any
software. Put MySQL behind the firewall or in a demilitarized zone (DMZ).



Checklist:

o Try to scan your ports from the Internet using a tool such as nmap. MySQL uses
port 3306 by default. This port should not be accessible from untrusted hosts. As a
simple way to check whether your MySQL port is open, try the following
command from some remote machine, where server_host is the host name or IP
address of the host on which your MySQL server runs:

shell> telnet server_host 3306

 Do not trust any data entered by users of your applications. Applications must be
written using proper defensive programming techniques.
 Do not transmit plain (unencrypted) data over the Internet. This information is accessible to
everyone who has the time and ability to intercept it and use it for their own purposes. Instead,
use an encrypted protocol such as SSL or SSH. MySQL supports internal SSL connections.
Another technique is to use SSH port-forwarding to create an encrypted (and compressed) tunnel
for the communication.

End-User Guidelines for Password Security :

MySQL users should use the following guidelines to keep passwords secure.

When you run a client program to connect to the MySQL server, it is inadvisable to specify your
password in a way that exposes it to discovery by other users. The methods you can use to
specify your password when you run client programs are listed here, along with an assessment of
the risks of each method. In short, the safest methods are to have the client program prompt for
the password or to specify the password in a properly protected option file.

 Use a -pyour_pass or --password=your_pass option on the command line. For example:

shell> mysql -u francis -pfrank db_name

This is convenient but insecure. On some systems, your password becomes visible to
system status programs such as ps that may be invoked by other users to display
command lines. MySQL clients typically overwrite the command-line password
argument with zeros during their initialization sequence. However, there is still a brief
interval during which the value is visible.

If your operating environment is set up to display your current command in the title bar
of your terminal window, the password remains visible as long as the command is
running, even if the command has scrolled out of view in the window content area.

 Use the -p or --password option on the command line with no password value specified.
In this case, the client program solicits the password interactively:
 shell> mysql -u francis -p db_name
 Enter password: ********



The “*” characters indicate where you enter your password. The password is not
displayed as you enter it.

It is more secure to enter your password this way than to specify it on the command line
because it is not visible to other users. However, this method of entering a password is
suitable only for programs that you run interactively. If you want to invoke a client from
a script that runs noninteractively, there is no opportunity to enter the password from the
keyboard. On some systems, you may even find that the first line of your script is read
and interpreted (incorrectly) as your password.

 Store your password in an option file. For example, on Unix, you can list your password
in the [client] section of the my.cnf file in your home directory:

[client]
password=your_pass

To keep the password safe, the file should not be accessible to anyone but yourself. To
ensure this, set the file access mode to 400 or 600. For example:

shell> chmod 600 .my.cnf

To name from the command line a specific option file containing the password, use the --
defaults-file=file_name option, where file_name is the full path name to the file. For
example:

shell> mysql --defaults-file=/home/francis/mysql-opts
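A sketch of creating and locking down such an option file follows; the file name and password value are illustrative only:

```shell
# Create a client option file that only its owner can read or write.
optfile="$HOME/.my.cnf.demo"            # demo path; normally $HOME/.my.cnf
printf '[client]\npassword=your_pass\n' > "$optfile"
chmod 600 "$optfile"                    # owner read/write only
stat -c '%a %n' "$optfile"              # GNU stat; should report mode 600
rm -f "$optfile"                        # cleanup for this demonstration
```

With mode 600 in place, other local users cannot read the password, and the mysql client will not complain about a world-readable option file.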

 Store your password in the MYSQL_PWD environment variable. This method of


specifying your MySQL password must be considered extremely insecure and should not
be used. Some versions of ps include an option to display the environment of running
processes. On some systems, if you set MYSQL_PWD, your password is exposed to any
other user who runs ps. Even on systems without such a version of ps, it is unwise to
assume that there are no other methods by which users can examine process
environments.

On Unix, the mysql client writes a record of executed statements to a history file. By default, this
file is named .mysql_history and is created in your home directory. Passwords can appear as
plain text in SQL statements such as CREATE USER, GRANT, and SET PASSWORD, so if
you use these statements, they are logged in the history file. To keep this file safe, use a
restrictive access mode, the same way as described earlier for the my.cnf file.

If your command interpreter is configured to maintain a history, any file in which the commands
are saved will contain MySQL passwords entered on the command line. For example, bash uses
~/.bash_history. Any such file should have a restrictive access mode.
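One way to sidestep the mysql history file entirely is to point it at /dev/null through the MYSQL_HISTFILE environment variable, so statements (including any embedded passwords) are never written to disk:

```shell
# Discard mysql client statement history instead of writing ~/.mysql_history.
export MYSQL_HISTFILE=/dev/null
echo "history file: $MYSQL_HISTFILE"
```

Setting this in a shell startup file makes the suppression permanent for that user.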



Administrator Guidelines for Password Security

Database administrators should use the following guidelines to keep passwords secure.

MySQL stores passwords for user accounts in the mysql.user table. Access to this table should
never be granted to any nonadministrative accounts.

A user who has access to modify the plugin directory (the value of the plugin_dir system
variable) or the my.cnf file that specifies the location of the plugin directory can replace plugins
and modify the capabilities provided by plugins.

Files such as log files to which passwords might be written should be protected.

Passwords and Logging

Passwords can appear as plain text in SQL statements such as CREATE USER, GRANT, and
SET PASSWORD, or statements that invoke the PASSWORD() function. If these statements are
logged by the MySQL server, the passwords become available to anyone with access to the logs.
This applies to the general query log, the slow query log, and the binary log (see Section 5.2,
“MySQL Server Logs”). To guard against unwarranted exposure, log files should be
located in a directory that restricts access to only the server and the database administrator. If
you log to tables in the mysql database, access to the tables should never be granted to any
nonadministrative accounts. Replication slaves store the password for the replication master in
the master.info file. Restrict this file to be accessible only to the database administrator. Database
backups that include tables or log files containing passwords should be protected using a
restricted access mode.

Privileges Provided by MySQL


MySQL provides privileges that apply in different contexts and at different levels of operation:

 Administrative privileges enable users to manage operation of the MySQL server. These
privileges are global because they are not specific to a particular database.
 Database privileges apply to a database and to all objects within it. These privileges can
be granted for specific databases, or globally so that they apply to all databases.
 Privileges for database objects such as tables, indexes, views, and stored routines can be
granted for specific objects within a database, for all objects of a given type within a
database (for example, all tables in a database), or globally for all objects of a given type
in all databases.
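The three levels can be seen in GRANT statements such as the following sketch; the account and object names are hypothetical:

```sql
GRANT PROCESS ON *.* TO 'ops'@'localhost';             -- administrative (global only)
GRANT SELECT, INSERT ON shop.* TO 'app'@'localhost';   -- database level
GRANT UPDATE ON shop.orders TO 'app'@'localhost';      -- table (object) level
```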

Information about account privileges is stored in the user, db, host, tables_priv, columns_priv,
and procs_priv tables in the mysql database. The MySQL server reads the contents of these
tables into memory when it starts and reloads them. Access-control decisions are based on the
in-memory copies of the grant tables.



Some releases of MySQL introduce changes to the structure of the grant tables to add new access
privileges or features. Whenever you update to a new version of MySQL, you should update
your grant tables to make sure that they have the current structure so that you can take advantage
of any new capabilities.

The following list provides a general description of each privilege available in MySQL.
Particular SQL statements might have more specific privilege requirements than indicated here.
If so, the description for the statement in question provides the details.

 The ALL or ALL PRIVILEGES privilege specifier is shorthand. It stands for “all
privileges available at a given privilege level” (except GRANT OPTION). For example,
granting ALL at the global or table level grants all global privileges or all table-level
privileges.
 The ALTER privilege enables use of ALTER TABLE to change the structure of tables.
ALTER TABLE also requires the CREATE and INSERT privileges. Renaming a table
requires ALTER and DROP on the old table, ALTER, CREATE, and INSERT on the
new table.
 The ALTER ROUTINE privilege is needed to alter or drop stored routines (procedures
and functions).
 The CREATE privilege enables creation of new databases and tables.
 The CREATE ROUTINE privilege is needed to create stored routines (procedures and
functions).
 The CREATE TEMPORARY TABLES privilege enables the creation of temporary
tables using the CREATE TEMPORARY TABLE statement.

However, other operations on a temporary table, such as INSERT, UPDATE, or


SELECT, require additional privileges for those operations for the database containing
the temporary table, or for the nontemporary table of the same name.

To keep privileges for temporary and nontemporary tables separate, a common
workaround for this situation is to create a database dedicated to the use of temporary
tables. Then for that database, a user can be granted the CREATE TEMPORARY
TABLES privilege, along with any other privileges required for temporary table
operations done by that user.
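A sketch of that workaround, with a hypothetical database and account name:

```sql
CREATE DATABASE scratch;
GRANT CREATE TEMPORARY TABLES, SELECT, INSERT, UPDATE, DELETE, DROP
    ON scratch.* TO 'appuser'@'localhost';
```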

 The CREATE USER privilege enables use of CREATE USER, DROP USER, RENAME
USER, and REVOKE ALL PRIVILEGES.
 The CREATE VIEW privilege enables use of CREATE VIEW.
 The DELETE privilege enables rows to be deleted from tables in a database.
 The DROP privilege enables you to drop (remove) existing databases, tables, and views.
Beginning with MySQL 5.1.10, the DROP privilege is also required to use the statement
ALTER TABLE ... DROP PARTITION on a partitioned table. Beginning with MySQL
5.1.16, the DROP privilege is required for TRUNCATE TABLE (before that,
TRUNCATE TABLE requires the DELETE privilege). If you grant the DROP privilege
for the mysql database to a user, that user can drop the database in which the MySQL
access privileges are stored.



 The EVENT privilege is required to create, alter, drop, or see events for the Event
Scheduler. This privilege was added in MySQL 5.1.6.
 The EXECUTE privilege is required to execute stored routines (procedures and
functions).
 The FILE privilege gives you permission to read and write files on the server host using
the LOAD DATA INFILE and SELECT ... INTO OUTFILE statements and the
LOAD_FILE() function. A user who has the FILE privilege can read any file on the
server host that is either world-readable or readable by the MySQL server. (This implies
the user can read any file in any database directory, because the server can access any of
those files.) The FILE privilege also enables the user to create new files in any directory
where the MySQL server has write access. This includes the server's data directory
containing the files that implement the privilege tables. As a security measure, the server
will not overwrite existing files.

To limit the location in which files can be read and written, set the secure_file_priv
system variable to a specific directory.

 The GRANT OPTION privilege enables you to give to other users or remove from other
users those privileges that you yourself possess.
 The INDEX privilege enables you to create or drop (remove) indexes. INDEX applies to
existing tables. If you have the CREATE privilege for a table, you can include index
definitions in the CREATE TABLE statement.
 The INSERT privilege enables rows to be inserted into tables in a database. INSERT is
also required for the ANALYZE TABLE, OPTIMIZE TABLE, and REPAIR TABLE
table-maintenance statements.
 The LOCK TABLES privilege enables the use of explicit LOCK TABLES statements to
lock tables for which you have the SELECT privilege. This includes the use of write
locks, which prevents other sessions from reading the locked table.
 The PROCESS privilege pertains to display of information about the threads executing
within the server (that is, information about the statements being executed by sessions).
The privilege enables use of SHOW PROCESSLIST or mysqladmin processlist to see
threads belonging to other accounts; you can always see your own threads.
 The REFERENCES privilege currently is unused.
 The RELOAD privilege enables use of the FLUSH statement. It also enables
mysqladmin commands that are equivalent to FLUSH operations: flush-hosts, flush-
logs, flush-privileges, flush-status, flush-tables, flush-threads, refresh, and reload.

The reload command tells the server to reload the grant tables into memory. flush-
privileges is a synonym for reload. The refresh command closes and reopens the log files
and flushes all tables. The other flush-xxx commands perform functions similar to
refresh, but are more specific and may be preferable in some instances. For example, if
you want to flush just the log files, flush-logs is a better choice than refresh.

 The REPLICATION CLIENT privilege enables the use of SHOW MASTER STATUS
and SHOW SLAVE STATUS. In MySQL 5.1.64 and later, it also enables the use of the
SHOW BINARY LOGS statement.



 The REPLICATION SLAVE privilege should be granted to accounts that are used by
slave servers to connect to the current server as their master. Without this privilege, the
slave cannot request updates that have been made to databases on the master server.
 The SELECT privilege enables you to select rows from tables in a database. SELECT
statements require the SELECT privilege only if they actually retrieve rows from a table.
Some SELECT statements do not access tables and can be executed without permission
for any database. For example, you can use SELECT as a simple calculator to evaluate
expressions that make no reference to tables:
 SELECT 1+1;
 SELECT PI()*2;

The SELECT privilege is also needed for other statements that read column values. For
example, SELECT is needed for columns referenced on the right hand side of
col_name=expr assignment in UPDATE statements or for columns named in the
WHERE clause of DELETE or UPDATE statements.

 The SHOW DATABASES privilege enables the account to see database names by
issuing the SHOW DATABASES statement. Accounts that do not have this privilege see
only databases for which they have some privileges, and cannot use the statement at all if
the server was started with the --skip-show-database option. Note that any global
privilege is a privilege for the database.
 The SHOW VIEW privilege enables use of SHOW CREATE VIEW.
 The SHUTDOWN privilege enables use of the mysqladmin shutdown command. There
is no corresponding SQL statement.
 The SUPER privilege enables an account to use CHANGE MASTER TO, KILL or
mysqladmin kill to kill threads belonging to other accounts (you can always kill your
own threads), PURGE BINARY LOGS, configuration changes using SET GLOBAL to
modify global system variables, the mysqladmin debug command, enabling or disabling
logging, performing updates even if the read_only system variable is enabled, starting
and stopping replication on slave servers, specification of any account in the DEFINER
attribute of stored programs and views, and enables you to connect (once) even if the
connection limit controlled by the max_connections system variable is reached.

To create or alter stored functions if binary logging is enabled, you may also need the
SUPER privilege.

 The TRIGGER privilege enables trigger operations. You must have this privilege for a
table to create, drop, or execute triggers for that table. This privilege was added in
MySQL 5.1.6. (Prior to MySQL 5.1.6, trigger operations required the SUPER privilege.)
 The UPDATE privilege enables rows to be updated in tables in a database.
 The USAGE privilege specifier stands for “no privileges.” It is used at the global level
with GRANT to modify account attributes such as resource limits or SSL characteristics
without affecting existing account privileges.

It is a good idea to grant to an account only those privileges that it needs. You should exercise
particular caution in granting the FILE and administrative privileges:



 The FILE privilege can be abused to read into a database table any files that the MySQL
server can read on the server host. This includes all world-readable files and files in the
server's data directory. The table can then be accessed using SELECT to transfer its
contents to the client host.
 The GRANT OPTION privilege enables users to give their privileges to other users. Two
users that have different privileges and with the GRANT OPTION privilege are able to
combine privileges.
 The ALTER privilege may be used to subvert the privilege system by renaming tables.
 The SHUTDOWN privilege can be abused to deny service to other users entirely by
terminating the server.
 The PROCESS privilege can be used to view the plain text of currently executing
statements, including statements that set or change passwords.
 The SUPER privilege can be used to terminate other sessions or change how the server
operates.
 Privileges granted for the mysql database itself can be used to change passwords and
other access privilege information. Passwords are stored encrypted, so a malicious user
cannot simply read them to know the plain text password. However, a user with write
access to the user table Password column can change an account's password, and then
connect to the MySQL server using that account.

Access Control, Stage 1: Connection Verification :

When you attempt to connect to a MySQL server, the server accepts or rejects the connection
based on your identity and whether you can verify your identity by supplying the correct
password. If you cannot, the server denies access to you completely. Otherwise, the server
accepts the connection, and then enters Stage 2 and waits for requests.

Your identity is based on two pieces of information:

 The client host from which you connect


 Your MySQL user name

Identity checking is performed using the three user table scope columns (Host, User, and
Password). The server accepts the connection only if the Host and User columns in some user
table row match the client host name and user name and the client supplies the password
specified in that row.

If the User column value is nonblank, the user name in an incoming connection must match
exactly. If the User value is blank, it matches any user name. If the user table row that matches
an incoming connection has a blank user name, the user is considered to be an anonymous user
with no name, not a user with the name that the client actually specified. This means that a blank
user name is used for all further access checking for the duration of the connection (that is,
during Stage 2).

The Password column can be blank. This is not a wildcard and does not mean that any password
matches. It means that the user must connect without specifying a password.



Nonblank Password values in the user table represent encrypted passwords. MySQL does not
store passwords in plaintext form for anyone to see. Rather, the password supplied by a user who
is attempting to connect is encrypted (using the PASSWORD() function). The encrypted
password then is used during the connection process when checking whether the password is
correct. This is done without the encrypted password ever traveling over the connection.

From MySQL's point of view, the encrypted password is the real password, so you should never
give anyone access to it. In particular, do not give nonadministrative users read access to tables
in the mysql database.
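For illustration, the hash format that PASSWORD() produces in MySQL 4.1 and later is a "*" followed by the uppercase hexadecimal SHA-1 of the SHA-1 digest of the password. The following sketch reproduces that format with standard command-line tools; the password here is a throwaway sample, not a real credential:

```shell
# Reproduce the MySQL 4.1+ PASSWORD() hash format: '*' + HEX(SHA1(SHA1(pw))).
pw='password'                                   # sample value only
hash=$(printf '%s' "$pw" \
    | openssl dgst -sha1 -binary \
    | openssl dgst -sha1 -binary \
    | od -An -v -tx1 | tr -d ' \n' | tr 'a-f' 'A-F')
echo "*${hash}"                                 # 41 characters: '*' + 40 hex digits
```

Because the stored value is a one-way digest of a digest, the server can verify a login without the plaintext password ever crossing the connection.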

When multiple matches are possible, the server must determine which of them to use. It resolves
this issue as follows:

 Whenever the server reads the user table into memory, it sorts the rows.
 When a client attempts to connect, the server looks through the rows in sorted order.
 The server uses the first row that matches the client host name and user name.

If you are able to connect to the server, but your privileges are not what you expect, you
probably are being authenticated as some other account. To find out what account the server used
to authenticate you, use the CURRENT_USER() function. It returns a value in
user_name@host_name format that indicates the User and Host values from the matching user
table row. Suppose that jeffrey connects and issues the following query:

Mysql> SELECT CURRENT_USER();


+----------------+
| CURRENT_USER() |
+----------------+
| @localhost |
+----------------+

The result shown here indicates that the matching user table row had a blank User column value.
In other words, the server is treating jeffrey as an anonymous user.

Another way to diagnose authentication problems is to print out the user table and sort it by hand
to see where the first match is being made.
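To eyeball the candidate rows, an administrator can list them directly. The ORDER BY below is only a rough stand-in for the server's specificity-based sort, which places more specific Host values first:

```sql
SELECT User, Host, Password <> '' AS has_password
FROM mysql.user
ORDER BY Host, User;
```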



8. USER ADMINISTRATION

MySQL stores accounts in the user table of the mysql database. An account is defined in
terms of a user name and the client host or hosts from which the user can connect to the server.
The account may also have a password.

There are several distinctions between the way user names and passwords are used by MySQL
and the way they are used by your operating system:

 User names, as used by MySQL for authentication purposes, have nothing to do with user
names (login names) as used by Windows or Unix. On Unix, most MySQL clients by
default try to log in using the current Unix user name as the MySQL user name, but that
is for convenience only. The default can be overridden easily, because client programs
permit any user name to be specified with a -u or --user option. Because this means that
anyone can attempt to connect to the server using any user name, you cannot make a
database secure in any way unless all MySQL accounts have passwords. Anyone who
specifies a user name for an account that has no password is able to connect successfully
to the server.
 MySQL user names can be up to 16 characters long. Operating system user names,
because they are completely unrelated to MySQL user names, may be of a different
maximum length. For example, Unix user names typically are limited to eight characters.

Warning

The limit on MySQL user name length is hard-coded in the MySQL servers and clients,
and trying to circumvent it by modifying the definitions of the tables in the mysql
database does not work.

You should never alter any of the tables in the mysql database in any manner whatsoever
except by means of the documented upgrade procedure. Attempting to redefine MySQL's
system tables in any other fashion results in undefined (and unsupported!) behavior.

 The server uses MySQL passwords stored in the user table to authenticate client
connections using MySQL built-in authentication. These passwords have nothing to do
with passwords for logging in to your operating system. There is no necessary connection
between the “external” password you use to log in to a Windows or Unix machine and
the password you use to access the MySQL server on that machine.
 MySQL encrypts passwords stored in the user table using its own algorithm. This
encryption is the same as that implemented by the PASSWORD() SQL function but
differs from that used during the Unix login process. Unix password encryption is the
same as that implemented by the ENCRYPT() SQL function.



From version 4.1 on, MySQL employs a stronger authentication method that has better
password protection during the connection process than in earlier versions. It is secure
even if TCP/IP packets are sniffed or the mysql database is captured. (In earlier versions,
even though passwords are stored in encrypted form in the user table, knowledge of the
encrypted password value could be used to connect to the MySQL server.)

 It is possible to connect to the server regardless of character set settings if the user name
and password contain only ASCII characters. To connect when the user name or
password contain non-ASCII characters, the client should call the mysql_options() C API
function with the MYSQL_SET_CHARSET_NAME option and appropriate character set
name as arguments. This causes authentication to take place using the specified character
set. Otherwise, authentication will fail unless the server default character set is the same
as the encoding in the authentication credentials.
 Standard MySQL client programs support a --default-character-set option that causes
mysql_options() to be called as just described. For programs that use a connector that is
not based on the C API, the connector may provide an equivalent to mysql_options() that
can be used instead. Check the connector documentation. The preceding notes do not
apply for ucs2, which is not permitted as a client character set.

Adding User Accounts :

After connecting to the server as root, you can add new accounts. The following statements use
GRANT to set up four new accounts:

mysql> CREATE USER 'monty'@'localhost' IDENTIFIED BY 'some_pass';


mysql> GRANT ALL PRIVILEGES ON *.* TO 'monty'@'localhost'
-> WITH GRANT OPTION;
mysql> CREATE USER 'monty'@'%' IDENTIFIED BY 'some_pass';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'monty'@'%'
-> WITH GRANT OPTION;
mysql> CREATE USER 'admin'@'localhost';
mysql> GRANT RELOAD,PROCESS ON *.* TO 'admin'@'localhost';
mysql> CREATE USER 'dummy'@'localhost';

The accounts created by these statements have the following properties:

 Two of the accounts have a user name of monty and a password of some_pass. Both
accounts are superuser accounts with full privileges to do anything. The
'monty'@'localhost' account can be used only when connecting from the local host. The
'monty'@'%' account uses the '%' wildcard for the host part, so it can be used to connect
from any host.

It is necessary to have both accounts for monty to be able to connect from anywhere as
monty. Without the localhost account, the anonymous-user account for localhost that is
created by mysql_install_db would take precedence when monty connects from the local
host. As a result, monty would be treated as an anonymous user. The reason for this is



that the anonymous-user account has a more specific Host column value than the
'monty'@'%' account and thus comes earlier in the user table sort order. (user table
sorting is discussed in Section 6.2.4, “Access Control, Stage 1: Connection
Verification”.)

 The 'admin'@'localhost' account has no password. This account can be used only by
admin to connect from the local host. It is granted the RELOAD and PROCESS
administrative privileges. These privileges enable the admin user to execute the
mysqladmin reload, mysqladmin refresh, and mysqladmin flush-xxx commands, as
well as mysqladmin processlist. No privileges are granted for accessing any databases.
You could add such privileges later by issuing other GRANT statements.
 The 'dummy'@'localhost' account has no password. This account can be used only to
connect from the local host. No privileges are granted. It is assumed that you will grant
specific privileges to the account later.

The statements that create accounts with no password will fail if the
NO_AUTO_CREATE_USER SQL mode is enabled. To deal with this, use an IDENTIFIED BY
clause that specifies a nonempty password.

To check the privileges for an account, use SHOW GRANTS:

mysql> SHOW GRANTS FOR 'admin'@'localhost';


+-----------------------------------------------------+
| Grants for admin@localhost |
+-----------------------------------------------------+
| GRANT RELOAD, PROCESS ON *.* TO 'admin'@'localhost' |
+-----------------------------------------------------+

As an alternative to CREATE USER and GRANT, you can create the same accounts directly by
issuing INSERT statements and then telling the server to reload the grant tables using FLUSH
PRIVILEGES:

shell> mysql --user=root mysql


mysql> INSERT INTO user
-> VALUES('localhost','monty',PASSWORD('some_pass'),
-> 'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y');
mysql> INSERT INTO user
-> VALUES('%','monty',PASSWORD('some_pass'),
-> 'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y',
-> 'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y',
-> '','','','',0,0,0,0);
mysql> INSERT INTO user SET Host='localhost',User='admin',
-> Reload_priv='Y', Process_priv='Y';
mysql> INSERT INTO user (Host,User,Password)
-> VALUES('localhost','dummy','');
mysql> FLUSH PRIVILEGES;



When you create accounts with INSERT, it is necessary to use FLUSH PRIVILEGES to tell the
server to reload the grant tables. Otherwise, the changes go unnoticed until you restart the server.
With CREATE USER, FLUSH PRIVILEGES is unnecessary.

The reason for using the PASSWORD() function with INSERT is to encrypt the password. The
CREATE USER statement encrypts the password for you, so PASSWORD() is unnecessary.

The 'Y' values enable privileges for the accounts. Depending on your MySQL version, you may
have to use a different number of 'Y' values in the first two INSERT statements. The INSERT
statement for the admin account employs the more readable extended INSERT syntax using
SET.

In the INSERT statement for the dummy account, only the Host, User, and Password columns in
the user table row are assigned values. None of the privilege columns are set explicitly, so
MySQL assigns them all the default value of 'N'. This is equivalent to what CREATE USER
does.

If strict SQL mode is enabled, all columns that have no default value must have a value
specified. In this case, INSERT statements must explicitly specify values for the ssl_cipher,
x509_issuer, and x509_subject columns.

To set up a superuser account, it is necessary only to insert a user table row with all privilege
columns set to 'Y'. The user table privileges are global, so no entries in any of the other grant
tables are needed.

The next examples create three accounts and give them access to specific databases. Each of
them has a user name of custom and password of obscure.

To create the accounts with CREATE USER and GRANT, use the following statements:

shell> mysql --user=root mysql


mysql> CREATE USER 'custom'@'localhost' IDENTIFIED BY 'obscure';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
-> ON bankaccount.*
-> TO 'custom'@'localhost';
mysql> CREATE USER 'custom'@'host47.example.com' IDENTIFIED BY 'obscure';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
-> ON expenses.*
-> TO 'custom'@'host47.example.com';
mysql> CREATE USER 'custom'@'server.domain' IDENTIFIED BY 'obscure';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
-> ON customer.*
-> TO 'custom'@'server.domain';



The three accounts can be used as follows:

 The first account can access the bankaccount database, but only from the local host.
 The second account can access the expenses database, but only from the host
host47.example.com.
 The third account can access the customer database, but only from the host
server.domain.

To set up the custom accounts without GRANT, use INSERT statements as follows to modify
the grant tables directly:

shell> mysql --user=root mysql


mysql> INSERT INTO user (Host,User,Password)
-> VALUES('localhost','custom',PASSWORD('obscure'));
mysql> INSERT INTO user (Host,User,Password)
-> VALUES('host47.example.com','custom',PASSWORD('obscure'));
mysql> INSERT INTO user (Host,User,Password)
-> VALUES('server.domain','custom',PASSWORD('obscure'));
mysql> INSERT INTO db
-> (Host,Db,User,Select_priv,Insert_priv,
-> Update_priv,Delete_priv,Create_priv,Drop_priv)
-> VALUES('localhost','bankaccount','custom',
-> 'Y','Y','Y','Y','Y','Y');
mysql> INSERT INTO db
-> (Host,Db,User,Select_priv,Insert_priv,
-> Update_priv,Delete_priv,Create_priv,Drop_priv)
-> VALUES('host47.example.com','expenses','custom',
-> 'Y','Y','Y','Y','Y','Y');
mysql> INSERT INTO db
-> (Host,Db,User,Select_priv,Insert_priv,
-> Update_priv,Delete_priv,Create_priv,Drop_priv)
-> VALUES('server.domain','customer','custom',
-> 'Y','Y','Y','Y','Y','Y');
mysql> FLUSH PRIVILEGES;

The first three INSERT statements add user table entries that permit the user custom to connect
from the various hosts with the given password, but grant no global privileges (all privileges are
set to the default value of 'N'). The next three INSERT statements add db table entries that grant
privileges to custom for the bankaccount, expenses, and customer databases, but only when
accessed from the proper hosts. As usual when you modify the grant tables directly, you must tell
the server to reload them with FLUSH PRIVILEGES so that the privilege changes take effect.

To create a user who has access from all machines in a given domain (for example,
mydomain.com), you can use the “%” wildcard character in the host part of the account name:



mysql> CREATE USER 'myname'@'%.mydomain.com' IDENTIFIED BY 'mypass';

To do the same thing by modifying the grant tables directly, do this:

mysql> INSERT INTO user (Host,User,Password,...)


-> VALUES('%.mydomain.com','myname',PASSWORD('mypass'),...);
mysql> FLUSH PRIVILEGES;

Setting Account Resource Limits :

One means of limiting use of MySQL server resources is to set the global
max_user_connections system variable to a nonzero value. This limits the number of
simultaneous connections that can be made by any given account, but places no limits on what a
client can do once connected. In addition, setting max_user_connections does not enable
management of individual accounts.
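
For example, the following sketch raises the global limit at runtime and verifies it (max_user_connections is the real variable name; the value 100 is only an illustration):

mysql> SET GLOBAL max_user_connections = 100;
mysql> SHOW VARIABLES LIKE 'max_user_connections';

To make the setting persist across server restarts, it can also be placed in the [mysqld] section of an option file as max_user_connections=100.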

You can limit use of the following server resources for individual accounts:

 The number of queries that an account can issue per hour


 The number of updates that an account can issue per hour
 The number of times an account can connect to the server per hour
 The number of simultaneous connections to the server by an account

To set resource limits for an account, use the GRANT statement. Provide a WITH clause that
names each resource to be limited. The default value for each limit is zero (no limit). For
example, to create a new account that can access the customer database, but only in a limited
fashion, issue these statements:

mysql> CREATE USER 'francis'@'localhost' IDENTIFIED BY 'frank';
mysql> GRANT ALL ON customer.* TO 'francis'@'localhost'
    -> WITH MAX_QUERIES_PER_HOUR 20
    -> MAX_UPDATES_PER_HOUR 10
    -> MAX_CONNECTIONS_PER_HOUR 5
    -> MAX_USER_CONNECTIONS 2;

To modify existing limits for an account, use a GRANT USAGE statement at the global level
(ON *.*). The following statement changes the query limit for francis to 100:

mysql> GRANT USAGE ON *.* TO 'francis'@'localhost'
    -> WITH MAX_QUERIES_PER_HOUR 100;

To remove a limit, set its value to zero. For example, to remove the limit on how many times per
hour francis can connect, use this statement:

mysql> GRANT USAGE ON *.* TO 'francis'@'localhost'
    -> WITH MAX_CONNECTIONS_PER_HOUR 0;

Assigning Account Passwords :

To assign a password when you create a new account with CREATE USER, include an
IDENTIFIED BY clause:

mysql> CREATE USER 'jeffrey'@'localhost'
    -> IDENTIFIED BY 'mypass';

To assign or change a password for an existing account, one way is to issue a SET PASSWORD
statement:

mysql> SET PASSWORD FOR
    -> 'jeffrey'@'localhost' = PASSWORD('mypass');

MySQL stores passwords in the user table in the mysql database. Only users such as root that
have update access to the mysql database can change the password for other users. If you are not
connected as an anonymous user, you can change your own password by omitting the FOR
clause:

mysql> SET PASSWORD = PASSWORD('mypass');

To establish a password for a new account by modifying the grant tables directly, provide a value for the Password column:

mysql> INSERT INTO mysql.user (Host,User,Password)
    -> VALUES('localhost','jeffrey',PASSWORD('mypass'));
mysql> FLUSH PRIVILEGES;

To change the password for an existing account, use UPDATE to set the Password column value:

mysql> UPDATE mysql.user SET Password = PASSWORD('bagel')
    -> WHERE Host = 'localhost' AND User = 'francis';
mysql> FLUSH PRIVILEGES;

9. BACKUP AND RECOVERY

It is important to back up your databases so that you can recover your data and be up and running
again in case problems occur, such as system crashes, hardware failures, or users deleting data by
mistake. Backups are also essential as a safeguard before upgrading a MySQL installation, and
they can be used to transfer a MySQL installation to another system or to set up replication slave
servers.

MySQL offers a variety of backup strategies from which you can choose the methods that best suit the requirements for your installation. The following are several backup and recovery topics with which you should be familiar:

Backup and Recovery Types

Physical (Raw) Versus Logical Backups

Physical backups consist of raw copies of the directories and files that store database contents.
This type of backup is suitable for large, important databases that need to be recovered quickly
when problems occur.

Logical backups save information represented as logical database structure (CREATE DATABASE,
CREATE TABLE statements) and content (INSERT statements or delimited-text files). This type of
backup is suitable for smaller amounts of data where you might edit the data values or table
structure, or recreate the data on a different machine architecture.

Logical backup methods have these characteristics:

 The backup is done by querying the MySQL server to obtain database structure and
content information.
 Backup is slower than physical methods because the server must access database
information and convert it to logical format. If the output is written on the client side, the
server must also send it to the backup program.
 Output is larger than for physical backup, particularly when saved in text format.
 Backup and restore granularity is available at the server level (all databases), database
level (all tables in a particular database), or table level. This is true regardless of storage
engine.
 The backup does not include log or configuration files, or other database-related files that
are not part of databases.
 Backups stored in logical format are machine independent and highly portable.
 Logical backups are performed with the MySQL server running. The server is not taken
offline.

 Logical backup tools include the mysqldump program and the SELECT ... INTO OUTFILE
statement. These work for any storage engine, even MEMORY.
 To restore logical backups, SQL-format dump files can be processed using the mysql
client. To load delimited-text files, use the LOAD DATA INFILE statement or the
mysqlimport client.

Physical backup methods have these characteristics:

 The backup consists of exact copies of database directories and files. Typically this is a
copy of all or part of the MySQL data directory. Data from MEMORY tables cannot be
backed up this way because their contents are not stored on disk.
 Physical backup methods are faster than logical because they involve only file copying
without conversion.
 Output is more compact than for logical backup.
 Backup and restore granularity ranges from the level of the entire data directory down to
the level of individual files. This may or may not provide for table-level granularity,
depending on storage engine. (Each MyISAM table corresponds uniquely to a set of files,
but an InnoDB table shares file storage with other InnoDB tables.)
 In addition to databases, the backup can include any related files such as log or
configuration files.
 Backups are portable only to other machines that have identical or similar hardware
characteristics.
 Backups can be performed while the MySQL server is not running. If the server is
running, it is necessary to perform appropriate locking so that the server does not change
database contents during the backup.
 Physical backup tools include file system-level commands (such as cp, scp, tar, rsync),
mysqlhotcopy for MyISAM tables, ibbackup for InnoDB tables, or START BACKUP for
NDB tables.
 For restore, files copied at the file system level or with mysqlhotcopy can be copied back
to their original locations with file system commands; ibbackup restores InnoDB tables,
and ndb_restore restores NDB tables.

Online Versus Offline Backups


Online backups take place while the MySQL server is running so that the database information
can be obtained from the server. Offline backups take place while the server is stopped. This
distinction can also be described as “hot” versus “cold” backups; a “warm” backup is one where
the server remains running but locked against modifying data while you access database files
externally.

Online backup methods have these characteristics:

 The backup is less intrusive to other clients, which can connect to the MySQL server
during the backup and may be able to access data depending on what operations they
need to perform.
 Care must be taken to impose appropriate locking so that data modifications do not take
place that would compromise backup integrity. The MySQL Enterprise Backup product
does such locking automatically.
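
Outside of MySQL Enterprise Backup, a common way to impose such locking manually is FLUSH TABLES WITH READ LOCK, which flushes open tables and blocks writes until the lock is released. A sketch of a warm backup done this way (the data directory path is an assumption; check the datadir system variable for the actual location):

mysql> FLUSH TABLES WITH READ LOCK;

shell> cp -r /var/lib/mysql/db1 /backups/db1    (from another session, while the lock is held)

mysql> UNLOCK TABLES;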

Offline backup methods have these characteristics:

 Clients can be affected adversely because the server is unavailable during backup. For
that reason, such backups are often taken from a replication slave server that can be taken
offline without harming availability.
 The backup procedure is simpler because there is no possibility of interference from
client activity.

A similar distinction between online and offline applies for recovery operations, and similar
characteristics apply. However, it is more likely that clients will be affected for online recovery
than for online backup because recovery requires stronger locking. During backup, clients might
be able to read data while it is being backed up. Recovery modifies data and does not just read it,
so clients must be prevented from accessing data while it is being restored.

Local Versus Remote Backups

A local backup is performed on the same host where the MySQL server runs, whereas a remote
backup is done from a different host. For some types of backups, the backup can be initiated
from a remote host even if the output is written locally on the server host.

 mysqldump can connect to local or remote servers. For SQL output (CREATE and
INSERT statements), local or remote dumps can be done and generate output on the
client. For delimited-text output (with the --tab option), data files are created on the
server host.
 mysqlhotcopy performs only local backups: It connects to the server to lock it against
data modifications and then copies local table files.
 SELECT ... INTO OUTFILE can be initiated from a local or remote client host, but the
output file is created on the server host.
 Physical backup methods typically are initiated locally on the MySQL server host so that
the server can be taken offline, although the destination for copied files might be remote.

Snapshot Backups

Some file system implementations enable “snapshots” to be taken. These provide logical copies
of the file system at a given point in time, without requiring a physical copy of the entire file
system. (For example, the implementation may use copy-on-write techniques so that only parts
of the file system modified after the snapshot time need be copied.) MySQL itself does not
provide the capability for taking file system snapshots. It is available through third-party
solutions such as Veritas, LVM, or ZFS.

Full Versus Incremental Backups


A full backup includes all data managed by a MySQL server at a given point in time. An
incremental backup consists of the changes made to the data during a given time span (from one
point in time to another). MySQL has different ways to perform full backups, such as those
described earlier in this section. Incremental backups are made possible by enabling the server's
binary log, which the server uses to record data changes.

Full Versus Point-in-Time (Incremental) Recovery

A full recovery restores all data from a full backup. This restores the server instance to the state
that it had when the backup was made. If that state is not sufficiently current, a full recovery can
be followed by recovery of incremental backups made since the full backup, to bring the server
to a more up-to-date state.

Incremental recovery is recovery of changes made during a given time span. This is also called
point-in-time recovery because it makes a server's state current up to a given time. Point-in-time
recovery is based on the binary log and typically follows a full recovery from the backup files
that restores the server to its state when the backup was made. Then the data changes written in
the binary log files are applied as incremental recovery to redo data modifications and bring the
server up to the desired point in time.
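
mysqlbinlog supports --start-datetime and --stop-datetime options for replaying only part of a binary log, so a point-in-time recovery that stops just before a mistaken statement might look like the following sketch (the file names and timestamp are illustrative):

shell> mysqlbinlog --stop-datetime="2005-04-20 09:59:59" \
    binlog.000007 binlog.000008 | mysql -u root -p

This replays every change recorded in the listed logs up to, but not including, the given time.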

Table Maintenance

Data integrity can be compromised if tables become corrupt. For InnoDB tables, this is not a
typical issue. For MyISAM tables, MySQL provides programs to check them and repair them if
problems are found.

Backup Scheduling, Compression, and Encryption

Backup scheduling is valuable for automating backup procedures. Compression of backup output
reduces space requirements, and encryption of the output provides better security against
unauthorized access of backed-up data. MySQL itself does not provide these capabilities. The
MySQL Enterprise Backup product can compress InnoDB backups, and compression or
encryption of backup output can be achieved using file system utilities. Other third-party
solutions may be available.

Database Backup Methods :

Making a Hot Backup with MySQL Enterprise Backup

Customers of MySQL Enterprise Edition can use the MySQL Enterprise Backup product to do
physical backups of entire instances or selected databases, tables, or both. This product includes
features for incremental and compressed backups. Backing up the physical database files makes
restore much faster than logical techniques such as the mysqldump command. InnoDB tables are
copied using a hot backup mechanism. (Ideally, the InnoDB tables should represent a substantial
majority of the data.) Tables from other storage engines are copied using a warm backup
mechanism.
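
As an illustration, a minimal invocation of the product's mysqlbackup client might look like the following; treat it as a sketch, since option names and defaults vary by product version, and the /backups/full destination is an assumed path:

shell> mysqlbackup --user=root --password --backup-dir=/backups/full backup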

Making Backups with mysqldump or mysqlhotcopy

The mysqldump program and the mysqlhotcopy script can make backups. mysqldump is more
general because it can back up all kinds of tables. mysqlhotcopy works only with some storage
engines.

For InnoDB tables, it is possible to perform an online backup that takes no locks on tables using
the --single-transaction option to mysqldump.

Making Backups by Copying Table Files

For storage engines that represent each table using its own files, tables can be backed up by
copying those files. For example, MyISAM tables are stored as files, so it is easy to do a backup
by copying files (*.frm, *.MYD, and *.MYI files). To get a consistent backup, stop the server or
lock and flush the relevant tables:

LOCK TABLES tbl_list READ;
FLUSH TABLES tbl_list;

You need only a read lock; this enables other clients to continue to query the tables while you are
making a copy of the files in the database directory. The FLUSH TABLES statement is needed
to ensure that all active index pages are written to disk before you start the backup.
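
Putting the statements together, a sketch for backing up two MyISAM tables t1 and t2 from database db1 (the table names and data directory path are illustrative):

mysql> LOCK TABLES db1.t1 READ, db1.t2 READ;
mysql> FLUSH TABLES db1.t1, db1.t2;

shell> cp /var/lib/mysql/db1/t1.* /var/lib/mysql/db1/t2.* /backups/    (from another session, while the locks are held)

mysql> UNLOCK TABLES;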

You can also create a binary backup simply by copying all table files, as long as the server isn't
updating anything. The mysqlhotcopy script uses this method. (But note that table file copying
methods do not work if your database contains InnoDB tables. mysqlhotcopy does not work for
InnoDB tables because InnoDB does not necessarily store table contents in database directories.
Also, even if the server is not actively updating data, InnoDB may still have modified data
cached in memory and not flushed to disk.)

Making Delimited-Text File Backups

To create a text file containing a table's data, you can use SELECT * INTO OUTFILE
'file_name' FROM tbl_name. The file is created on the MySQL server host, not the client host.
For this statement, the output file cannot already exist, because permitting files to be overwritten
would constitute a security risk. This method works for any kind of data file, but saves only table
data, not the table structure.

Another way to create text data files (along with files containing CREATE TABLE statements
for the backed up tables) is to use mysqldump with the --tab option. To reload a delimited-text
data file, use LOAD DATA INFILE or mysqlimport.

Making Incremental Backups by Enabling the Binary Log

MySQL supports incremental backups: you must start the server with the --log-bin option to
enable binary logging. The binary log files provide you with the information you need to replicate
changes to the database that are made subsequent to the point at which you performed a backup.
At the moment you want to make an incremental backup (containing all changes that happened
since the last full or incremental backup), you should rotate the binary log by using FLUSH
LOGS. This done, you need to copy to the backup location all binary logs which range from the
one of the moment of the last full or incremental backup to the last but one. These binary logs are
the incremental backup; at restore time, you apply them to redo the data changes they contain.
The next time you do a full backup, you should also rotate the binary log using FLUSH LOGS,
mysqldump --flush-logs, or mysqlhotcopy --flushlog.
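
As a sketch, taking an incremental backup this way might look like the following; the data directory and log base name are assumptions chosen to match the recovery example later in this chapter:

shell> mysqladmin flush-logs
shell> ls /var/lib/mysql/gbichot2-bin.*
shell> cp /var/lib/mysql/gbichot2-bin.000007 /backups/incr/

mysqladmin flush-logs closes the current binary log and opens a new one, so every log file except the newest is complete and safe to copy.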

Making Backups Using Replication Slaves

If you have performance problems with your master server while making backups, one strategy
that can help is to set up replication and perform backups on the slave rather than on the master.

If you are backing up a slave replication server, you should back up its master.info and relay-
log.info files when you back up the slave's databases, regardless of the backup method you
choose. These information files are always needed to resume replication after you restore the
slave's data. If your slave is replicating LOAD DATA INFILE statements, you should also back
up any SQL_LOAD-* files that exist in the directory that the slave uses for this purpose. The
slave needs these files to resume replication of any interrupted LOAD DATA INFILE
operations. The location of this directory is the value of the --slave-load-tmpdir option. If the
server was not started with that option, the directory location is the value of the tmpdir system
variable.

Recovering Corrupt Tables

If you have to restore MyISAM tables that have become corrupt, try to recover them using
REPAIR TABLE or myisamchk -r first. That should work in 99.9% of all cases.

Using Backups for Recovery

Now, suppose that we have a catastrophic crash on Wednesday at 8 a.m. that requires recovery
from backups. To recover, first we restore the last full backup we have (the one from Sunday 1
p.m.). The full backup file is just a set of SQL statements, so restoring it is very easy:

shell> mysql < backup_sunday_1_PM.sql

At this point, the data is restored to its state as of Sunday 1 p.m. To restore the changes made
since then, we must use the incremental backups; that is, the gbichot2-bin.000007 and gbichot2-
bin.000008 binary log files. Fetch the files if necessary from where they were backed up, and
then process their contents like this:

shell> mysqlbinlog gbichot2-bin.000007 gbichot2-bin.000008 | mysql

We now have recovered the data to its state as of Tuesday 1 p.m., but still are missing the
changes from that date to the date of the crash. To not lose them, we would have needed to have
the MySQL server store its MySQL binary logs into a safe location (RAID disks, SAN, ...)
different from the place where it stores its data files, so that these logs were not on the destroyed
disk. (That is, we can start the server with a --log-bin option that specifies a location on a
different physical device from the one on which the data directory resides. That way, the logs are
safe even if the device containing the directory is lost.) If we had done this, we would have the
gbichot2-bin.000009 file (and any subsequent files) at hand, and we could apply them using
mysqlbinlog and mysql to restore the most recent data changes with no loss up to the moment of
the crash:

shell> mysqlbinlog gbichot2-bin.000009 ... | mysql

Using mysqldump for Backups :

Here we will discuss how to use mysqldump to produce dump files, and how to reload dump
files. A dump file can be used in several ways:

 As a backup to enable data recovery in case of data loss.


 As a source of data for setting up replication slaves.
 As a source of data for experimentation:
o To make a copy of a database that you can use without changing the original data.
o To test potential upgrade incompatibilities.

mysqldump produces two types of output, depending on whether the --tab option is given:

 Without --tab, mysqldump writes SQL statements to the standard output. This output
consists of CREATE statements to create dumped objects (databases, tables, stored
routines, and so forth), and INSERT statements to load data into tables. The output can be
saved in a file and reloaded later using mysql to recreate the dumped objects. Options are
available to modify the format of the SQL statements, and to control which objects are
dumped.
 With --tab, mysqldump produces two output files for each dumped table. The server
writes one file as tab-delimited text, one line per table row. This file is named
tbl_name.txt in the output directory. The server also sends a CREATE TABLE statement
for the table to mysqldump, which writes it as a file named tbl_name.sql in the output
directory.

By default, mysqldump writes information as SQL statements to the standard output. You can
save the output in a file:

shell> mysqldump [arguments] > file_name

To dump all databases, invoke mysqldump with the --all-databases option:

shell> mysqldump --all-databases > dump.sql

To dump only specific databases, name them on the command line and use the --databases
option:

shell> mysqldump --databases db1 db2 db3 > dump.sql

The --databases option causes all names on the command line to be treated as database names.
Without this option, mysqldump treats the first name as a database name and those following as
table names.

With --all-databases or --databases, mysqldump writes CREATE DATABASE and USE
statements prior to the dump output for each database. This ensures that when the dump file is
reloaded, it creates each database if it does not exist and makes it the default database so
database contents are loaded into the same database from which they came. If you want to cause
the dump file to force a drop of each database before recreating it, use the --add-drop-database
option as well. In this case, mysqldump writes a DROP DATABASE statement preceding each
CREATE DATABASE statement.

To dump a single database, name it on the command line:

shell> mysqldump --databases test > dump.sql

In the single-database case, it is permissible to omit the --databases option:

shell> mysqldump test > dump.sql

The difference between the two preceding commands is that without --databases, the dump
output contains no CREATE DATABASE or USE statements. This has several implications:

 When you reload the dump file, you must specify a default database name so that the
server knows which database to reload.
 For reloading, you can specify a database name different from the original name, which
enables you to reload the data into a different database.
 If the database to be reloaded does not exist, you must create it first.
 Because the output will contain no CREATE DATABASE statement, the --add-drop-
database option has no effect. If you use it, it produces no DROP DATABASE statement.

To dump only specific tables from a database, name them on the command line following the
database name:

shell> mysqldump test t1 t3 t7 > dump.sql

Reloading SQL-Format Backups

To reload a dump file written by mysqldump that consists of SQL statements, use it as input to
the mysql client. If the dump file was created by mysqldump with the --all-databases or --
databases option, it contains CREATE DATABASE and USE statements and it is not necessary
to specify a default database into which to load the data:

shell> mysql < dump.sql

Alternatively, from within mysql, use a source command:

mysql> source dump.sql

If the file is a single-database dump not containing CREATE DATABASE and USE statements,
create the database first (if necessary):

shell> mysqladmin create db1

Then specify the database name when you load the dump file:

shell> mysql db1 < dump.sql

Alternatively, from within mysql, create the database, select it as the default database, and load
the dump file:

mysql> CREATE DATABASE IF NOT EXISTS db1;
mysql> USE db1;
mysql> source dump.sql

Dumping Data in Delimited-Text Format with mysqldump

This section describes how to use mysqldump to create delimited-text dump files. Reloading
such dump files is described later in this chapter.

If you invoke mysqldump with the --tab=dir_name option, it uses dir_name as the output
directory and dumps tables individually in that directory using two files for each table. The table
name is the basename for these files. For a table named t1, the files are named t1.sql and t1.txt.
The .sql file contains a CREATE TABLE statement for the table. The .txt file contains the table
data, one line per table row.

The following command dumps the contents of the db1 database to files in the /tmp directory:

shell> mysqldump --tab=/tmp db1

The .txt files containing table data are written by the server, so they are owned by the system
account used for running the server. The server uses SELECT ... INTO OUTFILE to write the
files, so you must have the FILE privilege to perform this operation, and an error occurs if a
given .txt file already exists.

The server sends the CREATE definitions for dumped tables to mysqldump, which writes them
to .sql files. These files therefore are owned by the user who executes mysqldump.

It is best that --tab be used only for dumping a local server. If you use it with a remote server, the
--tab directory must exist on both the local and remote hosts, and the .txt files will be written by
the server in the remote directory (on the server host), whereas the .sql files will be written by
mysqldump in the local directory (on the client host).

For mysqldump --tab, the server by default writes table data to .txt files one line per row with
tabs between column values, no quotation marks around column values, and newline as the line
terminator. (These are the same defaults as for SELECT ... INTO OUTFILE.)

To enable data files to be written using a different format, mysqldump supports these options:

 --fields-terminated-by=str

The string for separating column values (default: tab).

 --fields-enclosed-by=char

The character within which to enclose column values (default: no character).

 --fields-optionally-enclosed-by=char

The character within which to enclose non-numeric column values (default: no character).

 --fields-escaped-by=char

The character for escaping special characters (default: no escaping).

 --lines-terminated-by=str

The line-termination string (default: newline).

Depending on the value you specify for any of these options, it might be necessary on the
command line to quote or escape the value appropriately for your command interpreter.
Alternatively, specify the value using hex notation. Suppose that you want mysqldump to quote
column values within double quotation marks. To do so, specify double quote as the value for the
--fields-enclosed-by option. But this character is often special to command interpreters and must
be treated specially. For example, on Unix, you can quote the double quote like this:

--fields-enclosed-by='"'

On any platform, you can specify the value in hex:

--fields-enclosed-by=0x22

It is common to use several of the data-formatting options together. For example, to dump tables
in comma-separated values format with lines terminated by carriage-return/newline pairs (\r\n),
use this command (enter it on a single line):

shell> mysqldump --tab=/tmp --fields-terminated-by=,
    --fields-enclosed-by='"' --lines-terminated-by=0x0d0a db1

Should you use any of the data-formatting options to dump table data, you will need to specify
the same format when you reload data files later, to ensure proper interpretation of the file
contents.

Reloading Delimited-Text Format Backups

For backups produced with mysqldump --tab, each table is represented in the output
directory by an .sql file containing the CREATE TABLE statement for the table, and a .txt file
containing the table data. To reload a table, first change location into the output directory. Then
process the .sql file with mysql to create an empty table and process the .txt file to load the data
into the table:

shell> mysql db1 < t1.sql
shell> mysqlimport db1 t1.txt

An alternative to using mysqlimport to load the data file is to use the LOAD DATA INFILE
statement from within the mysql client:

mysql> USE db1;
mysql> LOAD DATA INFILE 't1.txt' INTO TABLE t1;

If you used any data-formatting options with mysqldump when you initially dumped the table,
you must use the same options with mysqlimport or LOAD DATA INFILE to ensure proper
interpretation of the data file contents:

shell> mysqlimport --fields-terminated-by=,
    --fields-enclosed-by='"' --lines-terminated-by=0x0d0a db1 t1.txt

Or:

mysql> USE db1;
mysql> LOAD DATA INFILE 't1.txt' INTO TABLE t1
    -> FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    -> LINES TERMINATED BY '\r\n';

MyISAM Table Maintenance and Crash Recovery:

You can use myisamchk to check, repair, or optimize database tables. The following sections
describe how to perform these operations and how to set up a table maintenance schedule. Even
though table repair with myisamchk is quite secure, it is always a good idea to make a backup
before doing a repair or any maintenance operation that could make a lot of changes to a table.

myisamchk operations that affect indexes can cause FULLTEXT indexes to be rebuilt with full-
text parameters that are incompatible with the values used by the MySQL server.

MyISAM table maintenance can also be done using the SQL statements that perform operations
similar to what myisamchk can do:

 To check MyISAM tables, use CHECK TABLE.


 To repair MyISAM tables, use REPAIR TABLE.
 To optimize MyISAM tables, use OPTIMIZE TABLE.
 To analyze MyISAM tables, use ANALYZE TABLE.

These statements can be used directly or by means of the mysqlcheck client program. One
advantage of these statements over myisamchk is that the server does all the work. With
myisamchk, you must make sure that the server does not use the tables at the same time so that
there is no unwanted interaction between myisamchk and the server.
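
For example, a server-side check-and-repair pass equivalent to a typical myisamchk run might look like this (the database and table names are illustrative):

mysql> CHECK TABLE db1.t1;
mysql> REPAIR TABLE db1.t1;

Or, using the mysqlcheck client to check every table in a database and repair any found to be corrupt:

shell> mysqlcheck --check --auto-repair db1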

Using myisamchk for Crash Recovery:

Even though the MyISAM table format is very reliable (all changes to a table made by an SQL
statement are written before the statement returns), you can still get corrupted tables if any of the
following events occur:

 The mysqld process is killed in the middle of a write.


 An unexpected computer shutdown occurs (for example, the computer is turned off).
 Hardware failures.
 You are using an external program (such as myisamchk) to modify a table that is being
modified by the server at the same time.
 A software bug in the MySQL or MyISAM code.

Typical symptoms of a corrupt table are:

 You get the following error while selecting data from the table:

Incorrect key file for table: '...'. Try to repair it

 Queries don't find rows in the table or return incomplete results.

 If you run mysqld with external locking disabled (which is the default), you cannot reliably
use myisamchk to check a table when mysqld is using the same table. If you can be certain
that no one will access the tables through mysqld while you run myisamchk, you only have
to execute mysqladmin flush-tables before you start checking the tables. If you cannot
guarantee this, you must stop mysqld while you check the tables. If you run myisamchk to
check tables that mysqld is updating at the same time, you may get a warning that a table is
corrupt even when it is not.

 If the server is run with external locking enabled, you can use myisamchk to check tables
at any time. In this case, if the server tries to update a table that myisamchk is using, the
server will wait for myisamchk to finish before it continues.
 If you use myisamchk to repair or optimize tables, you must always ensure that the
mysqld server is not using the table (this also applies if external locking is disabled). If
you do not stop mysqld, you should at least do a mysqladmin flush-tables before you
run myisamchk. Your tables may become corrupted if the server and myisamchk access
the tables simultaneously.

myisamchk works by creating a copy of the .MYD data file row by row. It ends the
repair stage by removing the old .MYD file and renaming the new file to the original file name.
If you use --quick, myisamchk does not create a temporary .MYD file, but instead assumes that
the .MYD file is correct and generates only a new index file without touching the .MYD file.
This is safe, because myisamchk automatically detects whether the .MYD file is corrupt and
aborts the repair if it is. You can also specify the --quick option twice to myisamchk. In this
case, myisamchk does not abort on some errors (such as duplicate-key errors) but instead tries to
resolve them by modifying the .MYD file. Normally the use of two --quick options is useful only
if you have too little free disk space to perform a normal repair. In this case, you should at least
make a backup of the table before running myisamchk.

How to Check MyISAM Tables for Errors

To check a MyISAM table, use the following commands:

• myisamchk tbl_name

This finds 99.99% of all errors. What it cannot find is corruption that involves only the
data file (which is very unusual). If you want to check a table, you should normally run
myisamchk without options or with the -s (silent) option.

• myisamchk -m tbl_name

This finds 99.999% of all errors. It first checks all index entries for errors and then reads
through all rows. It calculates a checksum for all key values in the rows and verifies that
the checksum matches the checksum for the keys in the index tree.

• myisamchk -e tbl_name

This does a complete and thorough check of all data (-e means “extended check”). It does
a check-read of every key for each row to verify that they indeed point to the correct row.
This may take a long time for a large table that has many indexes. Normally, myisamchk
stops after the first error it finds. If you want to obtain more information, you can add the
-v (verbose) option. This causes myisamchk to keep going, up through a maximum of 20
errors.

• myisamchk -e -i tbl_name

This is like the previous command, but the -i option tells myisamchk to print additional
statistical information.

In most cases, a simple myisamchk command with no arguments other than the table name is
sufficient to check a table.
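For routine checking, the single-table commands above can be wrapped in a loop over a database's index files. This is a hedged sketch, not an official tool: the data directory path and database name (`mydb`) are assumptions, and the script simply reports which tables fail the silent check.

```shell
# Check every MyISAM index file in an (assumed) data directory and
# collect the tables that fail the silent check. Run this only when
# mysqld is stopped or after "mysqladmin flush-tables".
DATADIR="${DATADIR:-/var/lib/mysql/mydb}"   # assumed example path
BAD=""

if command -v myisamchk >/dev/null 2>&1; then
    for idx in "$DATADIR"/*.MYI; do
        [ -e "$idx" ] || continue            # glob matched nothing
        # -s: silent -- print only errors; a non-zero exit means trouble.
        myisamchk -s "$idx" >/dev/null 2>&1 || BAD="$BAD $idx"
    done
fi
echo "tables needing repair:${BAD:- none}"
```

Running this only while the server is stopped (or right after flushing tables) avoids the false-corruption warnings described earlier.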

How to Repair MyISAM Tables


If you are going to repair a table from the command line, you must first stop the
mysqld server. Note that when you do mysqladmin shutdown on a remote server, the mysqld
server is still available for a while after mysqladmin returns, until all statement-processing has
stopped and all index changes have been flushed to disk.


Stage 1: Checking your tables

Run myisamchk *.MYI or myisamchk -e *.MYI if you have more time. Use the -s (silent)
option to suppress unnecessary information.

If the mysqld server is stopped, you should use the --update-state option to tell myisamchk to
mark the table as “checked.”

You have to repair only those tables for which myisamchk announces an error. For such tables,
proceed to Stage 2.

If you get unexpected errors when checking (such as out of memory errors), or if myisamchk
crashes, go to Stage 3.

Stage 2: Easy safe repair

First, try myisamchk -r -q tbl_name (-r -q means “quick recovery mode”). This attempts to
repair the index file without touching the data file. If the data file contains everything that it
should and the delete links point at the correct locations within the data file, this should work,
and the table is fixed. Start repairing the next table. Otherwise, use the following procedure:

1. Make a backup of the data file before continuing.


2. Use myisamchk -r tbl_name (-r means “recovery mode”). This removes incorrect rows
and deleted rows from the data file and reconstructs the index file.
3. If the preceding step fails, use myisamchk --safe-recover tbl_name. Safe recovery
mode uses an old recovery method that handles a few cases that regular recovery mode
does not (but is slower).

Note:

If you want a repair operation to go much faster, you should set the values of the sort_buffer_size
and key_buffer_size variables each to about 25% of your available memory when running
myisamchk.
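As a sketch of that sizing rule (Linux's /proc/meminfo is assumed, with a hard-coded 4 GB fallback elsewhere; the table name is a placeholder), the repair invocation could be built like this:

```shell
# Size sort_buffer_size and key_buffer_size to ~25% of total memory
# each before a repair run, per the note above.
MEM_KB=$(awk '/^MemTotal/ {print $2; exit}' /proc/meminfo 2>/dev/null || true)
MEM_KB="${MEM_KB:-4194304}"        # fallback: assume 4 GB of memory
QUARTER_KB=$((MEM_KB / 4))

CMD="myisamchk -r --sort_buffer_size=${QUARTER_KB}K --key_buffer_size=${QUARTER_KB}K tbl_name"
echo "$CMD"                        # printed for review, not executed here
```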

If you get unexpected errors when repairing (such as out of memory errors), or if myisamchk
crashes, go to Stage 3.

Stage 3: Difficult repair

You should reach this stage only if the first 16KB block in the index file is destroyed or contains
incorrect information, or if the index file is missing. In this case, it is necessary to create a new
index file. Do so as follows:

1. Move the data file to a safe place.

2. Use the table description file to create new (empty) data and index files:

   shell> mysql db_name
   mysql> SET autocommit=1;
   mysql> TRUNCATE TABLE tbl_name;
   mysql> quit

3. Copy the old data file back onto the newly created data file. (Do not just move the old file
back onto the new file. You want to retain a copy in case something goes wrong.)
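The crucial point in the final step is copying rather than moving. A small sketch with scratch stand-in files (all paths and the table name here are made up for illustration) shows why the safety copy survives:

```shell
# Sketch of the Stage 3 file shuffle using scratch stand-in files; real
# data directories and table names are assumptions for illustration.
WORK=$(mktemp -d)
DATADIR="$WORK/datadir"; SAFE="$WORK/safe"
mkdir -p "$DATADIR" "$SAFE"
echo "row data" > "$DATADIR/tbl_name.MYD"   # stands in for the old data file

# Move the (possibly corrupt) data file to a safe place.
mv "$DATADIR/tbl_name.MYD" "$SAFE/tbl_name.MYD"

# TRUNCATE TABLE would recreate an empty data file here.
: > "$DATADIR/tbl_name.MYD"

# COPY the old file back; the original stays in $SAFE in case the
# subsequent repair goes wrong.
cp "$SAFE/tbl_name.MYD" "$DATADIR/tbl_name.MYD"

ls "$SAFE"    # the safety copy is still there
```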

Important

If you are using replication, you should stop it prior to performing the above procedure, since it
involves file system operations, and these are not logged by MySQL.

Go back to Stage 2. myisamchk -r -q should work. (This should not be an endless loop.)

You can also use the REPAIR TABLE tbl_name USE_FRM SQL statement, which performs the
whole procedure automatically. There is also no possibility of unwanted interaction between a
utility and the server, because the server does all the work when you use REPAIR TABLE. See
Section 13.7.2.6, “REPAIR TABLE Syntax”.

Stage 4: Very difficult repair

You should reach this stage only if the .frm description file has also crashed. That should never
happen, because the description file is not changed after the table is created:

1. Restore the description file from a backup and go back to Stage 3. You can also restore
the index file and go back to Stage 2. In the latter case, you should start with myisamchk
-r.
2. If you do not have a backup but know exactly how the table was created, create a copy of
the table in another database. Remove the new data file, and then move the .frm
description and .MYI index files from the other database to your crashed database. This
gives you new description and index files, but leaves the .MYD data file alone. Go back
to Stage 2 and attempt to reconstruct the index file.

10.LOCKING IN MYSQL

The Pros and Cons of MySQL Table Locking:

MySQL resource locking and performance go together much like movies and
popcorn. Applying table locking at the right time and in the right way can yield very positive
results, specifically in a noticeable speeding up of your SQL statements. By the same token,
when improperly executed, locking can slow the database down to a crawl or even cause your
entire system to grind to a halt. For this reason, it's crucial to know when and how to perform
table locking. This article aims to cover both these points, while highlighting the pros and cons
of using table locking under different circumstances.

The Benefits of Employing Table Locking

Every time you make a change to a table, MySQL must eventually commit that change to disk.
Making a change to a table that has indexes requires even more work as MySQL needs to
perform the additional step of updating the indexes. The effect of this additional step can be
compounded when you make numerous changes to the data at one time, such as by performing
an UPDATE to key (i.e., indexed) fields. In this case, MySQL might have to perform hundreds or
even thousands of write operations.

When we lock a table, MySQL won't update the index keys for the locked table until we unlock
it. That can result in substantial time savings. Depending on the complexity of the indexing,
judicious use of table locking can speed things up drastically.

Know Your Storage Engine

How you use table locking depends to some extent on what storage engine you're using. If you're
running InnoDB, the default storage engine in MySQL 5.6, MySQL automatically uses row-level
locking so that multiple sessions and applications can read from and write to the same table
simultaneously, without making each other wait. For this storage engine, it's best to avoid using
locks at the table level because they only reduce concurrency, while not offering any extra
protection.

For all storage engines other than InnoDB, MySQL does employ table locking. To get the best
performance from these storage engines, use table locks for tables that are queried often and
rarely inserted into or updated. Even transactions that affect a single row can result in
deadlocks because write operations automatically set locks on all records that are part of
the same index.


Options to Consider Before Table Locking

There are many ways besides table locking to minimize the frequency of deadlocks. These
include committing your transactions often, because small transactions are less prone to
collision. Here are a few others:

• Use the SHOW STATUS command to determine the cause of the deadlocks when they
occur. That can help you to tune your application to avoid future deadlocks.

As of MySQL 3.23.33, you can analyze the table lock contention on your system by
checking the Table_locks_waited and Table_locks_immediate status variables:

mysql> SHOW STATUS LIKE 'Table%';


+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Table_locks_immediate | 39    |
| Table_locks_waited    | 12    |
+-----------------------+-------+

• Always be prepared to re-execute a transaction if it fails due to a deadlock. Deadlocks are
not dangerous, since they protect your data.
• Write smaller SELECT statements so that they run faster and hence lock tables for a
shorter time.
• Add new data to a temporary table and then add all the new rows in one INSERT.

Here is some code that you could use to perform many INSERT and SELECT operations
on a table when concurrent inserts are not possible. It adds rows to a temporary table and
only updates the real table with the records from the temporary table once in a while:

mysql> LOCK TABLES my_table WRITE, temp_table WRITE;
mysql> INSERT INTO my_table SELECT * FROM temp_table;
mysql> TRUNCATE TABLE temp_table;
mysql> UNLOCK TABLES;
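The two status counters shown earlier can be combined into a rough contention figure: the share of lock requests that had to wait. A sketch, using the sample values from the output above:

```shell
# Estimate table-lock contention from SHOW STATUS counters. The values
# below are the sample ones shown earlier; in practice you would read
# them from the server with SHOW STATUS LIKE 'Table%'.
IMMEDIATE=39
WAITED=12
awk -v i="$IMMEDIATE" -v w="$WAITED" \
    'BEGIN { printf "lock contention: %.1f%%\n", 100 * w / (i + w) }'
# prints: lock contention: 23.5%
```

A persistently high percentage suggests the affected tables are candidates for a storage engine with row-level locking.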

Comparing Row and Table Level Locking

Like many factors in database performance tuning, whether row or table level locking is
preferable depends to a large extent on how the tables are used. As such, different tables may
require different lock types. Look at what mix of select and update statements are run against the
database and from there you'll be in a good position to decide whether you want to use a storage
engine with row-level locking or table locking.

The same applies for applications that interact with a database. For example, Web applications
tend to execute a lot of selects. Others may insert into or update tables based on key values. Most
applications that I've ever come across perform very few true deletes. Instead, an
active_indicator flag is set accordingly. The default MySQL MyISAM engine is coincidentally
very well suited to this type of usage.

How MySQL Manages Table Locking

MySQL uses pretty much the same table-locking method for both READ and WRITE locks:

• If there are no locks on the table, put a read/write lock on it.
• Otherwise, put the lock request in the read/write lock queue.

When a lock is released, the lock is first made available to other processes in the write lock
queue, then to the threads in the read lock queue. So if you have many updates for a table,
SELECT statements will wait until there are no more updates.

Implementing Your Own Table Locking

When handling table locking yourself, it's best to serialize your transactions. The correct way to
use the LOCK TABLES command is to begin a transaction with “SET autocommit = 0”
followed by LOCK TABLES. Once you've committed the transaction, call UNLOCK TABLES
to remove the lock. For example, if you needed to write to table T1 and read from table T2, you
could write the following:

SET autocommit=0;
LOCK TABLES T1 WRITE, T2 READ;
-- Issue your read and write commands here...
COMMIT;
UNLOCK TABLES;

Disadvantages of Table Locking

As alluded to above, table-level locking is not a silver bullet. It has both good and bad points.
Here are some of its disadvantages:

• Table locking still allows multiple processes to read from a table at the same time, but if a
process wants to write to a table, it must first get exclusive access, meaning it might have
to wait for other sessions to finish with the table first. During write actions, all other
sessions that want to access this particular table must wait until the write is done.
• Table locking runs into trouble when the disk runs out of free space and needs to make
room before the session can proceed. When that happens, all sessions that want to access
the affected table(s) are made to wait until more disk space becomes available.
• SELECT statements that take a long time to run prevent other sessions from updating the
table in the meantime, making the other sessions appear slow or unresponsive. While a
session waits for exclusive access to the table for updates, other sessions that issue
SELECT statements will queue up behind it, reducing concurrency even for read-only
sessions.
