PostgreSQL, like any database software, requires that certain tasks be performed regularly to achieve optimum performance. The tasks discussed here are required, but they are repetitive in nature and can easily be automated using standard Unix tools such as cron scripts or Windows' Task Scheduler. But it is the database administrator's responsibility to set up appropriate scripts, and to check that they execute successfully.
One obvious maintenance task is creation of backup copies of the data on a regular schedule. Without a recent backup, you have no chance of recovery after a catastrophe (disk failure, fire, mistakenly dropping a critical table, etc.). The backup and recovery mechanisms available in PostgreSQL are discussed at length in Chapter 23, Backup and Restore.
The other main category of maintenance task is periodic “vacuuming” of the database. This activity is discussed in Section 22.1, “Routine Vacuuming”. Closely related to this is updating the statistics that will be used by the query planner, as discussed in Section 22.1.2, “Updating planner statistics”.
Another task that might need periodic attention is log file management. This is discussed in Section 22.3, “Log File Maintenance”.
PostgreSQL is low-maintenance compared to some other database management systems. Nonetheless, appropriate attention to these tasks will go far towards ensuring a pleasant and productive experience with the system.
PostgreSQL's VACUUM command must be run on a regular basis for several reasons:
- To recover or reuse disk space occupied by updated or deleted rows.
- To update data statistics used by the PostgreSQL query planner.
- To protect against loss of very old data due to transaction ID wraparound.
The frequency and scope of the VACUUM operations performed for each of these reasons will vary depending on the needs of each site. Therefore, database administrators must understand these issues and develop an appropriate maintenance strategy. This section concentrates on explaining the high-level issues; for details about command syntax and so on, see the VACUUM reference page.
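By way of illustration, here are some typical invocations (mytable is a hypothetical table name):

VACUUM;                   -- vacuum every table in the current database
VACUUM mytable;           -- vacuum a single table
VACUUM ANALYZE mytable;   -- vacuum and update planner statistics in one pass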
The standard form of VACUUM can run in parallel with production database operations. Commands such as SELECT, INSERT, UPDATE, and DELETE will continue to function as normal, though you will not be able to modify the definition of a table with commands such as ALTER TABLE ADD COLUMN while it is being vacuumed.
Also, VACUUM requires a substantial amount of I/O traffic, which can cause poor performance for other active sessions. There are configuration parameters that can be adjusted to reduce the performance impact of background vacuuming; see Section 17.4.4, “Cost-Based Vacuum Delay”.
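For instance, a minimal sketch of such settings in postgresql.conf, with illustrative values rather than recommendations:

vacuum_cost_delay = 10    # milliseconds to sleep each time the cost limit is reached
vacuum_cost_limit = 200   # accumulated I/O cost that triggers the sleep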
An automated mechanism for performing the necessary VACUUM operations has been added in PostgreSQL 8.1; see Section 22.1.4, “The auto-vacuum daemon”.
In normal PostgreSQL operation, an UPDATE or DELETE of a row does not immediately remove the old version of the row. This approach is necessary to gain the benefits of multiversion concurrency control (see Chapter 12, Concurrency Control): the row version must not be deleted while it is still potentially visible to other transactions. But eventually, an outdated or deleted row version is no longer of interest to any transaction. The space it occupies must be reclaimed for reuse by new rows, to avoid infinite growth of disk space requirements. This is done by running VACUUM.
Clearly, a table that receives frequent updates or deletes will need to be vacuumed more often than tables that are seldom updated. It may be useful to set up periodic cron tasks that VACUUM only selected tables, skipping tables that are known not to change often. This is only likely to be helpful if you have both large heavily-updated tables and large seldom-updated tables; the extra cost of vacuuming a small table isn't enough to be worth worrying about.
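As an illustration, crontab entries along these lines (the schedule, database name, and table names are all hypothetical) could vacuum two busy tables every hour:

# vacuum the heavily-updated tables once an hour
15 * * * *   vacuumdb --table=busy_table1 mydb
30 * * * *   vacuumdb --table=busy_table2 mydb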
There are two variants of the VACUUM command. The first form, known as “lazy vacuum” or just VACUUM, marks expired data in tables and indexes for future reuse; it does not attempt to reclaim the space used by this expired data unless the space is at the end of the table and an exclusive table lock can be easily obtained. Unused space at the start or middle of the file does not result in the file being shortened and space returned to the operating system. This variant of VACUUM can be run concurrently with normal database operations.
The second form is the VACUUM FULL command. This uses a more aggressive algorithm for reclaiming the space consumed by expired row versions. Any space that is freed by VACUUM FULL is immediately returned to the operating system. Unfortunately, this variant of the VACUUM command acquires an exclusive lock on each table while VACUUM FULL is processing it. Therefore, frequently using VACUUM FULL can have an extremely negative effect on the performance of concurrent database queries.
The standard form of VACUUM is best used with the goal of maintaining a fairly level steady-state usage of disk space. If you need to return disk space to the operating system you can use VACUUM FULL; but what's the point of releasing disk space that will only have to be allocated again soon? Moderately frequent standard VACUUM runs are a better approach than infrequent VACUUM FULL runs for maintaining heavily-updated tables.
Recommended practice for most sites is to schedule a database-wide VACUUM once a day at a low-usage time of day, supplemented by more frequent vacuuming of heavily-updated tables if necessary. (Some installations with extremely high update rates vacuum their busiest tables as often as once every few minutes.) If you have multiple databases in a cluster, don't forget to VACUUM each one; the program vacuumdb may be helpful.
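For example, one way to script this from cron (the schedule and options shown are illustrative):

# vacuum and analyze every database in the cluster at 3:00 each morning
0 3 * * *   vacuumdb --all --analyze --quiet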
VACUUM FULL is recommended for cases where you know you have deleted the majority of rows in a table, so that the steady-state size of the table can be shrunk substantially with VACUUM FULL's more aggressive approach. Use plain VACUUM, not VACUUM FULL, for routine vacuuming for space recovery.
If you have a table whose entire contents are deleted on a periodic basis, consider doing it with TRUNCATE rather than using DELETE followed by VACUUM. TRUNCATE removes the entire content of the table immediately, without requiring a subsequent VACUUM or VACUUM FULL to reclaim the now-unused disk space.
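For instance, with a hypothetical session_log table that is emptied nightly:

TRUNCATE session_log;      -- instantaneous, and the disk space is reclaimed at once
-- rather than:
DELETE FROM session_log;   -- leaves dead row versions behind
VACUUM session_log;        -- marks the space reusable, but does not shrink the file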
The PostgreSQL query planner relies on statistical information about the contents of tables in order to generate good plans for queries. These statistics are gathered by the ANALYZE command, which can be invoked by itself or as an optional step in VACUUM. It is important to have reasonably accurate statistics; otherwise poor choices of plans may degrade database performance.
As with vacuuming for space recovery, frequent updates of statistics are more useful for heavily-updated tables than for seldom-updated ones. But even for a heavily-updated table, there may be no need for statistics updates if the statistical distribution of the data is not changing much. A simple rule of thumb is to think about how much the minimum and maximum values of the columns in the table change. For example, a timestamp column that contains the time of row update will have a constantly-increasing maximum value as rows are added and updated; such a column will probably need more frequent statistics updates than, say, a column containing URLs for pages accessed on a website. The URL column may receive changes just as often, but the statistical distribution of its values probably changes relatively slowly.
It is possible to run ANALYZE on specific tables and even just specific columns of a table, so the flexibility exists to update some statistics more frequently than others if your application requires it. In practice, however, it is usually best to just analyze the entire database, because it is a fast operation: ANALYZE uses a statistical random sampling of the rows of a table rather than reading every single row.
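For example (table and column names are hypothetical):

ANALYZE;                         -- analyze every table in the current database
ANALYZE mytable;                 -- analyze a single table
ANALYZE mytable (updated_at);    -- analyze just one column of that table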
Tip
Although per-column tweaking of ANALYZE frequency may not be very productive, you may well find it worthwhile to do per-column adjustment of the level of detail of the statistics collected by ANALYZE. Columns that are heavily used in WHERE clauses and have highly irregular data distributions may require a finer-grain data histogram than other columns. See ALTER TABLE SET STATISTICS.
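A minimal sketch, assuming a hypothetical table mytable with a skewed column named category:

-- gather a finer-grained histogram for this column than the default statistics target provides
ALTER TABLE mytable ALTER COLUMN category SET STATISTICS 500;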
Recommended practice for most sites is to schedule a database-wide ANALYZE once a day at a low-usage time of day; this can usefully be combined with a nightly VACUUM. However, sites with relatively slowly changing table statistics may find that this is overkill, and that less-frequent ANALYZE runs are sufficient.
PostgreSQL's MVCC transaction semantics depend on being able to compare transaction ID (XID) numbers: a row version with an insertion XID greater than the current transaction's XID is “in the future” and should not be visible to the current transaction. But since transaction IDs have limited size (32 bits at this writing) a cluster that runs for a long time (more than 4 billion transactions) would suffer transaction ID wraparound: the XID counter wraps around to zero, and all of a sudden transactions that were in the past appear to be in the future — which means their outputs become invisible. In short, catastrophic data loss. (Actually the data is still there, but that's cold comfort if you can't get at it.) To avoid this, it is necessary to vacuum every table in every database at least once every two billion transactions.
The reason that periodic vacuuming solves the problem is that PostgreSQL distinguishes a special XID, FrozenXID. This XID is always considered older than every normal XID. Normal XIDs are compared using modulo-2^31 arithmetic. This means that for every normal XID, there are two billion XIDs that are “older” and two billion that are “newer”; another way to say it is that the normal XID space is circular with no endpoint. Therefore, once a row version has been created with a particular normal XID, the row version will appear to be “in the past” for the next two billion transactions, no matter which normal XID we are talking about. If the row version still exists after more than two billion transactions, it will suddenly appear to be in the future. To prevent data loss, old row versions must be reassigned the XID FrozenXID sometime before they reach the two-billion-transactions-old mark. Once they are assigned this special XID, they will appear to be “in the past” to all normal transactions regardless of wraparound issues, and so such row versions will be good until deleted, no matter how long that is. This reassignment of old XIDs is handled by VACUUM.
VACUUM's behavior is controlled by the configuration parameter vacuum_freeze_min_age: any XID older than vacuum_freeze_min_age transactions is replaced by FrozenXID. Larger values of vacuum_freeze_min_age preserve transactional information longer, while smaller values increase the number of transactions that can elapse before the table must be vacuumed again.
The maximum time that a table can go unvacuumed is two billion transactions minus the vacuum_freeze_min_age that was used when it was last vacuumed. If it were to go unvacuumed for longer than that, data loss could result. To ensure that this does not happen, the autovacuum facility described in Section 22.1.4, “The auto-vacuum daemon” is invoked on any table that might contain XIDs older than the age specified by the configuration parameter autovacuum_freeze_max_age. (This will happen even if autovacuum is otherwise disabled.) This implies that if a table is not otherwise vacuumed, autovacuum will be invoked on it approximately once every autovacuum_freeze_max_age minus vacuum_freeze_min_age transactions.
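As a worked example: if autovacuum_freeze_max_age were set to 200 million and vacuum_freeze_min_age to 100 million (illustrative values), a table that is never otherwise vacuumed would be force-vacuumed roughly every 200 million minus 100 million = 100 million transactions.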
For tables that are regularly vacuumed for space reclamation purposes, this is of little importance. However, for static tables (including tables that receive inserts, but no updates or deletes), there is no need for vacuuming for space reclamation, and so it can be useful to try to maximize the interval between forced autovacuums on very large static tables. Obviously one can do this either by increasing autovacuum_freeze_max_age or by decreasing vacuum_freeze_min_age.
The sole disadvantage of increasing autovacuum_freeze_max_age is that the pg_clog subdirectory of the database cluster will take more space, because it must store the commit status for all transactions back to the autovacuum_freeze_max_age horizon. The commit status uses two bits per transaction, so if autovacuum_freeze_max_age has its maximum allowed value of a little less than two billion, pg_clog can be expected to grow to about half a gigabyte. If this is trivial compared to your total database size, setting autovacuum_freeze_max_age to its maximum allowed value is recommended. Otherwise, set it depending on what you are willing to allow for pg_clog storage. (The default, 200 million transactions, translates to about 50MB of pg_clog storage.)
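To see where those figures come from: 200 million transactions times 2 bits per transaction is 400 million bits, or 50 million bytes, hence roughly 50MB; scaling the same calculation to the two-billion-transaction maximum gives about 500MB, i.e. half a gigabyte.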
One disadvantage of decreasing vacuum_freeze_min_age is that it may cause VACUUM to do useless work: changing a table row's XID to FrozenXID is a waste of time if the row is modified soon thereafter (causing it to acquire a new XID). So the setting should be large enough that rows are not frozen until they are unlikely to change any more. Another disadvantage of decreasing this setting is that details about exactly which transaction inserted or modified a row will be lost sooner. This information sometimes comes in handy, particularly when trying to analyze what went wrong after a database failure. For these two reasons, decreasing this setting is not recommended except for completely static tables.
To track the age of the oldest XIDs in a database, VACUUM stores XID statistics in the system tables pg_class and pg_database. In particular, the relfrozenxid column of a table's pg_class row contains the freeze cutoff XID that was used by the last VACUUM for that table. All normal XIDs older than this cutoff XID are guaranteed to have been replaced by FrozenXID within the table. Similarly, the datfrozenxid column of a database's pg_database row is a lower bound on the normal XIDs appearing in that database; it is just the minimum of the per-table relfrozenxid values within the database. A convenient way to examine this information is to execute queries such as:

SELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r';
SELECT datname, age(datfrozenxid) FROM pg_database;
The age column measures the number of transactions from the cutoff XID to the current transaction's XID. Immediately after a VACUUM, age(relfrozenxid) should be a little more than the vacuum_freeze_min_age setting that was used (more by the number of transactions started since the VACUUM started). If age(relfrozenxid) exceeds autovacuum_freeze_max_age, an autovacuum will soon be forced for the table.
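A variation of the queries above (this exact form is a suggestion, not from the reference pages) highlights the tables closest to the forced-autovacuum horizon:

SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;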
If for some reason autovacuum fails to clear old XIDs from a table, the system will begin to emit warning messages like this when the database's oldest XIDs reach ten million transactions from the wraparound point:
WARNING:  database "mydb" must be vacuumed within 177009986 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in "mydb".
If these warnings are ignored, the system will shut down and refuse to execute any new transactions once there are fewer than 1 million transactions left until wraparound:
ERROR:  database is shut down to avoid wraparound data loss in database "mydb"
HINT:  Stop the postmaster and use a standalone backend to VACUUM in "mydb".
The 1-million-transaction safety margin exists to let the administrator recover without data loss, by manually executing the required VACUUM commands. However, since the system will not execute commands once it has gone into the safety shutdown mode, the only way to do this is to stop the server and use a single-user backend to execute VACUUM. The shutdown mode is not enforced by a single-user backend. See the postgres reference page for details about using a single-user backend.
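A sketch of the recovery procedure, assuming a hypothetical data directory (see the postgres reference page for the authoritative invocation):

postgres --single -D /usr/local/pgsql/data mydb

Then, at the single-user backend prompt, issue:

VACUUM;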
Beginning in PostgreSQL 8.1, there is a separate optional server process called the autovacuum daemon, whose purpose is to automate the execution of VACUUM and ANALYZE commands. When enabled, the autovacuum daemon runs periodically and checks for tables that have had a large number of inserted, updated or deleted tuples. These checks use the row-level statistics collection facility; therefore, the autovacuum daemon cannot be used unless stats_start_collector and stats_row_level are set to true. Also, it's important to allow a slot for the autovacuum process when choosing the value of superuser_reserved_connections.
The autovacuum daemon, when enabled, runs every autovacuum_naptime seconds. On each run, it selects one database to process and checks each table within that database. VACUUM or ANALYZE commands are issued as needed.
Tables whose relfrozenxid value is more than autovacuum_freeze_max_age transactions old are always vacuumed. Otherwise, two conditions are used to determine which operation(s) to apply. If the number of obsolete tuples since the last VACUUM exceeds the “vacuum threshold”, the table is vacuumed. The vacuum threshold is defined as:

vacuum threshold = vacuum base threshold + vacuum scale factor * number of tuples

where the vacuum base threshold is autovacuum_vacuum_threshold, the vacuum scale factor is autovacuum_vacuum_scale_factor, and the number of tuples is pg_class.reltuples.
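As a worked example, suppose the base threshold is 500, the scale factor is 0.2, and reltuples is 10,000 (illustrative values only): the table is vacuumed once more than 500 + 0.2 * 10000 = 2500 obsolete tuples have accumulated.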
The number of obsolete tuples is obtained from the statistics collector; it is a semi-accurate count updated by each UPDATE and DELETE operation. (It is only semi-accurate because some information may be lost under heavy load.) For analyze, a similar condition is used: the threshold, defined as:

analyze threshold = analyze base threshold + analyze scale factor * number of tuples

is compared to the total number of tuples inserted, updated, or deleted since the last ANALYZE.
The default thresholds and scale factors are taken from postgresql.conf, but it is possible to override them on a table-by-table basis by making entries in the system catalog pg_autovacuum. If a pg_autovacuum row exists for a particular table, the settings it specifies are applied; otherwise the global settings are used. See Section 17.9, “Automatic Vacuuming” for more details on the global settings.
Besides the base threshold values and scale factors, there are five more parameters that can be set for each table in pg_autovacuum. The first, pg_autovacuum.enabled, can be set to false to instruct the autovacuum daemon to skip that particular table entirely. In this case autovacuum will only touch the table if it must do so to prevent transaction ID wraparound.
The next two parameters, the vacuum cost delay (pg_autovacuum.vac_cost_delay) and the vacuum cost limit (pg_autovacuum.vac_cost_limit), are used to set table-specific values for the Cost-Based Vacuum Delay feature.
The last two parameters, pg_autovacuum.freeze_min_age and pg_autovacuum.freeze_max_age, are used to set table-specific values for vacuum_freeze_min_age and autovacuum_freeze_max_age respectively.
If any of the values in pg_autovacuum are set to a negative number, or if a row is not present at all in pg_autovacuum for a particular table, the corresponding values from postgresql.conf are used.
There is not currently any support for making pg_autovacuum entries, except by doing manual INSERTs into the catalog. This feature will be improved in future releases, and it is likely that the catalog definition will change.
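As an illustration only (the column list below reflects the catalog as described in this section, but since the definition is expected to change, verify it against your server before relying on it; mytable is hypothetical), a row that disables autovacuum for one table while leaving every threshold at its global default could be inserted like this:

INSERT INTO pg_autovacuum
    (vacrelid, enabled,
     vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor,
     vac_cost_delay, vac_cost_limit,
     freeze_min_age, freeze_max_age)
VALUES ('mytable'::regclass, false, -1, -1, -1, -1, -1, -1, -1, -1);
-- negative values mean “use the corresponding postgresql.conf setting”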
Caution
The contents of the pg_autovacuum system catalog are currently not saved in database dumps created by the tools pg_dump and pg_dumpall. If you want to preserve them across a dump/reload cycle, make sure you dump the catalog manually.