Several ways to improve PostgreSQL database performance

  • 2020-05-06 11:51:59
  • OfStack

1. Use EXPLAIN
The EXPLAIN command lets you view a query's execution plan, as described in an earlier blog post. It is our primary tuning tool.
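As a quick sketch (using the tenk1 table and unique1 column from the example below as placeholders), EXPLAIN shows the estimated plan, while EXPLAIN ANALYZE actually runs the query and reports real row counts and timings:

EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100;
-- run the query for real and compare estimates with actual timings:
EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 < 100;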

2. Keep statistics up to date
The statistics used by the planner are not refreshed on every data modification; they are normally updated by VACUUM, ANALYZE, and DDL such as CREATE INDEX.
The statistics behind an execution plan can therefore be stale, and analysis based on that plan may be significantly off.

Below are some statistics related to table tenk1.
SELECT relname, relkind, reltuples, relpages
FROM pg_class
WHERE relname LIKE 'tenk1%';

       relname        | relkind | reltuples | relpages
----------------------+---------+-----------+----------
 tenk1                | r       |     10000 |      358
 tenk1_hundred        | i       |     10000 |       30
 tenk1_thous_tenthous | i       |     10000 |       30
 tenk1_unique1        | i       |     10000 |       30
 tenk1_unique2        | i       |     10000 |       30
(5 rows)
Here relkind is the object type (r is an ordinary table, i is an index), reltuples is the estimated number of rows, and relpages is the number of disk pages the object occupies.
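To refresh these statistics by hand, run ANALYZE on the table (a sketch, again using tenk1):

ANALYZE tenk1;
-- or, with a report on what was sampled:
ANALYZE VERBOSE tenk1;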

3. Use explicit JOINs
An implicit join looks like this: SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id;
If the joins are written explicitly, the join order in the execution plan is easier to control.
Example:
SELECT * FROM a CROSS JOIN b CROSS JOIN c WHERE a.id = b.id AND b.ref = c.id;
SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id);
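To make the planner follow the join order exactly as written, you can also lower join_collapse_limit for the session (the default is 8; this is only a sketch, not a setting to change globally):

SET join_collapse_limit = 1;  -- the planner keeps the JOIN order as written
SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id);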

4. Turn off autocommit (autocommit = false)
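With autocommit on, every statement commits on its own and forces a WAL flush. A minimal sketch of the alternative is to group the work into one explicit transaction (in psql you can also run \set AUTOCOMMIT off):

BEGIN;
INSERT INTO a (id) VALUES (1);
INSERT INTO a (id) VALUES (2);
-- ... many more statements ...
COMMIT;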

5. Bulk inserts are more efficient with the COPY command
When a process inserts into the same table many times, COPY is considerably more efficient: each individual INSERT carries its own parsing, planning, and commit overhead, while COPY loads all the rows in a single statement.
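A sketch with placeholder names (my_table and the file path are assumptions); COPY reads a server-side file, while psql's \copy streams a file from the client:

COPY my_table FROM '/tmp/my_table.csv' WITH (FORMAT csv, HEADER);
-- from psql, reading a file on the client machine:
-- \copy my_table FROM 'my_table.csv' WITH (FORMAT csv, HEADER)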

6. Temporarily drop indexes
When a backup with a large amount of data is restored, the import can take hours. Dropping the indexes first, importing the data, and then rebuilding the indexes is usually much faster, because each index is built once in bulk instead of being updated row by row.
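A sketch of the pattern, assuming a table my_table with an index on column col (all names are placeholders):

DROP INDEX IF EXISTS my_table_col_idx;
COPY my_table FROM '/tmp/my_table.csv' WITH (FORMAT csv);   -- bulk load
CREATE INDEX my_table_col_idx ON my_table (col);            -- rebuild once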

7. Temporarily drop foreign key constraints
If a table has foreign keys, every insert has to check referential integrity, which slows the load down. Creating the foreign keys only after the data import is also an option.
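A sketch, assuming a child table orders that references customers (the table, column, and constraint names are placeholders):

ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;
-- ... bulk load into orders ...
ALTER TABLE orders
  ADD CONSTRAINT orders_customer_id_fkey
  FOREIGN KEY (customer_id) REFERENCES customers (id);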

8. Increase the maintenance_work_mem parameter
Increasing this parameter improves the performance of CREATE INDEX and ALTER TABLE ADD FOREIGN KEY.
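It can be raised just for the loading session instead of globally; the value below is only an example:

SET maintenance_work_mem = '512MB';
CREATE INDEX my_table_col_idx ON my_table (col);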

9. Increase the checkpoint_segments parameter
Increasing this parameter reduces how often checkpoints occur during a large data import, which speeds the import up.
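A postgresql.conf sketch (the value is only an example; on PostgreSQL 9.5 and later this setting was replaced by max_wal_size):

# postgresql.conf
checkpoint_segments = 32        # PostgreSQL 9.4 and earlier
# max_wal_size = 2GB            # the 9.5+ equivalent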

10. Set archive_mode to off
When WAL archiving is disabled with this parameter, the following operations run faster:
・CREATE TABLE AS SELECT
・CREATE INDEX
・ALTER TABLE SET TABLESPACE
・CLUSTER, etc.
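On 9.0 and later it is wal_level = minimal, together with archive_mode = off, that lets these operations skip WAL logging; a postgresql.conf sketch (both settings require a server restart):

# postgresql.conf
archive_mode = off
wal_level = minimal     # also requires max_wal_senders = 0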

11. Run VACUUM ANALYZE at the end
Finally, running VACUUM ANALYZE is recommended whenever the data in a table has changed substantially.
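For example (my_table is a placeholder):

VACUUM ANALYZE my_table;
-- or, for every table in the database:
VACUUM ANALYZE;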

