PostgreSQL vs Oracle OLTP Test Methodology - 1
Small-transaction throughput of PostgreSQL 9.4.1 and Oracle 12c, tested separately on the same host and storage.
The results are for reference only; interested readers are welcome to run their own tests or modify the test cases.
(The test tools differ and their overheads differ, so the results may not be directly comparable.)
(Even with the same tool, driver performance can differ, so a completely unbiased comparison is hard.)
(The goal of this article is therefore to push each product to its own limits and expose each product's own problems and defects, not a head-to-head comparison; take it as entertainment.)
Stress test results summary:
PostgreSQL 9.4.1:
UPDATE
Average TPS: 95021
Minimum TPS: 90017
Maximum TPS: 113981
SELECT
Average TPS: 328895
Minimum TPS: 327336
Maximum TPS: 330360
INSERT
Average TPS: 70433
Minimum TPS: 57417.4
Maximum TPS: 75758.9
Oracle 12c:
Details:
https://blog.163.com/digoal@126/blog/static/16387704020154431045764/
UPDATE
Average TPS: 32xxx
Minimum TPS: 29000
Maximum TPS: 33900
SELECT
Average TPS: 36xxx
Minimum TPS: 36300
Maximum TPS: 36620
INSERT
Average TPS: 9xxx
Minimum TPS: 8750
Maximum TPS: 10500
[ Test Details ]
Stress test content:
Primary-key based queries and updates.
Inserts into a table with a primary key.
Test environment:
Server: IBM X3950 purchased in 2009; by today's standards its CPUs are fairly slow.
CPU: 4 * 6-core Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
Memory: 32 * 4GB DDR2 533MHz
Disk: Shannon Systems (Shanghai) 1.2TB Direct-IO PCI-E SSD
Database: PostgreSQL 9.4.1
OS: CentOS 6.6 x64
Filesystem: EXT4, noatime,nodiratime,nobarrier
Update/query data volume: 50 million rows
Insert data volume: 10 billion rows
PostgreSQL configuration parameters:
listen_addresses = '0.0.0.0' # what IP address(es) to listen on;
port = 1921 # (change requires restart)
max_connections = 56 # (change requires restart)
superuser_reserved_connections = 13 # (change requires restart)
unix_socket_directories = '.' # comma-separated list of directories
tcp_keepalives_idle = 60 # TCP_KEEPIDLE, in seconds;
tcp_keepalives_interval = 10 # TCP_KEEPINTVL, in seconds;
tcp_keepalives_count = 10 # TCP_KEEPCNT;
shared_buffers = 8GB # min 128kB; with enough memory, setting this close to the database's active data set size gives the best performance.
huge_pages = try # on, off, or try
maintenance_work_mem = 1GB # min 1MB
autovacuum_work_mem = 1GB # min 1MB, or -1 to use maintenance_work_mem
dynamic_shared_memory_type = posix # the default is the first option
bgwriter_delay = 10ms # 10-10000ms between rounds
synchronous_commit = off # synchronization level;
wal_sync_method = fdatasync # the default is the first option
wal_buffers = 16MB # min 32kB, -1 sets based on shared_buffers
wal_writer_delay = 10ms # 1-10000 milliseconds
checkpoint_timeout = 10min # For databases that generate XLOG very frequently, set this longer than the time needed to fill checkpoint_segments, to reduce the performance impact.
# For example, if producing 512 XLOG segments takes 8 minutes, set this to more than 8 minutes.
# The longer this is, the longer crash recovery will take if the database goes down.
checkpoint_completion_target = 0.9
checkpoint_segments = 512 # in logfile segments, min 1, 16MB each; size this to at least shared_buffers
random_page_cost = 2.0 # same scale as above
effective_cache_size = 100GB
log_destination = 'csvlog' # Valid values are combinations of
logging_collector = on # Enable capturing of stderr and csvlog
log_truncate_on_rotation = on # If on, an existing log file with the
log_checkpoints = on
log_connections = on
log_disconnections = on
log_error_verbosity = verbose # terse, default, or verbose messages
log_timezone = 'PRC'
autovacuum = on # Enable autovacuum subprocess? 'on'
log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and
autovacuum_vacuum_scale_factor = 0.002 # fraction of table size before vacuum; the smaller the better for databases that generate garbage very frequently
autovacuum_analyze_scale_factor = 0.001 # fraction of table size before analyze
autovacuum_vacuum_cost_delay = 0ms # default vacuum cost delay; do not delay on databases with very good IO capability
datestyle = 'iso, mdy'
timezone = 'PRC'
lc_messages = 'C' # locale for system error message
lc_monetary = 'C' # locale for monetary formatting
lc_numeric = 'C' # locale for number formatting
lc_time = 'C' # locale for time formatting
default_text_search_config = 'pg_catalog.english'
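As a rough sanity check on the checkpoint settings above, the WAL capacity implied by checkpoint_segments can be computed directly. A minimal sketch: the 16MB segment size is the PostgreSQL default, and the 17 MB/s WAL generation rate is an assumed figure for illustration only:

```python
# WAL capacity implied by the checkpoint settings above.
checkpoint_segments = 512
segment_mb = 16                      # default WAL segment size

wal_capacity_mb = checkpoint_segments * segment_mb
print(wal_capacity_mb)               # 8192 MB = 8 GB, on par with shared_buffers

# At an assumed WAL generation rate (hypothetical 17 MB/s), a segment-triggered
# checkpoint would fire roughly every 8 minutes -- consistent with the advice
# that checkpoint_timeout should exceed the time needed to fill the segments.
assumed_wal_mb_per_sec = 17.0
minutes_between_checkpoints = wal_capacity_mb / assumed_wal_mb_per_sec / 60
print(round(minutes_between_checkpoints, 1))  # ~8 minutes
```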
Generate data for the query and update stress tests:
digoal=> create table tbl(id int, info text, crt_time timestamptz default now()) tablespace tbs_digoal;
CREATE TABLE
digoal=> insert into tbl select generate_series(1,50000000),now(),now();
INSERT 0 50000000
digoal=> set maintenance_work_mem='4GB';
SET
digoal=> alter table tbl add constraint tbl_pkey primary key(id) using index tablespace tbs_digoal_idx;
ALTER TABLE
digoal=> \dt+ tbl
List of relations
Schema | Name | Type | Owner | Size | Description
--------+------+-------+--------+---------+-------------
digoal | tbl | table | digoal | 3634 MB |
(1 row)
digoal=> \di+ tbl_pkey
List of relations
Schema | Name | Type | Owner | Table | Size | Description
--------+----------+-------+--------+-------+---------+-------------
digoal | tbl_pkey | index | digoal | tbl | 1063 MB |
(1 row)
Update test by primary key; test duration over 8 hours.
$ vi test.sql
\setrandom id 1 50000000
update tbl set crt_time=now() where id=:id;
nohup pgbench -M prepared -n -f test.sql -P 10 -c 26 -j 26 -T 30000000 >./log 2>&1 &
After more than 8 hours of testing, the table grew by a bit over 100 MB; the index size was unchanged.
digoal=> \dt+
List of relations
Schema | Name | Type | Owner | Size | Description
--------+------+-------+--------+---------+-------------
digoal | tbl | table | digoal | 3842 MB |
(1 row)
digoal=> \di+
List of relations
Schema | Name | Type | Owner | Table | Size | Description
--------+----------+-------+--------+-------+---------+-------------
digoal | tbl_pkey | index | digoal | tbl | 1063 MB |
(1 row)
The statistics show tbl has been updated more than 2.1 billion times.
digoal=> select * from pg_stat_all_tables where relname='tbl';
-[ RECORD 1 ]-------+------------------------------
relid | 16387
schemaname | digoal
relname | tbl
seq_scan | 2
seq_tup_read | 100000000
idx_scan | 2136267592
idx_tup_fetch | 2136267592
n_tup_ins | 100278348
n_tup_upd | 2136267592
n_tup_del | 0
n_tup_hot_upd | 2097129671
n_live_tup | 50081001
n_dead_tup | 135956
n_mod_since_analyze | 3111673
last_vacuum |
last_autovacuum | 2015-05-02 08:27:02.690159+08
last_analyze |
last_autoanalyze | 2015-05-02 08:27:05.800603+08
vacuum_count | 0
autovacuum_count | 580
analyze_count | 0
autoanalyze_count | 579
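The n_tup_hot_upd and n_tup_upd counters above also explain why the index stayed at 1063 MB: almost all updates were HOT (heap-only tuple) updates, which never touch the index. A quick check using the numbers from the pg_stat_all_tables output:

```python
# HOT-update ratio computed from the pg_stat_all_tables output above.
n_tup_upd = 2136267592        # total row updates
n_tup_hot_upd = 2097129671    # heap-only tuple updates (no index maintenance)

hot_ratio = n_tup_hot_upd / float(n_tup_upd)
print(round(hot_ratio * 100, 1))  # ~98.2% of updates were HOT
```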
The test results can be loaded into the database for analysis, or analyzed with R.
digoal=> create table az(tps numeric);
CREATE TABLE
digoal=# \copy digoal.az from program 'awk ''NR>2 {print $4}'' /home/postgres/log'
COPY 3057
digoal=> select avg(tps),min(tps),max(tps),count(*) from az;
avg | min | max | count
--------------------+---------+---------+-------
60217.684494602552 | 27666.0 | 65708.7 | 3057
(1 row)
Average TPS: 60217
Minimum TPS: 27666
Maximum TPS: 65708
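The awk pipeline above simply pulls the tps column out of pgbench's -P progress lines. The same aggregation can be sketched in Python, here against a fabricated two-line sample rather than the real log:

```python
# Python equivalent of: awk 'NR>2 {print $4}' log
# pgbench -P 10 emits progress lines such as:
#   progress: 10.0 s, 60321.5 tps, lat 0.430 ms stddev 0.120
# where the 4th whitespace-separated field is the TPS value.
sample_log = """\
pgbench header line 1 (placeholder)
pgbench header line 2 (placeholder)
progress: 10.0 s, 60321.5 tps, lat 0.430 ms stddev 0.120
progress: 20.0 s, 61020.3 tps, lat 0.425 ms stddev 0.118
"""

tps = [float(line.split()[3])
       for line in sample_log.splitlines()[2:]    # NR>2: skip the first two lines
       if line.startswith("progress:")]

print(sum(tps) / len(tps), min(tps), max(tps))
```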
Figure 1:
Every TPS dip correlates with a checkpoint: after a checkpoint, the first write to each newly dirtied page must log a full page image, which puts pressure on the WAL write buffer. (What actually matters is the ability to write several consecutive wal_block_size blocks: with block_size=32K and wal_block_size=8K, one dirty page requires 4 WAL blocks. If WAL fsync capacity is 10,000 8K blocks per second and the writes right after a checkpoint all land on different data pages, writing WAL can become the bottleneck, i.e. TPS may drop below 2,500.) For the analysis, see:
https://blog.163.com/digoal@126/blog/static/163877040201542103933969/
https://blog.163.com/digoal@126/blog/static/1638770402015463252387/
https://blog.163.com/digoal@126/blog/static/16387704020154651655783/
https://blog.163.com/digoal@126/blog/static/16387704020154653422892/
https://blog.163.com/digoal@126/blog/static/16387704020154811421484/
https://blog.163.com/digoal@126/blog/static/16387704020154129958753/
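The "below 2,500" figure in the explanation above follows from simple arithmetic; this sketch only re-derives it from the numbers assumed in the text (32K data pages, 8K WAL blocks, 10,000 8K-block fsyncs per second):

```python
# Worst-case TPS bound right after a checkpoint, when every transaction's
# first write dirties a distinct page and therefore logs a full page image.
block_size = 32 * 1024            # data page size assumed in the text
wal_block_size = 8 * 1024         # WAL block size
wal_blocks_per_fpw = block_size // wal_block_size
print(wal_blocks_per_fpw)         # 4 WAL blocks per full page image

wal_blocks_per_sec = 10000        # assumed WAL fsync capacity (8K blocks/second)
tps_bound = wal_blocks_per_sec / wal_blocks_per_fpw
print(tps_bound)                  # 2500 -- the floor mentioned in the text
```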
With full page writes disabled, the TPS scatter plot looks as follows; the checkpoint impact disappears:
Figure 2:
Query test, test duration over 8 hours:
$ vi test.sql
\setrandom id 1 50000000
select * from tbl where id=:id;
nohup pgbench -M prepared -n -f test.sql -P 10 -c 38 -j 38 -T 30000000 >./log 2>&1 &
Load the test results:
digoal=> create table az(tps numeric);
CREATE TABLE
digoal=> \c digoal postgres
You are now connected to database "digoal" as user "postgres".
digoal=# \copy digoal.az from program 'awk ''NR>2 {print $4}'' /home/postgres/log'
COPY 3027
digoal=> select avg(tps),min(tps),max(tps),count(*) from digoal.az;
avg | min | max | count
---------------------+----------+----------+-------
328895.445688800793 | 327336.0 | 330360.6 | 3027
(1 row)
Average TPS: 328895
Minimum TPS: 327336
Maximum TPS: 330360
Figure 3:
Query TPS is quite stable, staying above 327,000 tps.
Insert test, test duration over 8 hours:
digoal=> drop table tbl;
digoal=> create table tbl(id serial primary key using index tablespace tbs_digoal_idx, info text, crt_time timestamptz default now()) tablespace tbs_digoal;
$ vi test.sql
insert into tbl(info) values ('hello world');
nohup pgbench -M prepared -n -f test.sql -P 10 -c 20 -j 20 -T 30000000 >./log 2>&1 &
Load the test results:
Data volume inserted after about 4 hours:
digoal=> \dt+
List of relations
Schema | Name | Type | Owner | Size | Description
--------+------+-------+--------+--------+-------------
digoal | tbl | table | digoal | 69 GB |
(1 row)
digoal=> \di+
List of relations
Schema | Name | Type | Owner | Table | Size | Description
--------+----------+-------+--------+-------+-------+-------------
digoal | tbl_pkey | index | digoal | tbl | 20 GB |
(1 row)
digoal=> create table az(tps numeric);
CREATE TABLE
digoal=> \c digoal postgres
You are now connected to database "digoal" as user "postgres".
digoal=# \copy digoal.az from program 'awk ''NR>2 {print $4}'' /home/postgres/log'
COPY 1385
digoal=# select avg(tps),min(tps),max(tps),count(*) from digoal.az;
avg | min | max | count
--------------------+---------+---------+-------
69839.050685920578 | 65283.7 | 72175.5 | 1385
(1 row)
Average TPS: 70433
Minimum TPS: 57417.4
Maximum TPS: 75758.9
Figure 4:
Checkpoints also affect inserts somewhat, but far less than they affect updates, because there are far fewer concurrent xlog full page writes (pages are filled completely before a new one is extended, so only the first write to each new page needs a full page image).
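The difference can be roughed out: sequential inserts amortize one full page image over all the rows on a page, whereas the random update workload touches a different page almost every time. A sketch with a hypothetical 60-byte on-disk row footprint (the row size is an assumption, not a measured value):

```python
# Amortization of full page images (FPW): sequential inserts vs. random updates.
page_size = 32 * 1024   # data page size assumed earlier in the article
row_size = 60           # hypothetical on-disk footprint per inserted row

rows_per_page = page_size // row_size
print(rows_per_page)    # hundreds of inserted rows share one page...

# ...so after a checkpoint, sequential inserts pay roughly 1 FPW per
# rows_per_page rows, while random updates on a 50M-row table pay ~1 FPW per
# updated row -- about rows_per_page times more FPW traffic per transaction.
fpw_ratio = rows_per_page
print(fpw_ratio)
```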
[ Other ]
1. Re-running the update test on PostgreSQL 9.5, performance is likewise affected by checkpoints:
listen_addresses = '0.0.0.0' # what IP address(es) to listen on;
port = 1922 # (change requires restart)
max_connections = 100 # (change requires restart)
superuser_reserved_connections = 13 # (change requires restart)
unix_socket_directories = '.' # comma-separated list of directories
tcp_keepalives_idle = 60 # TCP_KEEPIDLE, in seconds;
tcp_keepalives_interval = 10 # TCP_KEEPINTVL, in seconds;
tcp_keepalives_count = 10 # TCP_KEEPCNT;
shared_buffers = 8GB # min 128kB
huge_pages = try # on, off, or try
maintenance_work_mem = 1GB # min 1MB
autovacuum_work_mem = 1GB # min 1MB, or -1 to use maintenance_work_mem
dynamic_shared_memory_type = posix # the default is the first option
vacuum_cost_delay = 0 # 0-100 milliseconds
vacuum_cost_limit = 10000 # 1-10000 credits
bgwriter_delay = 10ms # 10-10000ms between rounds
synchronous_commit = off # synchronization level;
wal_sync_method = fdatasync # the default is the first option
wal_buffers = 16MB # min 32kB, -1 sets based on shared_buffers
wal_writer_delay = 10ms # 1-10000 milliseconds
checkpoint_timeout = 10min # range 30s-1h
max_wal_size = 16GB # size in proportion to shared_buffers; larger values work better for DML-heavy databases
min_wal_size = 512MB
random_page_cost = 2.0 # same scale as above
effective_cache_size = 64GB
log_destination = 'csvlog' # Valid values are combinations of
logging_collector = on # Enable capturing of stderr and csvlog
log_truncate_on_rotation = on # If on, an existing log file with the
log_checkpoints = on
log_connections = on
log_disconnections = on
log_error_verbosity = verbose # terse, default, or verbose messages
log_timezone = 'PRC'
autovacuum = on # Enable autovacuum subprocess? 'on'
log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and
autovacuum_vacuum_scale_factor = 0.002 # fraction of table size before vacuum
autovacuum_analyze_scale_factor = 0.001 # fraction of table size before analyze
autovacuum_vacuum_cost_delay = 0 # default vacuum cost delay for
datestyle = 'iso, mdy'
timezone = 'PRC'
lc_messages = 'C' # locale for system error message
lc_monetary = 'C' # locale for monetary formatting
lc_numeric = 'C' # locale for number formatting
lc_time = 'C' # locale for time formatting
default_text_search_config = 'pg_catalog.english'
Why checkpoints have such a large impact will be analyzed in a later article based on the checkpoint source code.
[ Summary ]
1. The test results expose a weakness in PostgreSQL's checkpoint behavior: the impact is too large (in fact it has little to do with the IO issued by the checkpointer itself; the update test case generates a burst of FULL PAGE WRITEs right after each checkpoint, which increases WAL write buffer latency and drags down TPS).
For the analysis, see my other articles:
https://blog.163.com/digoal@126/blog/static/163877040201542103933969/
https://blog.163.com/digoal@126/blog/static/1638770402015463252387/
https://blog.163.com/digoal@126/blog/static/16387704020154651655783/
https://blog.163.com/digoal@126/blog/static/16387704020154653422892/
https://blog.163.com/digoal@126/blog/static/16387704020154811421484/
https://blog.163.com/digoal@126/blog/static/16387704020154129958753/
2. If you do not want to use pgbench to test PG, you can also use python; however, since psycopg2 currently does not support bind variables (server-side prepared statements), it is much less efficient.
See:
https://blog.163.com/digoal@126/blog/static/1638770402015151653642/
import threading
import time
import psycopg2
import random

xs = 12000
tps = dict()

class n_t(threading.Thread):  # The timer class is derived from the class threading.Thread
    def __init__(self, num):
        threading.Thread.__init__(self)
        self.thread_num = num

    def run(self):  # Overwrite run() method, put what you want the thread to do here
        conn = psycopg2.connect(database="digoal", user="digoal", password="digoal", host="/data01/pgdata/pg_root", port="1922")
        curs = conn.cursor()
        conn.autocommit = True
        tps[self.thread_num] = dict()
        f = open("/tmp/pg_test." + str(self.thread_num), "w")
        for x in range(1, 3001):
            start_t = time.time()
            for i in range(0, xs):
                curs.execute("update tbl set info=now(),crt_time=now() where id=%(id)s", {"id": random.randrange(1, 50000000)})
            stop_t = time.time()
            tps[self.thread_num][x] = round(xs / (stop_t - start_t), 2)
            res = "Round: " + str(x) + " TID: " + str(self.thread_num) + " Sec: " + str(round((stop_t - start_t), 2)) + " tps: " + str(tps[self.thread_num][x])
            print >> f, res  # Python 2 syntax, as in the original script
            f.flush()
        f.close()

def test():
    t_names = []
    for i in xrange(0, 27):
        t_names.append(n_t(i))
    for t in t_names:
        t.start()
    return

if __name__ == '__main__':
    test()
3. https://www.slideshare.net/petereisentraut/programming-with-python-and-postgresql
Last updated: 2017-04-01 13:37:07