

Learn PostgreSQL Application Development and Administration in One Day - 8. PostgreSQL Administration

Chapter Outline

I. Privilege System

1 Logical structure

2 Privilege system

3 Schema usage, an important note

4 Users

5 public

6 How to view and interpret an object's current privileges

II. Introduction to Indexes

1 What are indexes for?

2 Index types

3 Bitmap index merge scans

4 Table bloat check

5 Index bloat check

6 Index maintenance

III. System Configuration

1 Storage and filesystem planning

2 Network planning

3 CPU sizing

4 Kernel configuration

5 Resource limits

6 Firewall configuration

IV. Database Initialization

1 Introduction to initdb

2 postgresql.conf parameter configuration

3 pg_hba.conf database firewall configuration

V. PostgreSQL Control

1 Environment variable configuration

2 Stopping the database

3 Restarting

4 Starting

5 Linux auto-start service

VI. Database Backup and Restore

1 Logical backup

2 Logical restore

3 Configuring archiving and streaming replication

4 Creating a streaming replication user

5 Configuring the streaming replication firewall

6 Remote physical base backup

7 Physical incremental backup

8 PITR (point-in-time recovery)

PITR point-in-time recovery example

VII. Database Performance

1 Parameter tuning

2 Performance views pg_stat*

3 TOP SQL

4 AWR

5 auto_explain

6 Reading explain output

VIII. Database Benchmarking

1 Introduction to pgbench

2 TPC-B test

3 Custom tests

IX. Routine Maintenance

Chapter 6: PostgreSQL Administration

1. Privilege System

Logical structure

At the top is the instance (database cluster). An instance can contain multiple databases, each database can contain multiple schemas, and each schema can contain multiple objects.

Objects include tables, materialized views, operators, indexes, views, sequences, functions, and so on.

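As a quick illustration of this hierarchy, the following psql session creates a database, a schema inside it, and a table inside that schema (the names db1, app and t1 are made up for this sketch):

create database db1;                                   -- instance level: a new database
\c db1                                                 -- connect to it
create schema app;                                     -- database level: a new schema
create table app.t1 (id int primary key, info text);   -- schema level: a new object
\d app.t1                                              -- objects are referenced as schema.object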

Within a database, every privilege is tied to a role (user). public is a special role that stands for everyone.

A superuser may operate on any object; an ordinary user can only operate on objects it owns.

In addition, some objects grant default privileges to the public role when they are created, so once they exist, everyone holds those default privileges on them.

Privilege system


Instance-level access is controlled by pg_hba.conf, for example:

# TYPE  DATABASE        USER            ADDRESS                 METHOD    
# "local" is for Unix domain socket connections only    
local   all             all                                     trust    
# IPv4 local connections:    
host    all             all             127.0.0.1/32            trust    
host all postgres 0.0.0.0/0 reject    
host all all 0.0.0.0/0 md5    

Explanation of this configuration:

Any local (Unix domain socket) user may connect to any database without a password.

The postgres user may not connect to any database from any external address.

Any other user may connect to any database from an external address with an md5 password.

Database-level privileges include the right to connect to the database and the right to create schemas inside it.

By default, after a database is created, the public role is allowed to connect to it, i.e. anyone may connect.

By default, after a database is created, nobody except superusers and the owner may create schemas in it.

By default, after a database is created, a schema named public is created automatically, and ALL privileges on that schema are granted to the public role, i.e. anyone may create objects in it.

Schema-level privileges include the right to look up (USAGE) objects in the schema and the right to create objects in it.

By default, a newly created schema grants nothing to the public role, so nobody except superusers and the owner may look up objects in it or create objects in it. A few representative GRANT/REVOKE statements are sketched below.
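The following statements illustrate these database- and schema-level privileges. The role appuser, the database db1 and the schema app are hypothetical names used only for this sketch:

-- database level
revoke connect on database db1 from public;   -- stop "anyone may connect"
grant  connect on database db1 to appuser;    -- allow one role to connect
grant  create  on database db1 to appuser;    -- allow it to create schemas

-- schema level
revoke create on schema public from public;   -- tighten the default public schema
grant  usage  on schema app to appuser;       -- allow lookups in schema app
grant  create on schema app to appuser;       -- allow creating objects in schema app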

Schema usage, an important note

According to the SQL standard, the owner of a schema always owns all objects within it.     
    
PostgreSQL allows schemas to contain objects owned by users other than the schema owner.     
    
This can happen only if the schema owner grants the CREATE privilege on his schema to someone else,     
or a superuser chooses to create objects in it.    
    
So the schema's owner is, by the standard, the owner of everything in the schema, yet PostgreSQL also lets other users create objects in that schema. An object can therefore answer to two owners, and the schema owner has the right to drop such objects.

That both owners can drop the object is, in my personal opinion, a bug.

So never create your own objects inside someone else's schema; it is dangerous.

Object-level privileges differ for each type of object; for details see

https://www.postgresql.org/docs/9.5/static/sql-grant.html

Taking tables as an example, the available privileges are SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER.

GRANT { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER }    
    [, ...] | ALL [ PRIVILEGES ] }    
    ON { [ TABLE ] table_name [, ...]    
         | ALL TABLES IN SCHEMA schema_name [, ...] }    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
GRANT { { SELECT | INSERT | UPDATE | REFERENCES } ( column_name [, ...] )    
    [, ...] | ALL [ PRIVILEGES ] ( column_name [, ...] ) }    
    ON [ TABLE ] table_name [, ...]    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
GRANT { { USAGE | SELECT | UPDATE }    
    [, ...] | ALL [ PRIVILEGES ] }    
    ON { SEQUENCE sequence_name [, ...]    
         | ALL SEQUENCES IN SCHEMA schema_name [, ...] }    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
GRANT { { CREATE | CONNECT | TEMPORARY | TEMP } [, ...] | ALL [ PRIVILEGES ] }    
    ON DATABASE database_name [, ...]    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
GRANT { USAGE | ALL [ PRIVILEGES ] }    
    ON DOMAIN domain_name [, ...]    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
GRANT { USAGE | ALL [ PRIVILEGES ] }    
    ON FOREIGN DATA WRAPPER fdw_name [, ...]    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
GRANT { USAGE | ALL [ PRIVILEGES ] }    
    ON FOREIGN SERVER server_name [, ...]    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
GRANT { EXECUTE | ALL [ PRIVILEGES ] }    
    ON { FUNCTION function_name ( [ [ argmode ] [ arg_name ] arg_type [, ...] ] ) [, ...]    
         | ALL FUNCTIONS IN SCHEMA schema_name [, ...] }    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
GRANT { USAGE | ALL [ PRIVILEGES ] }    
    ON LANGUAGE lang_name [, ...]    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
GRANT { { SELECT | UPDATE } [, ...] | ALL [ PRIVILEGES ] }    
    ON LARGE OBJECT loid [, ...]    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
GRANT { { CREATE | USAGE } [, ...] | ALL [ PRIVILEGES ] }    
    ON SCHEMA schema_name [, ...]    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
GRANT { CREATE | ALL [ PRIVILEGES ] }    
    ON TABLESPACE tablespace_name [, ...]    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
GRANT { USAGE | ALL [ PRIVILEGES ] }    
    ON TYPE type_name [, ...]    
    TO role_specification [, ...] [ WITH GRANT OPTION ]    
    
where role_specification can be:    
    
    [ GROUP ] role_name    
  | PUBLIC    
  | CURRENT_USER    
  | SESSION_USER    
    
GRANT role_name [, ...] TO role_name [, ...] [ WITH ADMIN OPTION ]    

A brief note on some common GRANT options.

WITH GRANT OPTION (for object privileges) and WITH ADMIN OPTION (for role membership) mean that after receiving the privilege or role, the grantee may in turn grant it to others; without the option, the grantee holds the privilege but cannot pass it on. For example:
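A minimal sketch of both options, using the hypothetical roles boss, staff and readonly and a hypothetical table t1:

-- object privilege with GRANT OPTION: boss may re-grant SELECT on t1
grant select on t1 to boss with grant option;
-- boss can now run:  grant select on t1 to staff;

-- role membership with ADMIN OPTION: boss may grant membership in readonly
create role readonly;
grant readonly to boss with admin option;
-- boss can now run:  grant readonly to staff;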

Users

In PostgreSQL, users and roles are the same concept; a user is simply a role with the LOGIN attribute.

public

The public role stands for everyone, i.e. every current and future role. A short sketch of both points follows.
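A minimal sketch; the role name digoal2 and its password are made up for the example, and sbtest2 is the table used elsewhere in this chapter:

create role digoal2 login encrypted password 'pwd123';   -- equivalent to: create user digoal2 encrypted password 'pwd123';

grant select on sbtest2 to public;        -- every role, present or future, may now read sbtest2
revoke select on sbtest2 from public;     -- take it back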

How to view and interpret an object's current privileges

Taking tables as an example:

select relname,relacl from pg_class where relkind='r';    

Or run:

SELECT n.nspname as "Schema",    
  c.relname as "Name",    
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'm' THEN 'materialized view' WHEN 'S' THEN 'sequence' WHEN 'f' THEN 'foreign table' END as "Type",    
  pg_catalog.array_to_string(c.relacl, E'\n') AS "Access privileges",    
  pg_catalog.array_to_string(ARRAY(    
    SELECT attname || E':\n  ' || pg_catalog.array_to_string(attacl, E'\n  ')    
    FROM pg_catalog.pg_attribute a    
    WHERE attrelid = c.oid AND NOT attisdropped AND attacl IS NOT NULL    
  ), E'\n') AS "Column privileges",    
  pg_catalog.array_to_string(ARRAY(    
    SELECT polname    
    || CASE WHEN polcmd != '*' THEN    
           E' (' || polcmd || E'):'    
       ELSE E':'     
       END    
    || CASE WHEN polqual IS NOT NULL THEN    
           E'\n  (u): ' || pg_catalog.pg_get_expr(polqual, polrelid)    
       ELSE E''    
       END    
    || CASE WHEN polwithcheck IS NOT NULL THEN    
           E'\n  (c): ' || pg_catalog.pg_get_expr(polwithcheck, polrelid)    
       ELSE E''    
       END    || CASE WHEN polroles <> '{0}' THEN    
           E'\n  to: ' || pg_catalog.array_to_string(    
               ARRAY(    
                   SELECT rolname    
                   FROM pg_catalog.pg_roles    
                   WHERE oid = ANY (polroles)    
                   ORDER BY 1    
               ), E', ')    
       ELSE E''    
       END    
    FROM pg_catalog.pg_policy pol    
    WHERE polrelid = c.oid), E'\n')    
    AS "Policies"    
FROM pg_catalog.pg_class c    
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace    
WHERE c.relkind IN ('r', 'v', 'm', 'S', 'f')    
  AND n.nspname !~ '^pg_' AND pg_catalog.pg_table_is_visible(c.oid)    
ORDER BY 1, 2;    

The privileges reported look like this:

 Schema |      Name       |   Type   |       Access privileges        | Column privileges | Policies     
--------+-----------------+----------+--------------------------------+-------------------+----------    
 public | sbtest1         | table    | postgres=arwdDxt/postgres     +|                   |     
        |                 |          | digoal=a*r*w*d*D*x*t*/postgres |                   |     
 public | sbtest10        | table    | postgres=arwdDxt/postgres      |                   |     
 public | sbtest10_id_seq | sequence |                                |                   |     
 public | sbtest11        | table    | postgres=arwdDxt/postgres      |                   |     
 public | sbtest11_id_seq | sequence |                                |                   |     
 public | sbtest12        | table    | postgres=arwdDxt/postgres      |                   |     
 public | sbtest12_id_seq | sequence |                                |                   |     

Interpreting the Access privileges column:

rolename=xxx : rolename is the role the privileges were granted to.

=xxx : the privileges were granted to the public role, i.e. to everyone.

/yyyy : the role that granted the privileges.

The privilege letters mean the following:

rolename=xxxx -- privileges granted to a role    
        =xxxx -- privileges granted to PUBLIC    
    
            r -- SELECT ("read")    
            w -- UPDATE ("write")    
            a -- INSERT ("append")    
            d -- DELETE    
            D -- TRUNCATE    
            x -- REFERENCES    
            t -- TRIGGER    
            X -- EXECUTE    
            U -- USAGE    
            C -- CREATE    
            c -- CONNECT    
            T -- TEMPORARY    
      arwdDxt -- ALL PRIVILEGES (for tables, varies for other objects)    
            * -- grant option for preceding privilege    
    
        /yyyy -- role that granted this privilege    

Example

The grantor is the postgres user, and SELECT on table sbtest2 is granted to the digoal user.

postgres=# grant select on sbtest2 to digoal;    
GRANT    
postgres=# \dp+ sbtest2    
                                  Access privileges    
 Schema |  Name   | Type  |     Access privileges     | Column privileges | Policies     
--------+---------+-------+---------------------------+-------------------+----------    
 public | sbtest2 | table | postgres=arwdDxt/postgres+|                   |     
        |         |       | digoal=r/postgres         |                   |     
(1 row)    

When revoking, always revoke exactly the privileges that are actually present: if a privilege still shows up here, revoke it as listed.

For example:

revoke select on sbtest2 from digoal;      

2. Introduction to Indexes

What are indexes for?

Speeding up TUPLE lookup    
    
Primary key and unique constraints    
    
Sorting    

Index types

B-Tree: supports sorting, range queries and exact matches; suitable for all data types; a single index entry may not exceed 1/3 of an index page.

hash: supports equality queries only; suitable for very long fields.

gin: an inverted index; supports equality, containment, overlap and similar queries; suitable for multi-valued types (arrays, full-text search, etc.) and for queries over arbitrary combinations of columns.

gist: an R-Tree-style index; supports containment, overlap, distance and point-in-polygon queries; suitable for geometric types, range types, full-text search, heterogeneous types, etc.

sp-gist: space-partitioned GiST; supports containment, overlap, distance and point-in-polygon queries; suitable for geometric types, range types, full-text search, heterogeneous types, etc.

brin: a block-range index; suitable for columns whose values correlate well with the physical storage order, e.g. time-series data, IoT sensor data, feed data; supports range and equality queries.

rum: an extension index access method; supports full-text search, full-text search combined with an attached scalar column, and full-text search with positional information.
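A rough sketch of how these index types are declared; the table doc and its columns are made up for the example, and the rum line is commented out because it needs the external rum extension, which may not be installed:

create table doc (id int, crt_time timestamp, tags text[], pos point, content tsvector);

create index idx_doc_id     on doc using btree (id);          -- b-tree (the default)
create index idx_doc_id_h   on doc using hash (id);           -- hash
create index idx_doc_tags   on doc using gin (tags);          -- gin on a multi-valued column
create index idx_doc_pos    on doc using gist (pos);          -- gist on a geometric column
create index idx_doc_pos_sp on doc using spgist (pos);        -- sp-gist
create index idx_doc_time   on doc using brin (crt_time);     -- brin on a storage-correlated column
-- create index idx_doc_fts on doc using rum (content);       -- rum, requires: create extension rum;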

Bitmap index merge scans

When a query can use several indexes, PostgreSQL can combine them with a BitmapAnd or BitmapOr scan, as in the sketch below.

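A minimal sketch; the table t, its data and the two single-column indexes are hypothetical, and with suitable data the planner may produce a plan like the commented one:

create table t (c1 int, c2 int);
insert into t select random()*1000, random()*1000 from generate_series(1,1000000);
create index idx_t_c1 on t (c1);
create index idx_t_c2 on t (c2);
analyze t;

explain select * from t where c1 = 1 and c2 = 1;
-- Bitmap Heap Scan on t
--   Recheck Cond: ((c1 = 1) AND (c2 = 1))
--   ->  BitmapAnd
--         ->  Bitmap Index Scan on idx_t_c1
--         ->  Bitmap Index Scan on idx_t_c2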

Table bloat check

SELECT    
  current_database() AS db, schemaname, tablename, reltuples::bigint AS tups, relpages::bigint AS pages, otta,    
  ROUND(CASE WHEN otta=0 OR sml.relpages=0 OR sml.relpages=otta THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS tbloat,    
  CASE WHEN relpages < otta THEN 0 ELSE relpages::bigint - otta END AS wastedpages,    
  CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END AS wastedbytes,    
  CASE WHEN relpages < otta THEN $$0 bytes$$::text ELSE (bs*(relpages-otta))::bigint || $$ bytes$$ END AS wastedsize,    
  iname, ituples::bigint AS itups, ipages::bigint AS ipages, iotta,    
  ROUND(CASE WHEN iotta=0 OR ipages=0 OR ipages=iotta THEN 0.0 ELSE ipages/iotta::numeric END,1) AS ibloat,    
  CASE WHEN ipages < iotta THEN 0 ELSE ipages::bigint - iotta END AS wastedipages,    
  CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes,    
  CASE WHEN ipages < iotta THEN $$0 bytes$$ ELSE (bs*(ipages-iotta))::bigint || $$ bytes$$ END AS wastedisize,    
  CASE WHEN relpages < otta THEN    
    CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta::bigint) END    
    ELSE CASE WHEN ipages < iotta THEN bs*(relpages-otta::bigint)    
      ELSE bs*(relpages-otta::bigint + ipages-iotta::bigint) END    
  END AS totalwastedbytes    
FROM (    
  SELECT    
    nn.nspname AS schemaname,    
    cc.relname AS tablename,    
    COALESCE(cc.reltuples,0) AS reltuples,    
    COALESCE(cc.relpages,0) AS relpages,    
    COALESCE(bs,0) AS bs,    
    COALESCE(CEIL((cc.reltuples*((datahdr+ma-    
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)),0) AS otta,    
    COALESCE(c2.relname,$$?$$) AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,    
    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols    
  FROM    
     pg_class cc    
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname <> $$information_schema$$    
  LEFT JOIN    
  (    
    SELECT    
      ma,bs,foo.nspname,foo.relname,    
      (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,    
      (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2    
    FROM (    
      SELECT    
        ns.nspname, tbl.relname, hdr, ma, bs,    
        SUM((1-coalesce(null_frac,0))*coalesce(avg_width, 2048)) AS datawidth,    
        MAX(coalesce(null_frac,0)) AS maxfracsum,    
        hdr+(    
          SELECT 1+count(*)/8    
          FROM pg_stats s2    
          WHERE null_frac<>0 AND s2.schemaname = ns.nspname AND s2.tablename = tbl.relname    
        ) AS nullhdr    
      FROM pg_attribute att     
      JOIN pg_class tbl ON att.attrelid = tbl.oid    
      JOIN pg_namespace ns ON ns.oid = tbl.relnamespace     
      LEFT JOIN pg_stats s ON s.schemaname=ns.nspname    
      AND s.tablename = tbl.relname    
      AND s.inherited=false    
      AND s.attname=att.attname,    
      (    
        SELECT    
          (SELECT current_setting($$block_size$$)::numeric) AS bs,    
            CASE WHEN SUBSTRING(SPLIT_PART(v, $$ $$, 2) FROM $$#"[0-9]+.[0-9]+#"%$$ for $$#$$)    
              IN ($$8.0$$,$$8.1$$,$$8.2$$) THEN 27 ELSE 23 END AS hdr,    
          CASE WHEN v ~ $$mingw32$$ OR v ~ $$64-bit$$ THEN 8 ELSE 4 END AS ma    
        FROM (SELECT version() AS v) AS foo    
      ) AS constants    
      WHERE att.attnum > 0 AND tbl.relkind=$$r$$    
      GROUP BY 1,2,3,4,5    
    ) AS foo    
  ) AS rs    
  ON cc.relname = rs.relname AND nn.nspname = rs.nspname    
  LEFT JOIN pg_index i ON indrelid = cc.oid    
  LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid    
) AS sml order by wastedbytes desc limit 5    

Index bloat check (the same query, ordered by wasted index bytes)

SELECT    
  current_database() AS db, schemaname, tablename, reltuples::bigint AS tups, relpages::bigint AS pages, otta,    
  ROUND(CASE WHEN otta=0 OR sml.relpages=0 OR sml.relpages=otta THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS tbloat,    
  CASE WHEN relpages < otta THEN 0 ELSE relpages::bigint - otta END AS wastedpages,    
  CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END AS wastedbytes,    
  CASE WHEN relpages < otta THEN $$0 bytes$$::text ELSE (bs*(relpages-otta))::bigint || $$ bytes$$ END AS wastedsize,    
  iname, ituples::bigint AS itups, ipages::bigint AS ipages, iotta,    
  ROUND(CASE WHEN iotta=0 OR ipages=0 OR ipages=iotta THEN 0.0 ELSE ipages/iotta::numeric END,1) AS ibloat,    
  CASE WHEN ipages < iotta THEN 0 ELSE ipages::bigint - iotta END AS wastedipages,    
  CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes,    
  CASE WHEN ipages < iotta THEN $$0 bytes$$ ELSE (bs*(ipages-iotta))::bigint || $$ bytes$$ END AS wastedisize,    
  CASE WHEN relpages < otta THEN    
    CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta::bigint) END    
    ELSE CASE WHEN ipages < iotta THEN bs*(relpages-otta::bigint)    
      ELSE bs*(relpages-otta::bigint + ipages-iotta::bigint) END    
  END AS totalwastedbytes    
FROM (    
  SELECT    
    nn.nspname AS schemaname,    
    cc.relname AS tablename,    
    COALESCE(cc.reltuples,0) AS reltuples,    
    COALESCE(cc.relpages,0) AS relpages,    
    COALESCE(bs,0) AS bs,    
    COALESCE(CEIL((cc.reltuples*((datahdr+ma-    
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)),0) AS otta,    
    COALESCE(c2.relname,$$?$$) AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,    
    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols    
  FROM    
     pg_class cc    
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname <> $$information_schema$$    
  LEFT JOIN    
  (    
    SELECT    
      ma,bs,foo.nspname,foo.relname,    
      (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,    
      (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2    
    FROM (    
      SELECT    
        ns.nspname, tbl.relname, hdr, ma, bs,    
        SUM((1-coalesce(null_frac,0))*coalesce(avg_width, 2048)) AS datawidth,    
        MAX(coalesce(null_frac,0)) AS maxfracsum,    
        hdr+(    
          SELECT 1+count(*)/8    
          FROM pg_stats s2    
          WHERE null_frac<>0 AND s2.schemaname = ns.nspname AND s2.tablename = tbl.relname    
        ) AS nullhdr    
      FROM pg_attribute att     
      JOIN pg_class tbl ON att.attrelid = tbl.oid    
      JOIN pg_namespace ns ON ns.oid = tbl.relnamespace     
      LEFT JOIN pg_stats s ON s.schemaname=ns.nspname    
      AND s.tablename = tbl.relname    
      AND s.inherited=false    
      AND s.attname=att.attname,    
      (    
        SELECT    
          (SELECT current_setting($$block_size$$)::numeric) AS bs,    
            CASE WHEN SUBSTRING(SPLIT_PART(v, $$ $$, 2) FROM $$#"[0-9]+.[0-9]+#"%$$ for $$#$$)    
              IN ($$8.0$$,$$8.1$$,$$8.2$$) THEN 27 ELSE 23 END AS hdr,    
          CASE WHEN v ~ $$mingw32$$ OR v ~ $$64-bit$$ THEN 8 ELSE 4 END AS ma    
        FROM (SELECT version() AS v) AS foo    
      ) AS constants    
      WHERE att.attnum > 0 AND tbl.relkind=$$r$$    
      GROUP BY 1,2,3,4,5    
    ) AS foo    
  ) AS rs    
  ON cc.relname = rs.relname AND nn.nspname = rs.nspname    
  LEFT JOIN pg_index i ON indrelid = cc.oid    
  LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid    
) AS sml order by wastedibytes desc limit 5    

Index maintenance

1. Create the new index

set maintenance_work_mem ='4GB';  -- the larger this is, the faster index creation runs    
set lock_timeout = '1s';  -- avoid long lock waits    
    
create index [CONCURRENTLY] xxx on xxx using xxx (colxxx);  -- depending on the situation, consider CONCURRENTLY (does not block reads or writes)    

2. Drop the old index

set lock_timeout = '1s';  -- avoid long lock waits    
    
drop index oldindex;    

3. Maintenance of primary key / unique constraints is similar, with one extra step (a concrete sketch follows the syntax below).

ALTER TABLE tbl ADD    
    [ CONSTRAINT constraint_name ]    
    { UNIQUE | PRIMARY KEY } USING INDEX index_name    
    [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]    
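A sketch of rebuilding a bloated primary key without blocking reads and writes; the table tbl, its column id and the index and constraint names are assumptions for this example:

set lock_timeout = '1s';

-- 1. build a fresh unique index without blocking DML
create unique index concurrently idx_tbl_pk_new on tbl (id);

-- 2. swap it in as the primary key (dropping the old constraint also drops its index)
begin;
alter table tbl drop constraint tbl_pkey;
alter table tbl add constraint tbl_pkey primary key using index idx_tbl_pk_new;
commit;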

3. System Configuration

1. Storage and filesystem planning

Index disk: holds the index tablespaces.

Estimate capacity, IOPS and bandwidth. SSD is recommended; use ext4 or xfs as the filesystem.

Data disk: holds the data file tablespaces.

Estimate capacity, IOPS and bandwidth. SSD is recommended; use ext4 or xfs as the filesystem.

Redo log disk: holds the WAL directory.

Estimate capacity, IOPS and bandwidth. SSD is recommended; use ext4 or xfs as the filesystem.

Archive disk: holds the WAL archive directory.

Estimate capacity, IOPS and bandwidth. Mechanical disks are acceptable; use ext4, xfs or ZFS as the filesystem; filesystem compression may be enabled.

Backup disk: holds the database backup directory.

Estimate capacity, IOPS and bandwidth. Mechanical disks are acceptable; use ext4, xfs or ZFS as the filesystem; filesystem compression may be enabled.

Log disk (pg_log): holds the database audit and error log directory.

Estimate capacity, IOPS and bandwidth. Mechanical disks are acceptable; use ext4, xfs or ZFS as the filesystem; filesystem compression may be enabled.

2. Network planning

Estimate bandwidth; plan network segments and firewalls.

Using the public network is not recommended.

3. CPU sizing

Estimate the required CPU clock speed and core count from the business requirements; measuring real performance is recommended.

4. Kernel configuration

/etc/sysctl.conf

# add by digoal.zhou    
fs.aio-max-nr = 1048576    
fs.file-max = 76724600    
kernel.core_pattern= /data01/corefiles/core_%e_%u_%t_%s.%p             
# create /data01/corefiles beforehand with mode 777; if it is a symlink, set the target directory to 777 as well    
kernel.sem = 4096 2147483647 2147483646 512000        
# semaphores; inspect with ipcs -l or -u; every 16 processes form a group, and each group needs 17 semaphores    
kernel.shmall = 107374182          
# limit on the combined size of all shared memory segments (80% of RAM recommended)    
kernel.shmmax = 274877906944       
# maximum size of a single shared memory segment (half of RAM recommended); versions > 9.2 use far less shared memory    
kernel.shmmni = 819200             
# how many shared memory segments may exist in total; each PG cluster needs at least 2    
net.core.netdev_max_backlog = 10000    
net.core.rmem_default = 262144           
# The default setting of the socket receive buffer in bytes.    
net.core.rmem_max = 4194304              
# The maximum receive socket buffer size in bytes    
net.core.wmem_default = 262144           
# The default setting (in bytes) of the socket send buffer.    
net.core.wmem_max = 4194304              
# The maximum send socket buffer size in bytes.    
net.core.somaxconn = 4096    
net.ipv4.tcp_max_syn_backlog = 4096    
net.ipv4.tcp_keepalive_intvl = 20    
net.ipv4.tcp_keepalive_probes = 3    
net.ipv4.tcp_keepalive_time = 60    
net.ipv4.tcp_mem = 8388608 12582912 16777216    
net.ipv4.tcp_fin_timeout = 5    
net.ipv4.tcp_synack_retries = 2    
net.ipv4.tcp_syncookies = 1        
# enable SYN cookies: when the SYN backlog overflows, cookies are used to handle connections, which protects against small-scale SYN flood attacks    
net.ipv4.tcp_timestamps = 1        
# reduces time_wait    
net.ipv4.tcp_tw_recycle = 0        
# if set to 1, TIME-WAIT sockets are recycled quickly, but connections from behind NAT may fail; keeping it off on the server side is recommended    
net.ipv4.tcp_tw_reuse = 1          
# enable reuse: TIME-WAIT sockets may be reused for new TCP connections    
net.ipv4.tcp_max_tw_buckets = 262144    
net.ipv4.tcp_rmem = 8192 87380 16777216    
net.ipv4.tcp_wmem = 8192 65536 16777216    
net.nf_conntrack_max = 1200000    
net.netfilter.nf_conntrack_max = 1200000    
vm.dirty_background_bytes = 409600000           
# once system-wide dirty pages reach this amount, the background flusher (pdflush or its successor) writes out pages dirtied more than (dirty_expire_centisecs/100) seconds ago    
vm.dirty_expire_centisecs = 3000                 
# dirty pages older than this are flushed to disk; 3000 means 30 seconds    
vm.dirty_ratio = 95                              
# if background flushing is too slow and dirty pages exceed 95% of memory, user processes that write to disk (fsync, fdatasync, etc.) must flush dirty pages themselves    
# this effectively keeps user processes from flushing dirty pages; very useful with several instances per host when CGROUPs limit per-instance IOPS    
vm.dirty_writeback_centisecs = 100                
# wake-up interval of the background flusher; 100 means 1 second    
vm.mmap_min_addr = 65536    
vm.overcommit_memory = 0         
# allow a small amount of overcommit when allocating memory; if set to 1, the kernel assumes memory is always sufficient; 1 can be used on low-memory test machines    
vm.overcommit_ratio = 90         
# when overcommit_memory = 2, used to compute how much memory may be committed    
vm.swappiness = 0                
# avoid using swap as much as possible    
vm.zone_reclaim_mode = 0         
# disable NUMA zone reclaim (or disable NUMA in the kernel boot options)    
net.ipv4.ip_local_port_range = 40000 65535        
# range of ports automatically assigned to local TCP and UDP connections    
fs.nr_open=20480000    
# upper limit on the number of file handles a single process may open    
    
# pay attention to the following parameters    
# vm.extra_free_kbytes = 4096000    
# vm.min_free_kbytes = 2097152    
# on machines with little memory, do not set the two values above    
# vm.nr_hugepages = 66536        
# when shared_buffers is set above 64GB, huge pages are recommended; see Hugepagesize in /proc/meminfo for the page size    
# vm.lowmem_reserve_ratio = 1 1 1    
# recommended when memory is larger than 64GB; otherwise keep the default 256 256 32    

sysctl -p

5. Resource limits

/etc/security/limits.conf

* soft    nofile  1024000    
* hard    nofile  1024000    
* soft    nproc   unlimited    
* hard    nproc   unlimited    
* soft    core    unlimited    
* hard    core    unlimited    
* soft    memlock unlimited    
* hard    memlock unlimited    

6. Firewall configuration

Example

# private network ranges    
-A INPUT -s 192.168.0.0/16 -j ACCEPT    
-A INPUT -s 10.0.0.0/8 -j ACCEPT    
-A INPUT -s 172.16.0.0/12 -j ACCEPT    

4. Database Initialization

1. Introduction to initdb

initdb initializes a PostgreSQL database cluster.    
    
Usage:    
  initdb [OPTION]... [DATADIR]    
    
Options:    
  -A, --auth=METHOD         default authentication method for local connections    
      --auth-host=METHOD    default authentication method for local TCP/IP connections    
      --auth-local=METHOD   default authentication method for local-socket connections    
 [-D, --pgdata=]DATADIR     location for this database cluster    
  -E, --encoding=ENCODING   set default encoding for new databases    
      --locale=LOCALE       set default locale for new databases    
      --lc-collate=, --lc-ctype=, --lc-messages=LOCALE    
      --lc-monetary=, --lc-numeric=, --lc-time=LOCALE    
                            set default locale in the respective category for    
                            new databases (default taken from environment)    
      --no-locale           equivalent to --locale=C    
      --pwfile=FILE         read password for the new superuser from file    
  -T, --text-search-config=CFG    
                            default text search configuration    
  -U, --username=NAME       database superuser name    
  -W, --pwprompt            prompt for a password for the new superuser    
  -X, --xlogdir=XLOGDIR     location for the transaction log directory    
    
Less commonly used options:    
  -d, --debug               generate lots of debugging output    
  -k, --data-checksums      use data page checksums    
  -L DIRECTORY              where to find the input files    
  -n, --noclean             do not clean up after errors    
  -N, --nosync              do not wait for changes to be written safely to disk    
  -s, --show                show internal settings    
  -S, --sync-only           only sync data directory    
    
Other options:    
  -V, --version             output version information, then exit    
  -?, --help                show this help, then exit    
    
If the data directory is not specified, the environment variable PGDATA    
is used.    

Example

initdb -D $PGDATA -E UTF8 --locale=C -U postgres -X $PG_XLOG -W    

2. postgresql.conf parameter configuration

Example

listen_addresses = '0.0.0.0'      
port = 1921      
max_connections = 200      
tcp_keepalives_idle = 60      
tcp_keepalives_interval = 10      
tcp_keepalives_count = 6      
shared_buffers = 512MB      
maintenance_work_mem = 64MB      
dynamic_shared_memory_type = posix      
vacuum_cost_delay = 0      
bgwriter_delay = 10ms      
bgwriter_lru_maxpages = 1000      
bgwriter_lru_multiplier = 5.0      
bgwriter_flush_after = 0      
old_snapshot_threshold = -1      
wal_level = minimal      
synchronous_commit = off      
full_page_writes = on      
wal_buffers = 64MB      
wal_writer_delay = 10ms      
wal_writer_flush_after = 4MB      
checkpoint_timeout = 35min      
max_wal_size = 2GB      
min_wal_size = 80MB      
checkpoint_completion_target = 0.1      
checkpoint_flush_after = 0      
random_page_cost = 1.5      
log_destination = 'csvlog'      
logging_collector = on      
log_directory = 'pg_log'      
log_truncate_on_rotation = on      
log_checkpoints = on      
log_connections = on      
log_disconnections = on      
log_error_verbosity = verbose      
log_temp_files = 8192      
log_timezone = 'Asia/Hong_Kong'      
autovacuum = on      
log_autovacuum_min_duration = 0      
autovacuum_naptime = 20s      
autovacuum_vacuum_scale_factor = 0.05      
autovacuum_freeze_max_age = 1500000000      
autovacuum_multixact_freeze_max_age = 1600000000      
autovacuum_vacuum_cost_delay = 0      
vacuum_freeze_table_age = 1400000000      
vacuum_multixact_freeze_table_age = 1500000000      
datestyle = 'iso, mdy'      
timezone = 'Asia/Hong_Kong'      
lc_messages = 'C'      
lc_monetary = 'C'      
lc_numeric = 'C'      
lc_time = 'C'      
default_text_search_config = 'pg_catalog.english'      

3. pg_hba.conf database firewall configuration

Example

host all all 0.0.0.0/0 md5      

5. PostgreSQL Control

1. Environment variable configuration

/home/digoal/.bash_profile

export PGPORT=1921      
export PGDATA=/home/digoal/pgdata      
export LANG=en_US.utf8      
export PGHOME=/home/digoal/pgsql9.6      
export LD_LIBRARY_PATH=$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH      
export PATH=$PGHOME/bin:$PATH:.      
export MANPATH=$PGHOME/share/man:$MANPATH      
export PGHOST=$PGDATA      
export PGUSER=postgres      
export PGDATABASE=postgres    

2. Stopping the database

pg_ctl stop -m fast -D $PGDATA    

3. Restarting

pg_ctl restart -m fast -D $PGDATA    

4. Starting

pg_ctl start -D $PGDATA    

5. Linux auto-start service

vi /etc/rc.local    
    
su - digoal -c "pg_ctl start -D $PGDATA"    

6. Database Backup and Restore

1. Logical backup

pg_dump --help    
    
pg_dump dumps a database as a text file or to other formats.    
    
Usage:    
  pg_dump [OPTION]... [DBNAME]    
    
General options:    
  -f, --file=FILENAME          output file or directory name    
  -F, --format=c|d|t|p         output file format (custom, directory, tar,    
                               plain text (default))    
  -j, --jobs=NUM               use this many parallel jobs to dump    
  -v, --verbose                verbose mode    
  -V, --version                output version information, then exit    
  -Z, --compress=0-9           compression level for compressed formats    
  --lock-wait-timeout=TIMEOUT  fail after waiting TIMEOUT for a table lock    
  -?, --help                   show this help, then exit    
    
Options controlling the output content:    
  -a, --data-only              dump only the data, not the schema    
  -b, --blobs                  include large objects in dump    
  -c, --clean                  clean (drop) database objects before recreating    
  -C, --create                 include commands to create database in dump    
  -E, --encoding=ENCODING      dump the data in encoding ENCODING    
  -n, --schema=SCHEMA          dump the named schema(s) only    
  -N, --exclude-schema=SCHEMA  do NOT dump the named schema(s)    
  -o, --oids                   include OIDs in dump    
  -O, --no-owner               skip restoration of object ownership in    
                               plain-text format    
  -s, --schema-only            dump only the schema, no data    
  -S, --superuser=NAME         superuser user name to use in plain-text format    
  -t, --table=TABLE            dump the named table(s) only    
  -T, --exclude-table=TABLE    do NOT dump the named table(s)    
  -x, --no-privileges          do not dump privileges (grant/revoke)    
  --binary-upgrade             for use by upgrade utilities only    
  --column-inserts             dump data as INSERT commands with column names    
  --disable-dollar-quoting     disable dollar quoting, use SQL standard quoting    
  --disable-triggers           disable triggers during data-only restore    
  --enable-row-security        enable row security (dump only content user has    
                               access to)    
  --exclude-table-data=TABLE   do NOT dump data for the named table(s)    
  --if-exists                  use IF EXISTS when dropping objects    
  --inserts                    dump data as INSERT commands, rather than COPY    
  --no-security-labels         do not dump security label assignments    
  --no-synchronized-snapshots  do not use synchronized snapshots in parallel jobs    
  --no-tablespaces             do not dump tablespace assignments    
  --no-unlogged-table-data     do not dump unlogged table data    
  --quote-all-identifiers      quote all identifiers, even if not key words    
  --section=SECTION            dump named section (pre-data, data, or post-data)    
  --serializable-deferrable    wait until the dump can run without anomalies    
  --snapshot=SNAPSHOT          use given snapshot for the dump    
  --strict-names               require table and/or schema include patterns to    
                               match at least one entity each    
  --use-set-session-authorization    
                               use SET SESSION AUTHORIZATION commands instead of    
                               ALTER OWNER commands to set ownership    
    
Connection options:    
  -d, --dbname=DBNAME      database to dump    
  -h, --host=HOSTNAME      database server host or socket directory    
  -p, --port=PORT          database server port number    
  -U, --username=NAME      connect as specified database user    
  -w, --no-password        never prompt for password    
  -W, --password           force password prompt (should happen automatically)    
  --role=ROLENAME          do SET ROLE before dump    
    
If no database name is supplied, then the PGDATABASE environment    
variable value is used.    

See also: PostgreSQL Best Practices - Introduction to Online Logical Backup and Restore.

Example

Back up the digoal database. The DDL does not include tablespaces, so the corresponding tablespaces do not have to be created in advance before restoring.

pg_dump -f ./digoal.dmp -F p -C -E UTF8 --no-tablespaces -h 127.0.0.1 -p 1999 -U postgres digoal        

2. Logical restore

pg_restore --help    
    
pg_restore restores a PostgreSQL database from an archive created by pg_dump.    
    
Usage:    
  pg_restore [OPTION]... [FILE]    
    
General options:    
  -d, --dbname=NAME        connect to database name    
  -f, --file=FILENAME      output file name    
  -F, --format=c|d|t       backup file format (should be automatic)    
  -l, --list               print summarized TOC of the archive    
  -v, --verbose            verbose mode    
  -V, --version            output version information, then exit    
  -?, --help               show this help, then exit    
    
Options controlling the restore:    
  -a, --data-only              restore only the data, no schema    
  -c, --clean                  clean (drop) database objects before recreating    
  -C, --create                 create the target database    
  -e, --exit-on-error          exit on error, default is to continue    
  -I, --index=NAME             restore named index    
  -j, --jobs=NUM               use this many parallel jobs to restore    
  -L, --use-list=FILENAME      use table of contents from this file for    
                               selecting/ordering output    
  -n, --schema=NAME            restore only objects in this schema    
  -O, --no-owner               skip restoration of object ownership    
  -P, --function=NAME(args)    restore named function    
  -s, --schema-only            restore only the schema, no data    
  -S, --superuser=NAME         superuser user name to use for disabling triggers    
  -t, --table=NAME             restore named relation (table, view, etc.)    
  -T, --trigger=NAME           restore named trigger    
  -x, --no-privileges          skip restoration of access privileges (grant/revoke)    
  -1, --single-transaction     restore as a single transaction    
  --disable-triggers           disable triggers during data-only restore    
  --enable-row-security        enable row security    
  --if-exists                  use IF EXISTS when dropping objects    
  --no-data-for-failed-tables  do not restore data of tables that could not be    
                               created    
  --no-security-labels         do not restore security labels    
  --no-tablespaces             do not restore tablespace assignments    
  --section=SECTION            restore named section (pre-data, data, or post-data)    
  --strict-names               require table and/or schema include patterns to    
                               match at least one entity each    
  --use-set-session-authorization    
                               use SET SESSION AUTHORIZATION commands instead of    
                               ALTER OWNER commands to set ownership    
    
Connection options:    
  -h, --host=HOSTNAME      database server host or socket directory    
  -p, --port=PORT          database server port number    
  -U, --username=NAME      connect as specified database user    
  -w, --no-password        never prompt for password    
  -W, --password           force password prompt (should happen automatically)    
  --role=ROLENAME          do SET ROLE before restore    
    
The options -I, -n, -P, -t, -T, and --section can be combined and specified    
multiple times to select multiple objects.    
    
If no input file name is supplied, then standard input is used.    

If the backup was taken in -F p format, the output is plain SQL text and can simply be executed:

psql postgres postgres -f ./digoal.dmp      

3. Configuring archiving and streaming replication

To support physical backups you must enable WAL archiving; enabling streaming replication is also recommended.

Configure the archive directory:

# mkdir -p /disk1/digoal/arch    
# chown digoal:digoal /disk1/digoal/arch    

Configure streaming replication and archiving:

vi postgresql.conf

max_worker_processes = 16      
    
max_wal_senders = 8    
    
wal_level = replica        # at least 'replica' is required for archiving and streaming ('hot_standby'/'archive' on 9.5 and earlier)    
    
archive_mode = on          # archive_command only takes effect when archive_mode is on    
    
archive_command = 'test ! -f /disk1/digoal/arch/%f && cp %p /disk1/digoal/arch/%f'    

Restart the database:

pg_ctl restart -m fast -D $PGDATA    

4. Create a streaming replication user

postgres=# create role rep replication login encrypted password 'rep123';    
CREATE ROLE    

5. Configure the streaming replication firewall

vi $PGDATA/pg_hba.conf

host replication rep 0.0.0.0/0 md5    

pg_ctl reload -D $PGDATA

6. Remote physical base backup

pg_basebackup --help    
pg_basebackup takes a base backup of a running PostgreSQL server.    
    
Usage:    
  pg_basebackup [OPTION]...    
    
Options controlling the output:    
  -D, --pgdata=DIRECTORY receive base backup into directory    
  -F, --format=p|t       output format (plain (default), tar)    
  -r, --max-rate=RATE    maximum transfer rate to transfer data directory    
                         (in kB/s, or use suffix "k" or "M")    
  -R, --write-recovery-conf    
                         write recovery.conf after backup    
  -S, --slot=SLOTNAME    replication slot to use    
  -T, --tablespace-mapping=OLDDIR=NEWDIR    
                         relocate tablespace in OLDDIR to NEWDIR    
  -x, --xlog             include required WAL files in backup (fetch mode)    
  -X, --xlog-method=fetch|stream    
                         include required WAL files with specified method    
      --xlogdir=XLOGDIR  location for the transaction log directory    
  -z, --gzip             compress tar output    
  -Z, --compress=0-9     compress tar output with given compression level    
    
General options:    
  -c, --checkpoint=fast|spread    
                         set fast or spread checkpointing    
  -l, --label=LABEL      set backup label    
  -P, --progress         show progress information    
  -v, --verbose          output verbose messages    
  -V, --version          output version information, then exit    
  -?, --help             show this help, then exit    
    
Connection options:    
  -d, --dbname=CONNSTR   connection string    
  -h, --host=HOSTNAME    database server host or socket directory    
  -p, --port=PORT        database server port number    
  -s, --status-interval=INTERVAL    
                         time between status packets sent to server (in seconds)    
  -U, --username=NAME    connect as specified database user    
  -w, --no-password      never prompt for password    
  -W, --password         force password prompt (should happen automatically)    

Create the backup directory; the backup can be taken over the streaming replication protocol to a remote host.

# mkdir -p /disk1/digoal/backup    
# chown digoal:digoal /disk1/digoal/backup    

Example 1: back up over the streaming replication protocol, using the replication user

export PGPASSWORD=rep123    
    
pg_basebackup -D /disk1/digoal/backup -F t -z -h <database IP> -p <database port> -U rep     

Example 2: back up with cp

The steps are:

select pg_start_backup('test');    
    
copy the data directory, tablespace directories and so on    
    
select pg_stop_backup();    

7. Physical incremental backup

There are two kinds of physical incremental backup.

One is incremental backup of the data files, which requires 10.0 or tools such as pg_rman or pg_probackup:

https://github.com/postgrespro/pg_probackup

https://github.com/ossc-db/pg_rman

The other is archive-based incremental backup:

simply copy the archived WAL files to the backup storage.

8. PITR (point-in-time recovery)

1. If restoring on a different host, first deploy the PostgreSQL software. The deployed version should match the version of the backed-up data files, with the same compile options.

How do you check the compile options? Run the following on the source host:

pg_config    

When deploying the software, also deploy every PostgreSQL extension used by the source database, with matching versions.

2. Prepare the recovery directory; make sure it has enough space.

3. Unpack the data files and the archives into their respective directories.

If there are tablespaces, symlink them into $PGDATA/pg_tblspc, or unpack them directly under $PGDATA/pg_tblspc.

4. Edit postgresql.conf

The main parameters to adjust are the listening port, shared_buffers, preload libraries and so on, to prevent startup failures such as port conflicts.

5. Configure $PGDATA/recovery.conf and set the recovery target

The key setting is restore_command. (If you are building a streaming replication standby instead, configure primary_conninfo = '' # e.g. 'host=localhost port=5432'.)

#restore_command = ''           # e.g. 'cp /mnt/server/archivedir/%f %p'    

Set which point in time, XID or target name to recover to. (If you are building a streaming replication standby, set recovery_target_timeline = 'latest'.) A filled-in sketch follows the parameter list below.

#recovery_target_name = ''      # e.g. 'daily backup 2011-01-26'    
#    
#recovery_target_time = ''      # e.g. '2004-07-14 22:39:00 EST'    
#    
#recovery_target_xid = ''    
#    
#recovery_target_inclusive = true    
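A minimal recovery.conf sketch for this PITR scenario, assuming the archives were copied to /disk1/digoal/arch as configured earlier; the target values are illustrative, taken from the example below:

restore_command = 'cp /disk1/digoal/arch/%f %p'    
recovery_target_time = '2015-06-19 09:45:28'       # or: recovery_target_xid = '561574'    
recovery_target_inclusive = false                  # stop just before the target    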

6. Start the recovery instance

pg_ctl start -D <recovery instance $PGDATA>    

PITR point-in-time recovery example

Data change operations (on the primary)

postgres=# select pg_create_restore_point('digoal');      
 pg_create_restore_point       
-------------------------      
 1D6/FB17EC08      
(1 row)      

First create a table that records restore-point XIDs, storing the XID, the time and a description. (It stands in for the pg_create_restore_point() system function.)

postgres=> create table restore_point(id serial primary key, xid int8, crt_time timestamp default now(), point text);      
CREATE TABLE      

Create a function that replaces pg_create_restore_point() and inserts a restore point:

postgres=> create or replace function create_restore_point(i_point text) returns void as $$      
declare       
begin      
  insert into restore_point(xid,point) values (txid_current(),i_point);      
end;      
$$ language plpgsql strict;      
CREATE FUNCTION      

Insert a restore point:

postgres=> select create_restore_point('digoal');      
 create_restore_point       
----------------------      
       
(1 row)      

Query the table:

postgres=> select * from restore_point;      
 id |  xid   |          crt_time          | point        
----+--------+----------------------------+--------      
  1 | 561426 | 2015-06-19 09:18:57.525475 | digoal      
(1 row)      
postgres=> select * from restore_point where point='digoal';      
 id |  xid   |          crt_time          | point        
----+--------+----------------------------+--------      
  1 | 561426 | 2015-06-19 09:18:57.525475 | digoal      
(1 row)      

Next, simulate a restore scenario:

postgres=> create table test(id int,info text);      
CREATE TABLE      
postgres=> insert into test select generate_series(1,1000),md5(random()::text);      
INSERT 0 1000      

Record the current hash value, to compare against after recovery:

postgres=> select sum(hashtext(t.*::text)) from test t;      
     sum            
--------------      
 -69739904784      
(1 row)      

Next I will run a delete; before deleting, I first create a restore-point record.

postgres=> select create_restore_point('before delete test');      
 create_restore_point       
----------------------      
       
(1 row)      
postgres=> delete from test;      
DELETE 1000      
postgres=> select * from restore_point where point='before delete test';      
 id |  xid   |          crt_time          |       point              
----+--------+----------------------------+--------------------      
  2 | 561574 | 2015-06-19 09:45:28.030295 | before delete test      
(1 row)      

I only need to recover to XID 561574. Next comes the simulated recovery.

Actively generate some XLOG to trigger archiving (an alternative is sketched after the output below):

postgres=> select pg_xlogfile_name(pg_current_xlog_location());      
     pg_xlogfile_name           
--------------------------      
 000000010000000200000041      
(1 row)      
postgres=> insert into test select generate_series(1,100000);      
INSERT 0 100000      
postgres=> insert into test select generate_series(1,100000);      
INSERT 0 100000      
postgres=> select pg_xlogfile_name(pg_current_xlog_location());      
     pg_xlogfile_name           
--------------------------      
 000000010000000200000042      
(1 row)      
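As a side note, instead of generating data until a WAL segment fills up, a superuser can force a segment switch directly. pg_switch_xlog() is the 9.x name; in PostgreSQL 10 it was renamed pg_switch_wal():

select pg_switch_xlog();                                -- close the current WAL segment so it can be archived
select pg_xlogfile_name(pg_current_xlog_location());    -- confirm that a new segment is now in use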

Download the Aliyun RDS backup and archives to the local host.

Locally, install a PostgreSQL built with the same compile-time options as Aliyun RDS (for example the block size). Ideally the installed modules should match as well; no extra modules are used here, so that does not matter.

Check the RDS configuration and version information through pg_settings, to reproduce them locally for the recovery:

postgres=> select name,setting,unit from pg_settings where category='Preset Options';      
         name          | setting | unit       
-----------------------+---------+------      
 block_size            | 8192    |        
 data_checksums        | on      |       
 integer_datetimes     | on      |       
 max_function_args     | 100     |       
 max_identifier_length | 63      |       
 max_index_keys        | 32      |       
 segment_size          | 131072  | 8kB      
 server_version        | 9.4.1   |       
 server_version_num    | 90401   |       
 wal_block_size        | 8192    |       
 wal_segment_size      | 2048    | 8kB      
(11 rows)      
postgres=> select version();      
                                                   version                                                          
--------------------------------------------------------------------------------------------------------------      
 PostgreSQL 9.4.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3), 64-bit      
(1 row)      

Compile and install PostgreSQL 9.4.1 locally with the same compile options as RDS. For Aliyun RDS these options are all at their defaults.

  --with-blocksize=BLOCKSIZE      
                          set table block size in kB [8]      
  --with-segsize=SEGSIZE  set table segment size in GB [1]      
  --with-wal-blocksize=BLOCKSIZE      
                          set WAL block size in kB [8]      
  --with-wal-segsize=SEGSIZE      
                          set WAL segment size in MB [16]      
      
      
# useradd digoal      
# su - digoal      
$ vi .bash_profile      
# add by digoal      
export PS1="$USER@`/bin/hostname -s`-> "      
export PGPORT=1931      
export PGDATA=/home/digoal/pg_root      
export LANG=en_US.utf8      
export PGHOME=/home/digoal/pgsql9.4.1      
export LD_LIBRARY_PATH=$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH      
export DATE=`date +"%Y%m%d%H%M"`      
export PATH=$PGHOME/bin:$PATH:.      
export MANPATH=$PGHOME/share/man:$MANPATH      
export PGHOST=$PGDATA      
export PGDATABASE=postgres      
alias rm='rm -i'      
alias ll='ls -lh'      
unalias vi      
      
# wget https://ftp.postgresql.org/pub/source/v9.4.1/postgresql-9.4.1.tar.bz2      
# tar -jxvf postgresql-9.4.1.tar.bz2      
# cd postgresql-9.4.1      
# ./configure --prefix=/home/digoal/pgsql9.4.1 --with-pgport=1931 --with-perl --with-python --with-tcl --with-openssl --with-pam --with-ldap --with-libxml --with-libxslt --enable-thread-safety --enable-debug      
# gmake world && gmake install-world      

Download the backup and archive files and unpack them.

For the base backup, choose one taken before the point in time you need to recover to; for the archives, take all archive files produced after that backup.

total 453M      
-rw-r--r-- 1 root   root    17M Jun 19 10:23 000000010000000200000040.tar.gz      
-rw-r--r-- 1 root   root    17M Jun 19 10:23 000000010000000200000041.tar.gz      
-rw-r--r-- 1 root   root   404M Jun 19 10:23 hins668881_xtra_20150618232331.tar.gz      
      
# mkdir /home/digoal/pg_root      
# mv hins668881_xtra_20150618232331.tar.gz /home/digoal/pg_root      
# tar -zxvf hins668881_xtra_20150618232331.tar.gz      
[root@db-172-16-3-150 ~]# tar -xvf 000000010000000200000040.tar.gz       
000000010000000200000040      
[root@db-172-16-3-150 ~]# tar -xvf 000000010000000200000041.tar.gz       
000000010000000200000041      
[root@db-172-16-3-150 ~]# mv 000000010000000200000040 /home/digoal/
