
Big news: the PG 64-bit XID design and patch are out. Is it really coming?

digoal PostgreSQL码农集散地 2024-07-08

The reference links in this article can be opened via the "Read original" link. Also, some recommended learning resources:

1. A ready-made Docker image with 200+ extensions bundled: "The best PostgreSQL learning image"

2. A cloud lab that needs nothing but a web browser: "Try the open-source PolarDB database for free"

3. A learning map for the open-source PolarDB kernel, best practices, and more: https://www.aliyun.com/database/openpolardb/activity

Follow this account for a steady stream of articles on PostgreSQL, PolarDB, DuckDB, and more.



PG's 32-bit transaction ID is one of its most criticized problems. Combined with MVCC, it makes PG a poor fit for workloads with sustained, highly concurrent updates/deletes that burn through transaction IDs quickly: after at most about 2 billion transaction IDs have been consumed, xids must be frozen before they can be reused, and freezing brings extra I/O overhead, large volumes of WAL, cache invalidation, standby lag, and similar problems. If a freeze lands during peak hours, it is like thunder meeting wildfire: performance jitter is very likely, and the business feels it.


Rant recap: "Tom Lane, please stop squeezing out toothpaste and fix the xid wraparound problem first"

The PG community has been discussing 64-bit xids for a long time; the patch set is now on its 54th revision and has been retargeted at version 18. Whether PG 18 can ship it on schedule remains to be seen.

https://commitfest.postgresql.org/48/4703/

https://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com



Reading the 64-bit XID design document

It boils down to two points:

  • pg_upgrade compatibility: a major-version upgrade via pg_upgrade gains the 64-bit xid capability.

  • Essentially, freeze is lowered from the cluster-wide scope to the page/block scope. It is easy for the tuples of an entire cluster to consume 2^31 transactions, but extremely rare for the tuples within a single page to span 2^31 transactions.

1. On-disk representation of the 64-bit xid

It uses the 16 bytes of the pd_special area at the end of each heap page: two 8-byte fields storing the 64-bit pd_xid_base and pd_multi_base.

The t_xmin/t_xmax stored in the tuple header stay unchanged, but the actual XMIN and XMAX are now computed as follows:

XMIN = t_xmin + pd_xid_base. (1)
XMAX = t_xmax + pd_xid_base/pd_multi_base. (2)
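These two formulas are plain unsigned arithmetic. A minimal sketch, where the typedef and function names are mine for illustration and not the patch's actual code:

```c
#include <stdint.h>

/* Illustrative only: reconstructing 64-bit XMIN/XMAX from the 32-bit
 * values kept in the tuple header plus the page-wide 64-bit bases
 * stored in pd_special.  Field names follow the design doc; the
 * helpers themselves are hypothetical. */
typedef uint32_t ShortTransactionId;   /* on-disk t_xmin / t_xmax */
typedef uint64_t TransactionId64;      /* logical 64-bit xid */

/* formula (1): XMIN = t_xmin + pd_xid_base */
TransactionId64 tuple_xmin(ShortTransactionId t_xmin, TransactionId64 pd_xid_base)
{
    return pd_xid_base + t_xmin;
}

/* formula (2): XMAX = t_xmax + pd_xid_base, or + pd_multi_base when
 * t_xmax holds a multixact id */
TransactionId64 tuple_xmax(ShortTransactionId t_xmax,
                           TransactionId64 pd_xid_base,
                           TransactionId64 pd_multi_base,
                           int is_multixact)
{
    return (is_multixact ? pd_multi_base : pd_xid_base) + t_xmax;
}
```

Note that because the bases are per page, the same stored t_xmin on two different pages can denote completely different 64-bit XMINs.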

2. In-memory representation

In memory, besides the raw tuple, each tuple gets an additional HeapTuple structure storing the XMIN and XMAX precomputed with the formulas above.
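A rough model of that two-level representation; the struct layouts below are illustrative stand-ins for the real HeapTupleHeader/HeapTuple definitions, not copies of them:

```c
#include <stdint.h>

/* Hypothetical mock: the page keeps 32-bit offsets, while the
 * in-memory tuple caches the precomputed 64-bit XMIN/XMAX so that
 * visibility checks never have to go back to the page bases. */
typedef struct DiskTupleHeader
{
    uint32_t t_xmin;            /* low 32 bits, relative to pd_xid_base */
    uint32_t t_xmax;
} DiskTupleHeader;

typedef struct MemHeapTuple
{
    uint64_t xmin;              /* full 64-bit xids, filled on read */
    uint64_t xmax;
    DiskTupleHeader *t_data;    /* points at the on-page header */
} MemHeapTuple;

/* Fill the cached 64-bit fields using formulas (1) and (2)
 * (non-multixact case only, for brevity). */
MemHeapTuple tuple_from_page(DiskTupleHeader *d, uint64_t pd_xid_base)
{
    MemHeapTuple t;
    t.t_data = d;
    t.xmin = pd_xid_base + d->t_xmin;
    t.xmax = pd_xid_base + d->t_xmax;
    return t;
}
```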


3. Page freeze

pd_xid_base/pd_multi_base are adjusted only when the transaction-id span within a page exceeds MaxShortTransactionId. Only if adjusting pd_xid_base/pd_multi_base still cannot bring the computed XMIN/XMAX into the range (pd_xid_base, pd_xid_base + MaxShortTransactionId) does the page actually need to be frozen. In theory that span is about 2^31 (roughly two billion) transactions, which is more than enough for the tuples within a single page.
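The fit check and base shift can be sketched as follows. This is a simplified model over a flat array of 32-bit offsets; the real heap_page_prepare_for_xid() works on page items and also falls back to pruning/freezing, so treat the helper names and logic as assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

#define MaxShortTransactionId 0x7FFFFFFFu   /* 2^31 - 1 */

/* Does xid fit range (3): (pd_xid_base, pd_xid_base + MaxShortTransactionId)? */
bool xid_fits_page(uint64_t xid, uint64_t pd_xid_base)
{
    return xid > pd_xid_base && xid - pd_xid_base <= MaxShortTransactionId;
}

/* Simplified base shift: raise pd_xid_base just enough for new_xid to
 * fit, rebasing every stored 32-bit offset.  Returns false when some
 * tuple on the page is too old to be rebased - the case where pruning
 * or freezing the page becomes necessary. */
bool page_shift_xid_base(uint64_t *pd_xid_base, uint32_t *t_xids, int ntuples,
                         uint64_t new_xid)
{
    if (xid_fits_page(new_xid, *pd_xid_base))
        return true;                        /* already fits, nothing to do */

    uint64_t delta = new_xid - (*pd_xid_base + MaxShortTransactionId);
    for (int i = 0; i < ntuples; i++)
        if (t_xids[i] < delta)
            return false;                   /* an old tuple blocks the shift */
    for (int i = 0; i < ntuples; i++)
        t_xids[i] -= (uint32_t) delta;      /* rebase every tuple offset */
    *pd_xid_base += delta;
    return true;
}
```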


4. pg_upgrade

Upgrades are an interesting problem, and this design solves it rather cleverly.

If an old heap page is not full, i.e. the 16 bytes for the pd_special area at its end are still available, the page's pd_xid_base/pd_multi_base can be computed from the current real 64-bit transaction id together with the upgrade-time epoch and frozen xid (both of which are static values).

If the old heap page is already full, the tuple's t_xmin can be repurposed. Because the upgrade requires a clean shutdown, there are no in-progress transactions, and no unfinished two-phase transactions are allowed either. Every tuple is therefore visible to all transactions, so t_xmin is no longer meaningful; together with t_xmax it makes exactly 8 bytes, which store the real 64-bit xid. This is called "double xmax". At least that's what the design document calls it; call it what you like.
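The packing itself is just the two 32-bit halves of one 64-bit value. A sketch following the design text's note that t_xmin is reused for the high 32 bits; the helper names are mine:

```c
#include <stdint.h>

/* "Double xmax" sketch: on a full page converted after pg_upgrade,
 * t_xmin carries no information (every tuple is visible to everyone),
 * so the two 32-bit header fields together hold one real 64-bit XMAX.
 * Per the design text, t_xmin stores the higher 32 bits. */
uint64_t double_xmax_read(uint32_t t_xmin, uint32_t t_xmax)
{
    return ((uint64_t) t_xmin << 32) | t_xmax;
}

void double_xmax_write(uint64_t xmax64, uint32_t *t_xmin, uint32_t *t_xmax)
{
    *t_xmin = (uint32_t) (xmax64 >> 32);   /* high half into t_xmin */
    *t_xmax = (uint32_t) xmax64;           /* low half stays in t_xmax */
}
```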


5. pg_upgrade crazy mode

To keep pg_upgrade fast, pages are not converted during the upgrade itself. Instead, after the upgrade, when a user first reads a page, the double-xmax conversion or the setup of pd_xid_base/pd_multi_base in the pd_special area is performed then.

So right after a pg_upgrade there may be a brief dip in performance.


The original 64-bit XID design text

+src/backend/access/heap/README.XID64
+
+64-bit Transaction ID's (XID)
+=============================
+
+With a limited number (N = 2^32) of XIDs, a vacuum freeze is required every
+N/2 transactions to prevent wraparound. This causes performance degradation due
+to the need to read and rewrite all not-yet-frozen pages of the tables being
+vacuumed. In each wraparound cycle, SLRU buffers are also truncated.
+
+With 64-bit XIDs, wraparound is effectively postponed to a very distant
+future. Even in a highly loaded system doing 2^32 transactions per day,
+it would take a huge 2^31 days before the first enforced "vacuum to prevent
+wraparound". Buffer truncation and routine vacuum are not enforced, and the DBA
+can plan them independently at the time with the least system load and least
+critical for database performance. Also, it can be done less frequently
+(several times a year vs every several days) on systems with transaction rates
+similar to those mentioned above.
+
+On-disk tuple and page format
+-----------------------------
+
+On-disk tuple format remains unchanged. 32-bit t_xmin and t_xmax store the
+lower parts of 64-bit XMIN and XMAX values. Each heap page has additional
+64-bit pd_xid_base and pd_multi_base fields which are common to all tuples on a
+page. They are placed in the pd_special area - 16 bytes at the end of a heap page.
+Actual XMIN/XMAX for a tuple are calculated upon reading a tuple from a page
+as follows:
+
+XMIN = t_xmin + pd_xid_base. (1)
+XMAX = t_xmax + pd_xid_base/pd_multi_base. (2)
+
+"Double XMAX" page format
+---------------------------------
+
+On the first read of a heap page after pg_upgrade from a 32-bit XID PostgreSQL
+version, a pd_special area with a size of 16 bytes should be added to the page.
+A page may not have space for this, though; in that case it is converted to a
+temporary format called "double XMAX".
+
+After pg_upgrade, tuples don't need t_xmin anymore, as no older transactions
+could still be running. So the tuple header t_xmin field is free, and we reuse
+t_xmin to store the higher 32 bits of the tuple's XMAX.
+
+The double XMAX format is only for full pages that don't have 16 bytes for
+pd_special, so such a page has no room for even a single new tuple. Insert and
+HOT update on double XMAX pages are impossible and not supported. We can only
+read or delete tuples from them.
+
+When we are able to prune a double XMAX page, it is converted to the general
+64-bit XID page format, with all operations on its tuples supported.
+
+In-memory tuple format
+----------------------
+
+In-memory tuple representation consists of two parts:
+- HeapTupleHeader from disk page (contains all heap tuple contents, not only
+header)
+- HeapTuple with additional in-memory fields
+
+The in-memory HeapTuple for each tuple stores 64-bit XMIN/XMAX. They are
+precalculated, using (1) and (2), when the tuple is read from the page.
+
+The filling of XMIN and XMAX in HeapTuple is done in the same way as the other
+fields of HeapTuple struct. It is done in all cases of HeapTuple manipulation.
+
+Update/delete with 64-bit XIDs and 32-bit t_xmin/t_xmax
+--------------------------------------------------------------
+
+When we try to delete/update a tuple, we check that the XMAX for the page fits
+(2), i.e. that t_xmax will not exceed MaxShortTransactionId relative to the
+pd_xid_base/pd_multi_base of its page.
+
+If the current XID doesn't fit a range
+(pd_xid_base, pd_xid_base + MaxShortTransactionId) (3):
+
+- heap_page_prepare_for_xid() will try to increase pd_xid_base/pd_multi_base on
+a page and update all t_xmin/t_xmax of the other tuples on the page to
+correspond to the new pd_xid_base/pd_multi_base.
+
+- If it was impossible, it will try to prune and freeze tuples on a page.
+
+- If this is unsuccessful, it will throw an error. Normally this is very
+unlikely, but it can arise if there is a very old live transaction with an age
+of around 2^32. Basically, this behavior is similar to the vacuum to prevent
+wraparound when XIDs were 32-bit. The DBA should take care to avoid
+very-long-living transactions with an age close to 2^32; such long-living
+transactions are most likely defunct anyway.
+
+Insert with 64-bit XIDs and 32-bit t_xmin/t_xmax
+------------------------------------------------
+
+On insert we check if current XID fits a range (3). Otherwise:
+
+- heap_page_prepare_for_xid() will try to increase pd_xid_base so that t_xmin
+will not exceed MaxShortTransactionId.
+
+- If it is impossible, then it will try to prune and freeze tuples on a page.
+
+Known issue: if pd_xid_base could not be shifted to accommodate a tuple being
+inserted due to a very long-running transaction, we just throw an error. We
+neither try to insert a tuple into another page nor mark the current page as
+full. So, in this (unlikely) case we will get regular insert errors on the next
+tries to insert to the page 'locked' by this very long-running transaction.
+
+Upgrade from 32-bit XID versions
+--------------------------------
+
+pg_upgrade doesn't change pages format itself. It is done lazily after.
+
+1. At first heap page read, tuples on a page are repacked to free 16 bytes
+at the end of a page, possibly freeing space from dead tuples.
+
+2A. The 16 bytes of pd_special are added if there is room for them.
+
+2B. The page is converted to "double XMAX" format if there is no room for
+pd_special.
+
+3. If a page is in double XMAX format after its first read, and vacuum (or
+micro-vacuum at select query) could prune some tuples and free space for
+pd_special, prune_page will add pd_special and convert page from double XMAX
+to general 64-bit XID page format.
+
+This lazy conversion is called only on pages being read. This can slow down
+performance after upgrade, but just for a short period of time while "hot"
+pages are read (and therefore converted to 64-bit format).
+
+There is a special case when the first read of a tuple is done in a read-only
+state (in a read-only transaction or on a replica). These tuples are converted
+"in memory" but not synced "to disk" unless the cluster or transaction changes
+to a read-write state (e.g. the replica is promoted). To support this, we mark
+"in-memory" pages with converted tuples using the REGBUF_CONVERTED bit in the
+buffer descriptor. In a read-write state this will trigger a full-page-write
+xlog record.









Welcome to follow my github (https://github.com/digoal/blog) and never get lost learning databases.

I am currently writing open-course materials that will be released through my video channel in the future. Feel free to follow the video channel.

