Why does memory seem to go missing when a Kubernetes Pod is OOM-killed?

Source: stackoverflow


Question

I am running into an issue where a k8s pod is being OOM-killed, but with some odd conditions and observations.

The pod is a REST service built with Go 1.15.6, running on a 64-bit x86 architecture. When the pod runs on a VM-based cluster everything is fine and the service behaves normally. When the same service runs on a node provisioned directly on hardware, it appears to leak memory and is eventually OOM-killed.

The observation is that, on the problematic configuration, "kubectl top pod" reports steadily increasing memory utilization until it reaches the defined limit (64MiB), at which point the OOM killer is invoked.

Observing from inside the pod with "top" shows that the memory usage of the individual processes in the pod is stable, at roughly 40MiB RSS. The VIRT, RES and SHR values reported by top remain stable over time, with only minor fluctuations.

I have profiled the Go code extensively, including taking memory profiles (pprof) over time. There is no sign of a leak in the Go code itself, which is consistent with the correct behaviour in the VM-based environment and with the observations from top.
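
Roughly speaking, the profiling hooks look like the usual net/http/pprof setup; the snippet below is a simplified sketch rather than the actual service code, and the 6060 debug port is only illustrative:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Expose pprof on a side port; heap profiles can then be captured
	// periodically with: go tool pprof http://localhost:6060/debug/pprof/heap
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the REST service itself would be started here ...
	select {}
}

Comparing heap profiles captured hours apart is what shows the Go heap itself staying flat.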

The OOM message below also shows that the total RSS used by the pod is only 38.75MiB (sum of rss = 9919 pages * 4k = 38.75MiB).

kernel: [651076.945552] xxxxxxxxxxxx invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=999
kernel: [651076.945556] CPU: 35 PID: 158127 Comm: xxxxxxxxxxxx Not tainted 5.4.0-73-generic #82~18.04.1
kernel: [651076.945558] Call Trace:
kernel: [651076.945567]  dump_stack+0x6d/0x8b
kernel: [651076.945573]  dump_header+0x4f/0x200
kernel: [651076.945575]  oom_kill_process+0xe6/0x120
kernel: [651076.945577]  out_of_memory+0x109/0x510
kernel: [651076.945582]  mem_cgroup_out_of_memory+0xbb/0xd0
kernel: [651076.945584]  try_charge+0x79a/0x7d0
kernel: [651076.945585]  mem_cgroup_try_charge+0x75/0x190
kernel: [651076.945587]  __add_to_page_cache_locked+0x1e1/0x340
kernel: [651076.945592]  ? scan_shadow_nodes+0x30/0x30
kernel: [651076.945594]  add_to_page_cache_lru+0x4f/0xd0
kernel: [651076.945595]  pagecache_get_page+0xea/0x2c0
kernel: [651076.945596]  filemap_fault+0x685/0xb80
kernel: [651076.945600]  ? __switch_to_asm+0x40/0x70
kernel: [651076.945601]  ? __switch_to_asm+0x34/0x70
kernel: [651076.945602]  ? __switch_to_asm+0x40/0x70
kernel: [651076.945603]  ? __switch_to_asm+0x34/0x70
kernel: [651076.945604]  ? __switch_to_asm+0x40/0x70
kernel: [651076.945605]  ? __switch_to_asm+0x34/0x70
kernel: [651076.945606]  ? __switch_to_asm+0x40/0x70
kernel: [651076.945608]  ? filemap_map_pages+0x181/0x3b0
kernel: [651076.945611]  ext4_filemap_fault+0x31/0x50
kernel: [651076.945614]  __do_fault+0x57/0x110
kernel: [651076.945615]  __handle_mm_fault+0xdde/0x1270
kernel: [651076.945617]  handle_mm_fault+0xcb/0x210
kernel: [651076.945621]  __do_page_fault+0x2a1/0x4d0
kernel: [651076.945625]  ? __audit_syscall_exit+0x1e8/0x2a0
kernel: [651076.945627]  do_page_fault+0x2c/0xe0 
kernel: [651076.945628]  page_fault+0x34/0x40
kernel: [651076.945630] RIP: 0033:0x5606e773349b 
kernel: [651076.945634] Code: Bad RIP value.
kernel: [651076.945635] RSP: 002b:00007fbdf9088df0 EFLAGS: 00010206
kernel: [651076.945637] RAX: 0000000000000000 RBX: 0000000000004e20 RCX: 00005606e775ce7d
kernel: [651076.945637] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007fbdf9088dd0
kernel: [651076.945638] RBP: 00007fbdf9088e48 R08: 0000000000006c50 R09: 00007fbdf9088dc0
kernel: [651076.945638] R10: 0000000000000000 R11: 0000000000000202 R12: 00007fbdf9088dd0
kernel: [651076.945639] R13: 0000000000000000 R14: 00005606e7c6140c R15: 0000000000000000
kernel: [651076.945640] memory: usage 65536kB, limit 65536kB, failcnt 26279526
kernel: [651076.945641] memory+swap: usage 65536kB, limit 9007199254740988kB, failcnt 0
kernel: [651076.945642] kmem: usage 37468kB, limit 9007199254740988kB, failcnt 0
kernel: [651076.945642] Memory cgroup stats for /kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe:
kernel: [651076.945652] anon 25112576
kernel: [651076.945652] file 0
kernel: [651076.945652] kernel_stack 221184
kernel: [651076.945652] slab 41406464
kernel: [651076.945652] sock 0
kernel: [651076.945652] shmem 0
kernel: [651076.945652] file_mapped 2838528
kernel: [651076.945652] file_dirty 0
kernel: [651076.945652] file_writeback 0 
kernel: [651076.945652] anon_thp 0
kernel: [651076.945652] inactive_anon 0
kernel: [651076.945652] active_anon 25411584
kernel: [651076.945652] inactive_file 0
kernel: [651076.945652] active_file 536576
kernel: [651076.945652] unevictable 0
kernel: [651076.945652] slab_reclaimable 16769024
kernel: [651076.945652] slab_unreclaimable 24637440
kernel: [651076.945652] pgfault 7211542
kernel: [651076.945652] pgmajfault 2895749
kernel: [651076.945652] workingset_refault 71200645
kernel: [651076.945652] workingset_activate 5871824
kernel: [651076.945652] workingset_nodereclaim 330
kernel: [651076.945652] pgrefill 39987763
kernel: [651076.945652] pgscan 144468270 
kernel: [651076.945652] pgsteal 71255273 
kernel: [651076.945652] pgactivate 27649178
kernel: [651076.945652] pgdeactivate 33525031
kernel: [651076.945653] Tasks state (memory values in pages):
kernel: [651076.945653] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name   
kernel: [651076.945656] [ 151091]     0 151091      255        1    36864        0          -998 pause  
kernel: [651076.945675] [ 157986]     0 157986       58        4    32768        0           999 dumb-init  
kernel: [651076.945676] [ 158060]     0 158060    13792      869   151552        0           999 su  
kernel: [651076.945678] [ 158061]  1234 158061    18476     6452   192512        0           999 yyyyyy
kernel: [651076.945679] [ 158124]  1234 158124     1161      224    53248        0           999 sh  
kernel: [651076.945681] [ 158125]  1234 158125   348755     2369   233472        0           999 xxxxxxxxxxxx
kernel: [651076.945682] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=a0027a4fe415aa7a6ad54aa3fbf553b9af27c61043d08101931e985efeee0ed7,mems_allowed=0-3,oom_memcg=/kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe,task_memcg=/kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe/a0027a4fe415aa7a6ad54aa3fbf553b9af27c61043d08101931e985efeee0ed7,task=yyyyyy,pid=158061,uid=1234
kernel: [651076.945695] Memory cgroup out of memory: Killed process 158061 (yyyyyy) total-vm:73904kB, anon-rss:17008kB, file-rss:8800kB, shmem-rss:0kB, UID:1234 pgtables:188kB oom_score_adj:999
kernel: [651076.947429] oom_reaper: reaped process 158061 (yyyyyy), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

The OOM message clearly states usage 65536kB, limit 65536kB, but I do not know where the roughly 25MiB of memory that is not accounted for under RSS has gone.

I do see slab_unreclaimable = 24637440 (~24MiB), which is about the amount of memory that appears to be unaccounted for, but I am not sure whether there is any significance to that.
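
Spelling out the arithmetic from the log above: cgroup usage is 65536kB (64MiB), the per-task RSS adds up to 9919 pages * 4KiB ≈ 38.75MiB, leaving about 25MiB unexplained, while slab_unreclaimable is 24637440 bytes ≈ 23.5MiB, so the gap and the unreclaimable slab figure are indeed close.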

I am looking for any suggestions as to where this memory is going. Any input would be welcome.


Accepted answer


"I do see slab_unreclaimable = 24637440 (~24MiB), which is about the amount of memory that appears to be unaccounted for..."

For slab details you can try the slabinfo command, or run cat /proc/slabinfo. That table can point you to where the memory has gone.
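
As a rough illustration of how that table can be digested, the sketch below (an assumed helper written for this example, not an existing tool) estimates each cache's footprint as num_objs * objsize from the version 2.1 column layout of /proc/slabinfo and prints the ten largest caches; the file is normally readable only by root:

package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strconv"
	"strings"
)

type cache struct {
	name  string
	bytes int64
}

func main() {
	f, err := os.Open("/proc/slabinfo")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var caches []cache
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		// Skip the "slabinfo - version: 2.1" banner and the "# name ..." header.
		if strings.HasPrefix(line, "slabinfo") || strings.HasPrefix(line, "#") {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) < 4 {
			continue
		}
		numObjs, _ := strconv.ParseInt(fields[2], 10, 64) // <num_objs>
		objSize, _ := strconv.ParseInt(fields[3], 10, 64) // <objsize>
		caches = append(caches, cache{name: fields[0], bytes: numObjs * objSize})
	}

	// Print the ten caches with the largest estimated footprint.
	sort.Slice(caches, func(i, j int) bool { return caches[i].bytes > caches[j].bytes })
	for i := 0; i < len(caches) && i < 10; i++ {
		fmt.Printf("%-28s %8d KiB\n", caches[i].name, caches[i].bytes/1024)
	}
}

Run on the affected node, a cache whose estimated size tracks the ~24MiB gap would be the natural suspect; the slabtop utility gives a similar live view.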
