
Analysis of a golang process hanging after OOM in docker

Source: 脚本之家

Date: 2022-12-22 17:54:23



Go version: 1.16

Background: the Go process runs inside docker and uses a lot of memory. It was frequently oom-killed even before its usage reached the container's memory limit, so oom-kill was disabled when the container was started (docker's --oom-kill-disable flag). That stopped the frequent kills, but a new problem appeared.

Symptom: once the container's memory is used up, the Go process hangs and gives no response at all (with no spare memory the system cannot allocate new fds, so the process cannot serve). Even a built-in mechanism that restarts the process when memory reaches a threshold (sketched below) never fires; the only option left is to kill the process.
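
The built-in restart was along these lines (a minimal sketch; the real implementation is not shown, and the names, interval, and 90% threshold here are assumptions):

package main

import (
    "os"
    "runtime"
    "time"
)

// memWatchdog exits once the memory taken from the OS nears the limit,
// relying on a supervisor to restart the process.
func memWatchdog(limit uint64) {
    var ms runtime.MemStats
    for range time.Tick(10 * time.Second) {
        runtime.ReadMemStats(&ms) // needs a brief stop-the-world
        if ms.Sys >= limit/10*9 { // within 90% of the limit
            os.Exit(2)
        }
    }
}

func main() {
    go memWatchdog(4 << 30) // 4 GiB, assumed container limit
    select {}               // stand-in for the real server
}

As the rest of the investigation shows, once the container is out of memory this watchdog can no longer run: any page fault parks the thread in the cgroup's OOM wait queue, and ReadMemStats itself needs a stop-the-world, so the restart never triggers.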

pprof showed that a lot of the process's memory should be freeable at GC time, so initially we suspected a problem in the Go process itself.

Before the hang set in, log into the container and run a small Go test program that allocates a small chunk of memory and then sleeps, started with GODEBUG=gctrace=1 to print GC information. The trace showed an STW pause of about 31s around the mark phase (the wall-clock fields 31823+15+0.11 ms correspond to STW sweep termination, concurrent mark and scan, and STW mark termination).
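
A minimal sketch of such a probe (the original program is not shown; the 64 MiB size is an assumption):

package main

import (
    "runtime"
    "time"
)

func main() {
    buf := make([]byte, 64<<20) // allocate a small chunk (64 MiB assumed)
    for i := range buf {
        buf[i] = 1 // touch every page so the memory is actually committed
    }
    runtime.GC() // force a cycle so a gctrace line prints immediately
    time.Sleep(time.Hour)
    runtime.KeepAlive(buf)
}

Run it as GODEBUG=gctrace=1 ./probe; each GC cycle prints one line whose clock fields break down into the three phases above.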

We suspected that a failed memory allocation was not triggering an OOM exit, so we checked the OOM-related logic in the Go runtime:

runtime/mgcwork.go:374 (in getempty):

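// Allocating a fresh span to hold GC mark work buffers; there is no
// error-return path here, so a failed allocation can only throw.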
if s == nil {
   systemstack(func() {
      s = mheap_.allocManual(workbufAlloc/pageSize, spanAllocWorkBuf)
   })
   if s == nil {
      throw("out of memory")
   }
   // Record the new span in the busy list.
   lock(&work.wbufSpans.lock)
   work.wbufSpans.busy.insert(s)
   unlock(&work.wbufSpans.lock)
}

mheap gets its memory from mmap, so the next suspicion was that inside docker mmap returns a zero (success) error code even when no memory can be provided, so the throw is never reached:

func sysMap(v unsafe.Pointer, n uintptr, sysStat *sysMemStat) {
   sysStat.add(int64(n))
   p, err := mmap(v, n, _PROT_READ| _PROT_WRITE, _MAP_ANON| _MAP_FIXED| _MAP_PRIVATE, -1, 0)
   if err == _ENOMEM {
      throw("runtime: out of memory")
   }
   if p != v || err != 0 {
      throw("runtime: cannot map pages in arena address space")
   }
}
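
This suspicion can be probed from Go directly via the syscall package (a sketch; the flags mirror sysMap minus MAP_FIXED, and the 384 KiB size matches the C test below). Note that an anonymous private mapping usually succeeds even under a memcg limit, because the kernel only charges pages to the cgroup when they are first touched:

package main

import (
    "fmt"
    "syscall"
)

func main() {
    buf, err := syscall.Mmap(-1, 0, 384<<10,
        syscall.PROT_READ|syscall.PROT_WRITE,
        syscall.MAP_ANON|syscall.MAP_PRIVATE)
    if err != nil {
        fmt.Println("mmap failed:", err) // ENOMEM would show up here
        return
    }
    fmt.Printf("mmap ok: %d bytes\n", len(buf))
    buf[0] = 1 // the first touch faults the page in and charges the cgroup
    syscall.Munmap(buf)
}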

To cross-check, write a small C program that calls mmap and run it in the same container at the same time:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#define BUF_SIZE 393216
int main(void) {
    char *addr;
    int i;
    /* loop bound lost in extraction; map and touch until mmap fails */
    for (i = 0; ; i++) {
        addr = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (addr == MAP_FAILED) { printf("mmap failed at i=%d\n", i); break; }
        memset(addr, 1, BUF_SIZE); /* touching pages charges the memcg */
    }
    return 0;
}

mmap did not fail, and the C program hung in the same way, so the problem is not in Go's machinery; the process must be blocked in a system call. The kernel call stack (e.g. from /proc/<pid>/stack) shows it hanging inside the cgroup code:

[] mem_cgroup_oom_synchronize+0x275/0x340
[] pagefault_out_of_memory+0x2f/0x74
[] __do_page_fault+0x4bd/0x4f0
[] async_page_fault+0x45/0x50
[] 0xffffffffffffffff

The Go program shows the same stacks: besides threads parked in futex_wait and nanosleep, its threads are stuck in mem_cgroup_oom_synchronize:

[] futex_wait_queue_me+0xc1/0x120
[] futex_wait+0xf6/0x250
[] do_futex+0x2fb/0xb20
[] SyS_futex+0x7a/0x170
[] do_syscall_64+0x68/0x100
[] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[] 0xffffffffffffffff
[] hrtimer_nanosleep+0xce/0x1e0
[] SyS_nanosleep+0x8b/0xa0
[] do_syscall_64+0x68/0x100
[] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[] 0xffffffffffffffff
[] mem_cgroup_oom_synchronize+0x16a/0x340
[] pagefault_out_of_memory+0x2f/0x74
[] __do_page_fault+0x4bd/0x4f0
[] async_page_fault+0x45/0x50
[] 0xffffffffffffffff
[] mem_cgroup_oom_synchronize+0x16a/0x340
[] pagefault_out_of_memory+0x2f/0x74
[] __do_page_fault+0x4bd/0x4f0
[] async_page_fault+0x45/0x50
[] 0xffffffffffffffff
[] mem_cgroup_oom_synchronize+0x16a/0x340
[] pagefault_out_of_memory+0x2f/0x74
[] __do_page_fault+0x4bd/0x4f0
[] async_page_fault+0x45/0x50
[] 0xffffffffffffffff

Reading the cgroup memory controller's code: when no memory is available and oom kill is disabled for the group, the policy is to park the faulting task on a wait queue and wake it from the head of the queue only when memory becomes available again. There is no configuration or other mechanism to bypass this logic.

elixir.bootlin.com/linux/v4.14…

/**
 * mem_cgroup_oom_synchronize - complete memcg OOM handling
 * @handle: actually kill/wait or just clean up the OOM state
 *
 * This has to be called at the end of a page fault if the memcg OOM
 * handler was enabled.
 *
 * Memcg supports userspace OOM handling where failed allocations must
 * sleep on a waitqueue until the userspace task resolves the
 * situation.  Sleeping directly in the charge context with all kinds
 * of locks held is not a good idea, instead we remember an OOM state
 * in the task and mem_cgroup_oom_synchronize() has to be called at
 * the end of the page fault to complete the OOM handling.
 *
 * Returns %true if an ongoing memcg OOM situation was detected and
 * completed, %false otherwise.
 */
bool mem_cgroup_oom_synchronize(bool handle)
{
        struct mem_cgroup *memcg = current->memcg_in_oom;
        struct oom_wait_info owait;
        bool locked;
        /* OOM is global, do not handle */
        if (!memcg)
                return false;
        if (!handle)
                goto cleanup;
        owait.memcg = memcg;
        owait.wait.flags = 0;
        owait.wait.func = memcg_oom_wake_function;
        owait.wait.private = current;
        INIT_LIST_HEAD(&owait.wait.entry);
        prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
        mem_cgroup_mark_under_oom(memcg);
        locked = mem_cgroup_oom_trylock(memcg);
        if (locked)
                mem_cgroup_oom_notify(memcg);
        if (locked && !memcg->oom_kill_disable) {
                mem_cgroup_unmark_under_oom(memcg);
                finish_wait(&memcg_oom_waitq, &owait.wait);
                mem_cgroup_out_of_memory(memcg, current->memcg_oom_gfp_mask,
                                         current->memcg_oom_order);
        } else {
                schedule();
                mem_cgroup_unmark_under_oom(memcg);
                finish_wait(&memcg_oom_waitq, &owait.wait);
        }
        if (locked) {
                mem_cgroup_oom_unlock(memcg);
                /*
                 * There is no guarantee that an OOM-lock contender
                 * sees the wakeups triggered by the OOM kill
                 * uncharges.  Wake any sleepers explicitly.
                 */
                memcg_oom_recover(memcg);
        }
cleanup:
        current->memcg_in_oom = NULL;
        css_put(&memcg->css);
        return true;
}
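
This state is visible from inside the container: mem_cgroup_mark_under_oom() above is what raises the under_oom flag exposed through cgroup v1's memory.oom_control file. A sketch for checking it (the path assumes the standard cgroup v1 mount docker used at the time):

package main

import (
    "fmt"
    "os"
)

func main() {
    // Reports oom_kill_disable and under_oom for the container's memory cgroup.
    data, err := os.ReadFile("/sys/fs/cgroup/memory/memory.oom_control")
    if err != nil {
        fmt.Println("read failed:", err)
        return
    }
    fmt.Print(string(data)) // e.g. "oom_kill_disable 1\nunder_oom 1\n"
}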

Conclusion:

After the container's memory is exhausted, the Go GC's mark phase needs to allocate new work buffers to record marked objects, which means mmap'ing and touching new pages. Since no memory is available, the page fault parks the thread in the cgroup's OOM wait queue. GC can never finish, so no memory is ever freed, and the Go program sits in its stop-the-world phase forever: it cannot serve, and it cannot recover even after load drops. It is best not to disable docker's oom-kill.

