A few days ago a friend reached out to me through my WeChat public account. Their WinForm program is deployed on 20-plus machines, and on exactly two of them it keeps crashing; he had dug at it for a long time without finding the cause and asked me to take a look. I do enjoy working with people who have some debugging background — I didn't even need to walk him through capturing a dump. So let's load it into windbg and start the analysis.
Finding the surface symptom of a crash is straightforward: windbg's !analyze -v command.
0:000> !analyze -v
...
EXCEPTION_RECORD:  (.exr -1)
ExceptionAddress: 0000000000000000
   ExceptionCode: 80000003 (Break instruction exception)
  ExceptionFlags: 00000000
NumberParameters: 0
...
STACK_TEXT:
0000003f`76f7ed58 00007ffa`f7c66d88 : 0000003f`00006120 00007ffa`f7bf98da 00000000`00000000 0000e4f5`bb3ba231 : user32!NtUserWaitMessage+0xa
0000003f`76f7ed60 00007ffa`f7bf9517 : 0000003f`00006120 0000003f`76f7ee80 00000000`00000000 00000000`00000000 : System_Windows_Forms_ni+0x2b6d88
0000003f`76f7ee10 00007ffa`f7bf8c2c : 0000003f`0006ec30 0000003f`00000001 0000003f`000c88c0 00000000`00000000 : System_Windows_Forms_ni+0x249517
0000003f`76f7ef10 00007ffa`f7bf8a25 : 0000003f`00006120 00000000`ffffffff 0000003f`00054848 0000003f`76f7f300 : System_Windows_Forms_ni+0x248c2c
0000003f`76f7efa0 00007ffa`9b4a0a08 : 0000003f`00007970 00000000`ffffffff 0000003f`000c88c0 0000003f`770bda90 : System_Windows_Forms_ni+0x248a25
0000003f`76f7f000 00007ffa`fab13753 : 00000000`00000001 0000003f`76f7f530 00007ffa`fac6710d 00000000`00000001 : 0x00007ffa`9b4a0a08
0000003f`76f7f040 00007ffa`fab1361c : 0000003f`00003330 00007ffa`f9acd94c 00000000`20000001 0000003f`00000000 : clr!CallDescrWorkerInternal+0x83
0000003f`76f7f080 00007ffa`fab144d3 : 00000000`00000000 00000000`00000004 0000003f`76f7f300 0000003f`76f7f3b8 : clr!CallDescrWorkerWithHandler+0x4e
0000003f`76f7f0c0 00007ffa`fac6f75a : 0000003f`76f7f200 00000000`00000000 00000000`00000000 00000000`00000000 : clr!MethodDescCallSite::CallTargetWorker+0x2af
0000003f`76f7f250 00007ffa`fac6f596 : 00000000`00000000 00000000`00000001 0000003f`00000000 00000000`00000000 : clr!RunMain+0x1ba
0000003f`76f7f430 00007ffa`fac6f4d4 : 0000003f`770bda90 0000003f`000015b0 0000003f`770bda90 0000003f`77093490 : clr!Assembly::ExecuteMainMethod+0xba
0000003f`76f7f720 00007ffa`fac6ea02 : 0000003f`76f7fd88 0000003f`76de0000 00000000`00000000 00000000`00000000 : clr!SystemDomain::ExecuteMainMethod+0x6b9
0000003f`76f7fd60 00007ffa`fac6e9b2 : 0000003f`76de0000 0000003f`76f7fee0 00000000`00000000 00007ffb`03c420e8 : clr!ExecuteEXE+0x43
0000003f`76f7fdd0 00007ffa`fac6e8f4 : ffffffff`ffffffff 00000000`00000000 00000000`00000000 00000000`00000000 : clr!_CorExeMainInternal+0xb2
0000003f`76f7fe60 00007ffb`03be6cf5 : 00000000`00000000 00000000`00000091 00000000`00000000 0000003f`76f7fe48 : clr!CorExeMain+0x14
0000003f`76f7fea0 00007ffb`03c8ea5b : 00000000`00000000 00007ffa`fac6e8e0 00000000`00000000 00000000`00000000 : mscoreei!CorExeMain+0xe0
0000003f`76f7fef0 00007ffb`0dc716ad : 00007ffb`03be0000 00000000`00000000 00000000`00000000 00000000`00000000 : mscoree!_CorExeMain_Exported+0xcb
0000003f`76f7ff20 00007ffb`0f924629 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : kernel32!BaseThreadInitThunk+0xd
0000003f`76f7ff50 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : ntdll!RtlUserThreadStart+0x1d

STACK_COMMAND:  ~0s; .ecxr ; kb
...
Looking at the output I drew a sharp breath: this dump recorded no crash information at all. Some readers will ask: isn't that int 3 (ExceptionCode 80000003) the crash? In short, no. It is a soft trap: when a dump is captured the process gets frozen, and that freeze is the int 3. So seeing this exception in a dump is, 99% of the time, perfectly normal.
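For context, here is a minimal C++ sketch of how dump tools typically produce a full-memory dump via dbghelp's MiniDumpWriteDump (the API and the MiniDumpWithFullMemory flag are real dbghelp; the wrapper name and error handling are just illustrative). The target is effectively frozen while the dump is being written, which is the freeze mentioned above:

#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

// Write a full-memory dump ("-ma" style) of an already-opened process.
bool WriteFullDump(HANDLE hProcess, DWORD pid, const wchar_t* path)
{
    HANDLE hFile = CreateFileW(path, GENERIC_WRITE, 0, nullptr,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (hFile == INVALID_HANDLE_VALUE)
        return false;

    BOOL ok = MiniDumpWriteDump(hProcess, pid, hFile,
                                MiniDumpWithFullMemory,  // include the whole address space
                                nullptr, nullptr, nullptr);
    CloseHandle(hFile);
    return ok != FALSE;
}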
My usual move here would be to recommend procdump and have my friend capture a proper crash dump. But before re-capturing, it's worth mining this dump for other clues — for instance, using !t to check whether any managed thread has an exception hanging off it.
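For reference, a procdump invocation along these lines (the app name and dump path are placeholders) waits for the process to start and writes a full dump the moment an unhandled exception is raised:

procdump -ma -e -w YourApp.exe c:\dumps\crash.dmp

Also note that !t is an sos command; if the extension is not loaded yet, .loadby sos clr loads it on .NET Framework.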
0:000> !t
ThreadCount:      76
UnstartedThread:  0
BackgroundThread: 69
PendingThread:    0
DeadThread:       6
Hosted Runtime:   no
                                                                         Lock
       ID OSID  ThreadOBJ           State    GC Mode     GC Alloc Context                  Domain           Count Apt Exception
   0    1 26c4  0000003f770bda90      26020 Preemptive  0000000000000000:0000000000000000 0000003f77093490 0     STA
   ...
  74   77 c544  0000003f1a08c470      21220 Preemptive  0000000000000000:0000000000000000 0000003f77093490 0     Ukn System.ExecutionEngineException 0000003f000011f8
  75   78 18a88 0000003f1a329ae0    8029220 Preemptive  0000000000000000:0000000000000000 0000003f77093490 0     MTA (Threadpool Completion Port)
The output shows one thread has thrown a System.ExecutionEngineException. That is a catastrophic exception: it means the CLR crashed while executing its own code. Surprised, I immediately switched to that thread (~74s) to see why its stack blew up.
0:074> k
 # Child-SP          RetAddr           Call Site
00 0000003f`1bafea90 00007ffa`fb0283aa clr!WKS::gc_heap::background_mark_simple+0x36
01 0000003f`1bafeac0 00007ffa`fb028701 clr!WKS::gc_heap::revisit_written_page+0x2fe
02 0000003f`1bafeb50 00007ffa`fb01ffec clr!WKS::gc_heap::revisit_written_pages+0x251
03 0000003f`1bafec10 00007ffa`facefd01 clr!WKS::gc_heap::background_mark_phase+0x298
04 0000003f`1bafeca0 00007ffa`fb021fe5 clr!WKS::gc_heap::gc1+0xc0
05 0000003f`1bafed10 00007ffa`fab33e1e clr!WKS::gc_heap::bgc_thread_function+0x169
06 0000003f`1bafed50 00007ffb`0dc716ad clr!Thread::intermediateThreadProc+0x7d
07 0000003f`1baff810 00007ffb`0f924629 kernel32!BaseThreadInitThunk+0xd
08 0000003f`1baff840 00000000`00000000 ntdll!RtlUserThreadStart+0x1d
0:074> r
rax=000000001f808000 rbx=0000003f1bafe870 rcx=0000003efac80140
rdx=0000003f01000000 rsi=0000000000000000 rdi=0000003f1bafe380
rip=00007ffafb020c06 rsp=0000003f1bafea90 rbp=0000003f01c63270
 r8=0000000000000000  r9=0000003f01c64000 r10=0000003f04271000
r11=0000000000000001 r12=00007ffa9bca83c0 r13=0000003f01c632a8
r14=ffffffffffffffff r15=0000003f01c63000
iopl=0         nv up ei pl zr na po nc
cs=0033  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010244
clr!WKS::gc_heap::background_mark_simple+0x36:
00007ffa`fb020c06 41f70000000080  test    dword ptr [r8],80000000h ds:00000000`00000000=????????
So this is a bgc (background GC) thread, and it stepped on address 0 while marking objects in the background. Experience immediately raises the question: is the managed heap corrupted at this point? !verifyheap can check.
0:000> !verifyheap
No heap corruption detected.
According to this, the managed heap is not corrupted. But as someone who has been burned by sos output more than once, I don't take that at face value, so I want to find out what object r8 was actually pointing at. Next, disassemble backwards from the faulting instruction in background_mark_simple.
0:074> ub 00007ffa`fb020c06
clr!WKS::gc_heap::background_mark_simple+0x1a:
00007ffa`fb020bea 0941d3          or      dword ptr [rcx-2Dh],eax
00007ffa`fb020bed e048            loopne  clr!WKS::gc_heap::background_mark_simple+0x67 (00007ffa`fb020c37)
00007ffa`fb020bef 8b0dd3253c00    mov     ecx,dword ptr [clr!WKS::gc_heap::mark_array (00007ffa`fb3e31c8)]
00007ffa`fb020bf5 44850481        test    dword ptr [rcx+rax*4],r8d
00007ffa`fb020bf9 7548            jne     clr!WKS::gc_heap::background_mark_simple+0x73 (00007ffa`fb020c43)
00007ffa`fb020bfb 44090481        or      dword ptr [rcx+rax*4],r8d
00007ffa`fb020bff 4c8b02          mov     r8,qword ptr [rdx]
00007ffa`fb020c02 4983e0fe        and     r8,0FFFFFFFFFFFFFFFEh
0:074> r rdx
rdx=0000003f01000000
0:074> !lno rdx
Before:  0000003f00ffff38   512 (0x200)  xxx.xxx
After:   0000003f01000138    32 (0x20)   System.String
Heap local consistency confirmed.
0:074> ? 0000003f01000000 - 0000003f00ffff38
Evaluate expression: 200 = 00000000`000000c8
0:074> !do 0000003f00ffff38
Name:        xxx.xxx
MethodTable: 00007ffa9c0ac278
EEClass:     00007ffa9c095b20
Size:        512(0x200) bytes
Fields:
              MT    Field   Offset                 Type VT     Attr            Value Name
...
00007ffaf9d1da88  40012e6       c8        System.String  0   instance 0000000000000000 <OPPORTUNITY>k__BackingField
...
Piecing the above together: rdx (0000003f01000000) sits 0xc8 bytes into the object at 0000003f00ffff38, which is exactly the offset of its <OPPORTUNITY>k__BackingField string field, and that field's value is null. So what bgc was marking was the <OPPORTUNITY>k__BackingField slot: mov r8,[rdx] loaded the null field value as if it were a MethodTable pointer, and test dword ptr [r8],80000000h then dereferenced address 0. This also confirms the managed heap really isn't corrupted. The next question: why did BGC end up treating this field address as an object to mark at all?
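To make the disassembly easier to follow, here is a rough C++ rendering of what those instructions do. This is my reconstruction from the assembly plus coreclr's gc.cpp, not the literal source; the index/mask derivation in particular is simplified:

#include <cstdint>
#include <cstddef>

extern uint32_t* mark_array;   // bgc's global mark bitmap (rcx in the disassembly)

// o is whatever address bgc was handed as an "object" (rdx in the disassembly).
bool background_mark_simple_sketch(uint8_t* o)
{
    size_t   idx  = (size_t)o >> 5;          // word index into mark_array (rax, simplified)
    uint32_t mask = 1u << ((size_t)o & 31);  // bit within that word (r8d, simplified)

    if (mark_array[idx] & mask)              // test dword ptr [rcx+rax*4],r8d
        return false;                        // already marked -> the jne path
    mark_array[idx] |= mask;                 // or   dword ptr [rcx+rax*4],r8d

    size_t mt = *(size_t*)o;                 // mov r8,qword ptr [rdx]  (should be the MethodTable)
    mt &= ~(size_t)1;                        // and r8,0FFFFFFFFFFFFFFFEh (strip the mark bit)

    // test dword ptr [r8],80000000h: inspect the MethodTable's flag dword. In this
    // dump *o was the null <OPPORTUNITY>k__BackingField value, so mt == 0 and the
    // read faults at address 0.
    return (*(uint32_t*)mt & 0x80000000u) != 0;
}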
With no obvious breakthrough, the only option is to keep digging along the thread stack. Readers familiar with bgc's background marking know it runs in three phases — roughly: an initial concurrent marking pass, a concurrent revisit of pages that were written while that marking was going on, and a final catch-up mark performed with the runtime suspended.
Here is a slide from my .NET Advanced Debugging Training Camp PPT illustrating the three phases.
So which phase is this program in? Given the revisit_written_pages frame on the stack, clearly the second. In this phase, to detect objects modified while marking runs concurrently with user code, the CLR uses the Win32 GetWriteWatch function to monitor memory pages; the dirty pages it reports get a final re-scan in the third phase.
Is there source code to back all this up? A quick look at the coreclr source is enough.
void gc_heap::revisit_written_pages(BOOL concurrent_p, BOOL reset_only_p)
{
    get_write_watch_for_gc_heap(reset_watch_state, base_address, region_size,
                                (void**)background_written_addresses,
                                &bcount, is_runtime_suspended);
}

// static
void gc_heap::get_write_watch_for_gc_heap(bool reset, void* base_address, size_t region_size,
                                          void** dirty_pages, uintptr_t* dirty_page_count_ref,
                                          bool is_runtime_suspended)
{
    bool success = GCToOSInterface::GetWriteWatch(reset, base_address, region_size,
                                                  dirty_pages, dirty_page_count_ref);
}

bool GCToOSInterface::GetWriteWatch(bool resetState, void* address, size_t size,
                                    void** pageAddresses, uintptr_t* pageAddressesCount)
{
    uint32_t flags = resetState ? 1 : 0;
    ULONG granularity;

    bool success = ::GetWriteWatch(flags, address, size, pageAddresses,
                                   (ULONG_PTR*)pageAddressesCount, &granularity) == 0;
    if (success)
    {
        assert(granularity == OS_PAGE_SIZE);
    }

    return success;
}
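The coreclr snippet bottoms out in the Win32 GetWriteWatch API. To see that OS facility in isolation, here is a small standalone Win32 demo, independent of the CLR: it commits a write-watched region, dirties two pages, then asks the OS which pages were written.

#include <windows.h>
#include <cstdio>

int main()
{
    const SIZE_T size = 16 * 4096;

    // The region must be allocated with MEM_WRITE_WATCH for the OS to track
    // writes -- this is what the GC relies on for its heap segments.
    BYTE* base = (BYTE*)VirtualAlloc(nullptr, size,
                                     MEM_RESERVE | MEM_COMMIT | MEM_WRITE_WATCH,
                                     PAGE_READWRITE);
    if (!base) return 1;

    base[0] = 1;            // dirty page 0
    base[5 * 4096] = 1;     // dirty page 5

    PVOID pages[16];
    ULONG_PTR count = 16;   // in: array capacity, out: number of dirty pages
    DWORD granularity = 0;

    // WRITE_WATCH_FLAG_RESET also clears the watch state, mirroring the
    // reset_watch_state argument seen in get_write_watch_for_gc_heap above.
    if (GetWriteWatch(WRITE_WATCH_FLAG_RESET, base, size,
                      pages, &count, &granularity) == 0)
    {
        printf("%Iu dirty pages (granularity=%lu bytes)\n", count, granularity);
        for (ULONG_PTR i = 0; i < count; i++)
            printf("  %p\n", pages[i]);
    }

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}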
Showing all this code makes one point: bgc's concurrent marking leans on a facility provided by Windows itself. Combine that with my friend's observation that only two of the twenty-odd machines ever crash, and roughly two directions suggest themselves: take this OS facility out of the equation by disabling concurrent (background) GC, or look for what is different about those two machines compared with the healthy ones — OS version and patch level, and the hardware, memory in particular.
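For the first direction, on .NET Framework concurrent/background GC can be switched off declaratively. A minimal app.config using the documented gcConcurrent runtime setting (nothing here is specific to this app):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- Disables concurrent/background GC, so the GetWriteWatch-based
         revisit_written_pages path is never exercised. -->
    <gcConcurrent enabled="false" />
  </runtime>
</configuration>

The trade-off is longer GC pauses on foreground threads, so treat it as a mitigation to verify on the two problem machines rather than a blanket fix.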
Honestly, of all the dumps I have analyzed, this one ranks among the harder ones: it tests how well you understand what the bgc thread does under the hood. Luckily, in my debugging training camp I had already walked everyone through the three phases of background marking, live under windbg — what a stroke of luck!
Original article: 记一次 .NET 某半导体CIM系统崩溃分析 (A crash analysis of a .NET semiconductor CIM system), http://www.28at.com/showinfo-26-79460-0.html