
Segmentation faults when using Queue (non full push path) with netmap pool #83

Open
piotrjurkiewicz opened this issue May 8, 2018 · 3 comments


@piotrjurkiewicz
Contributor

piotrjurkiewicz commented May 8, 2018

I emulate a virtual network by connecting several Click processes with netmap native (patched) veth pairs and From/ToNetmapDevice.

Now I want to rate limit ToNetmapDevice interfaces.

My first attempt was to use BandwidthRatedSplitter in order to maintain a full push path. However, TCP rate control algorithms go crazy with it. When I limit an interface to 1 Gbps, the rate of a TCP flow oscillates between 0 and 1.5 Gbps, and iperf reports an average of 200 Mbps.

So I decided to use Queue -> BandwidthRatedUnqueue. This introduced a problem of empty runs and failed pulls, because Click was repeatedly scheduling run_task. Using QuickNoteQueue instead of Queue reduced this problem to an acceptable level. More importantly, the TCP rate was smooth and equal to the limit, the same as if I had limited the interface bandwidth with the tc command.
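For reference, the two shaping approaches described above can be sketched as Click configuration fragments. This is only a sketch: the interface names (veth0/veth1), rate, and the Discard overflow port are hypothetical placeholders, not the exact configuration used.

```
// Attempt 1: full push path through a rate-limited splitter.
// TCP behaved erratically with this (rate oscillating between 0 and 1.5 Gbps).
FromNetmapDevice(netmap:veth0) -> shaper :: BandwidthRatedSplitter(1Gbps);
shaper[0] -> ToNetmapDevice(netmap:veth1);
shaper[1] -> Discard;

// Attempt 2: pull-based shaping. Smooth TCP rates,
// but segfaults appear in non-linear topologies.
FromNetmapDevice(netmap:veth0)
    -> QuickNoteQueue
    -> BandwidthRatedUnqueue(1Gbps)
    -> ToNetmapDevice(netmap:veth1);
```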

This worked fine as long as all Click processes were connected serially. When I emulated any non-linear network, I started to observe segfaults. They happen on packet allocation or freeing. Here are some examples:

Program terminated with signal SIGSEGV, Segmentation fault.
#0  WritablePacket::pool_data_allocate () at ../lib/packet.cc:397
397     ../lib/packet.cc: No such file or directory.
(gdb) bt
#0  WritablePacket::pool_data_allocate () at ../lib/packet.cc:397
#1  WritablePacket::pool_allocate (tailroom=0, length=1514, headroom=0) at ../lib/packet.cc:418
#2  Packet::make (headroom=0, data=data@entry=0x0, length=1514, tailroom=tailroom@entry=0) at ../lib/packet.cc:719
#3  0x000056093ab363d0 in KernelTun::one_selected (p=@0x7ffebad4ace0: 0x0, now=<synthetic pointer>..., this=0x56093c363160) at ../elements/userlevel/kerneltun.cc:522
#4  KernelTun::selected (this=0x56093c363160, fd=<optimized out>) at ../elements/userlevel/kerneltun.cc:494
#5  0x000056093abb7022 in SelectSet::call_selected (mask=<optimized out>, fd=10, this=0x56093c34f8e8) at ../lib/selectset.cc:367
#6  SelectSet::run_selects_poll (this=this@entry=0x56093c34f8e8, thread=thread@entry=0x56093c34f870) at ../lib/selectset.cc:474
#7  0x000056093abb7221 in SelectSet::run_selects (this=this@entry=0x56093c34f8e8, thread=thread@entry=0x56093c34f870) at ../lib/selectset.cc:581
#8  0x000056093aba7309 in RouterThread::run_os (this=0x56093c34f870) at ../lib/routerthread.cc:476
#9  RouterThread::driver (this=0x56093c34f870) at ../lib/routerthread.cc:645
#10 0x000056093aa4a0ed in main (argc=<optimized out>, argv=<optimized out>) at click.cc:803

Program terminated with signal SIGSEGV, Segmentation fault.
#0  __GI___libc_free (mem=0x663a61633a38313a) at malloc.c:3103
3103    malloc.c: No such file or directory.
(gdb) bt
#0  __GI___libc_free (mem=0x663a61633a38313a) at malloc.c:3103
#1  0x000056539d7e17ee in WritablePacket::~WritablePacket (this=0x56539eac5dac, __in_chrg=<optimized out>) at ../include/click/packet.hh:937
#2  WritablePacket::recycle (p=0x56539eac5dac) at ../lib/packet.cc:546
#3  0x000056539d7b6575 in Packet::kill (this=<optimized out>) at ../include/click/packet.hh:1671
#4  KernelTun::one_selected (p=@0x7ffcb59a68f0: 0x56539eac5dac, now=<synthetic pointer>..., this=0x56539eaea2d0) at ../elements/userlevel/kerneltun.cc:560
#5  KernelTun::selected (this=0x56539eaea2d0, fd=<optimized out>) at ../elements/userlevel/kerneltun.cc:494
#6  0x000056539d837022 in SelectSet::call_selected (mask=<optimized out>, fd=13, this=0x56539eab08e8) at ../lib/selectset.cc:367
#7  SelectSet::run_selects_poll (this=this@entry=0x56539eab08e8, thread=thread@entry=0x56539eab0870) at ../lib/selectset.cc:474
#8  0x000056539d837221 in SelectSet::run_selects (this=this@entry=0x56539eab08e8, thread=thread@entry=0x56539eab0870) at ../lib/selectset.cc:581
#9  0x000056539d827309 in RouterThread::run_os (this=0x56539eab0870) at ../lib/routerthread.cc:476
#10 RouterThread::driver (this=0x56539eab0870) at ../lib/routerthread.cc:645
#11 0x000056539d6ca0ed in main (argc=<optimized out>, argv=<optimized out>) at click.cc:803

Program terminated with signal SIGSEGV, Segmentation fault.
#0  __memmove_sse2_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:332
332     ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: No such file or directory.
(gdb) bt
#0  __memmove_sse2_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:332
#1  0x000055f2562c7ff5 in _send_packet (cur=<synthetic pointer>: <optimized out>, slot=<optimized out>, txring=0x7f469628f000, p=<optimized out>) at ../elements/userlevel/tonetmapdevice.cc:373
#2  ToNetmapDevice::send_packets (txsync_on_empty=true, ask_sync=true, head=@0x55f258113748: 0x7f470000001f, this=0x55f258113120) at ../elements/userlevel/tonetmapdevice.cc:429
#3  ToNetmapDevice::push_batch (this=<optimized out>, port=0, b_head=0x7f470000001f) at ../elements/userlevel/tonetmapdevice.cc:207
#4  0x000055f2562518f6 in Element::Port::push_batch (batch=<optimized out>, this=<optimized out>) at ../include/click/element.hh:781
#5  BandwidthRatedUnqueue::run_task (this=0x55f258113c70) at ../elements/standard/bwratedunqueue.cc:48
#6  0x000055f2563253a9 in Task::fire (this=0x55f258113da0) at ../include/click/task.hh:584
#7  RouterThread::run_tasks (ntasks=124, this=0x55f2580cc870) at ../lib/routerthread.cc:392
#8  RouterThread::driver (this=0x55f2580cc870) at ../lib/routerthread.cc:613
#9  0x000055f2561c80ed in main (argc=<optimized out>, argv=<optimized out>) at click.cc:803

Most of them happen in KernelTun, but they are not specific to this element. When I replace it with From/ToDevice, the same happens inside that. These are the elements that do most of the packet allocation/deallocation (since From/ToNetmapDevice only forward packets). I use them for connecting with routing daemons.

With Pipeliner instead of Queue -> BandwidthRatedUnqueue, segfaults do not happen, but of course I cannot rate limit then.

All the above applies to single-thread runs (click -j 1).

With multiple threads, these same segfaults happen. In addition, Click crashes even in a linear network after forwarding some number of packets, with the messages "No more netmap buffers" and "netmap_ring_reinit". This also happens with Pipeliner. So the only configurations that work with multiple threads are full push paths, without Queue or Pipeliner.

So there are at least two problems:

  • segfaults when Queue is used and processes are connected non-serially (happens with both single and multiple threads)
  • "No more netmap buffers" when Queue or Pipeliner is used, no matter how the routers are connected (happens only with multiple threads)

Full push paths work fine, with both single and multiple threads and any network topology.

So are these bugs? Or are Queue elements not meant to be used in netmap pool mode? If so, how can rate limiting be achieved? Should it be implemented in Pipeliner? But that would still only work with a single thread.

@piotrjurkiewicz piotrjurkiewicz changed the title Segmentation faults when using Queue (non full-push path) with netmap pool Segmentation faults when using Queue (non full push path) with netmap pool May 8, 2018
@tbarbette
Owner

Following the full-push idea, you would have to implement a push-mode BandwidthRatedUnqueue, maybe with internals similar to Pipeliner. However, pull should still work.

All of those problems probably happen because of a leak. For "No more netmap buffers", using NetmapInfo(EXTRA_BUFFER 65536) will confirm it: if the problems all disappear, then buffers are being lost somewhere.
Could you look at the memory usage of your Click processes? Does it exhaust the RAM?
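A minimal sketch of the suggested diagnostic, assuming NetmapInfo accepts the EXTRA_BUFFER keyword exactly as written in the comment above (the buffer count is the value from that comment, not a tuned recommendation):

```
// Reserve extra netmap buffers. If "No more netmap buffers" disappears
// with this in place, buffers are being leaked somewhere in the path.
NetmapInfo(EXTRA_BUFFER 65536);
```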

Also, if you use only the Queue but not rate limiters, does it work?

@piotrjurkiewicz
Contributor Author

Also, if you use only the Queue but not rate limiters, does it work?

Segfaults also happen without rate limiters (i.e. when using Unqueue after Queue).

I also noticed that Pipeliner results in segfaults too. Because direct traversal is on by default, in single-thread scenarios it wasn't really storing packets. With Pipeliner(DIRECT_TRAVERSAL false), segfaults happen just like with Queue. So it seems that any kind of packet storage results in segfaults in non-serially connected networks.

As for multithreaded scenarios and NetmapInfo(EXTRA_BUFFER 65536), sometimes it does help. But I still get:

[ 1008.189955] 314.688741 [1715] nm_rxsync_prologue        eth1_fp RX0: fail 'cur < head || cur > kring->nr_hwtail' h 955 c 944 t 955 rh 955 rc 944 rt 955 hc 944 ht 958
[ 1008.192727] 314.691516 [1758] netmap_ring_reinit        called for eth1_fp RX0
[ 1008.231922] 314.730710 [1715] nm_rxsync_prologue        eth1_fp RX0: fail 'cur < head || cur > kring->nr_hwtail' h 767 c 753 t 767 rh 767 rc 753 rt 767 hc 753 ht 769
[ 1008.234532] 314.733323 [1758] netmap_ring_reinit        called for eth1_fp RX0
[ 1008.343992] 314.842785 [1715] nm_rxsync_prologue        eth1_fp RX0: fail 'cur < head || cur > kring->nr_hwtail' h 305 c 293 t 305 rh 305 rc 293 rt 305 hc 293 ht 308

@tbarbette tbarbette self-assigned this Dec 3, 2018
@tbarbette tbarbette added the bug label Jan 28, 2020
@tbarbette tbarbette removed their assignment Aug 22, 2024
@tbarbette
Owner

I'm not sure who's using netmap anymore, so I'm sadly applying wontfix to this...
