Topic: Maximum IPsec speed  (Read 4943 times)
« on: June 12, 2012, 12:01:48 »
aqualityplace *
Posts: 10

I am just trying to get an idea of the maximum IPsec speed people are seeing across their VPN tunnels.

We are using 2 VMware appliances. CPU usage is less than 50%, but the maximum speed we seem to get on a 100 Mbit connection is 25 Mbit/s. Usually this is fine, but sometimes we need to sync a lot of data across the link and I can't help but feel we should be able to get a bit more out of these.

I appreciate the bottleneck could be elsewhere, but I have seen a lot of posts with people getting the same speeds, and I just wanted to see if it is actually possible to get 50 or more Mbps out of these before I start looking for issues which are not there.

What's your maximum transfer speed across your IPsec site-to-site VPN tunnels?

« Reply #1 on: June 12, 2012, 16:20:46 »
iridris ***
Posts: 145

The throughput of m0n0wall (whether IPsec or not) is very reliant on the hardware powering it. What kind of hardware is the VMware host running, and how heavily is that host loaded? Is the 50% CPU usage being reported by the m0n0wall, or by VMware?
« Reply #2 on: June 12, 2012, 16:50:18 »
aqualityplace *
Posts: 10

CPU usage is being reported by both VMware and m0n0wall, and both show less than 50%. Memory usage is at 5%.

I have spoken to someone who said latency could cause the problem. I get 6 ms ping responses across the tunnel.
« Reply #3 on: June 12, 2012, 17:27:09 »
Manuel Kasper
Administrator
*****
Posts: 364

What protocol do you use to transfer the files? SMB (especially v1) is very latency sensitive. If you haven't already, run a raw TCP throughput benchmark (e.g. using iperf with a window size of 256k).
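
For example, with iperf2, something along these lines should give a raw TCP baseline (the hostname and the 30-second duration are placeholders for your own setup):

    # on a host behind the remote end of the tunnel
    iperf -s -w 256k

    # on a host behind the local end; -t 30 runs the test for 30 seconds
    iperf -c remote-host -w 256k -t 30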

Fragmentation could be another issue - if packets need to be fragmented to go through the VPN tunnel, this will hurt performance. The best way to check for that is using a packet sniffer (e.g. Wireshark) on the WAN link.
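
For instance, a capture filter along these lines (tcpdump syntax; the interface name is a placeholder for your WAN interface) will show only fragments:

    # bytes 6-7 of the IPv4 header hold the flags and fragment offset,
    # so this matches any packet with the MF bit set or a non-zero offset
    tcpdump -ni em0 'ip[6:2] & 0x3fff != 0'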
« Reply #4 on: June 12, 2012, 17:31:29 »
aqualityplace *
Posts: 10

We are using it to replicate a storage array. I'm not certain of the protocol it uses, but it is designed to copy across sites, so I hope it was built with latency in mind.

I will try to run the suggested tests tonight.
« Reply #5 on: June 12, 2012, 19:28:02 »
aqualityplace *
Posts: 10

I haven't had a chance to test with iperf yet, but using a ping test I have found that the highest packet size I can use before the packet needs to be fragmented is 1472.
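
For reference, the test was along these lines (Windows ping syntax; the address is a placeholder for a host on the far side of the tunnel; on Linux the equivalent is "ping -M do -s 1472 <host>"):

    ping -f -l 1472 192.168.2.10

Here -f sets the don't-fragment bit and -l sets the ICMP payload size, so 1472 bytes of payload plus 28 bytes of ICMP and IP headers makes a 1500-byte packet on the wire.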

I can reduce the MTU size on the SAN appliances. Will I need to make any changes to the m0n0wall configuration?
« Reply #6 on: June 12, 2012, 19:34:16 »
aqualityplace *
Posts: 10

Having said that, the minimum the SAN appliance will allow me to set is 1500.

Do you think it is the MTU slowing things down?
« Reply #7 on: June 12, 2012, 19:57:42 »
iridris ***
Posts: 145

Quote from: aqualityplace on June 12, 2012, 19:34:16
Having said that, the minimum the SAN appliance will allow me to set is 1500.

Do you think it is the MTU slowing things down?

It very likely is the source of the slowdown. As Manuel stated, fragmentation can cause performance issues.
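
To put rough numbers on it (ballpark figures, assuming ESP in tunnel mode with AES-CBC and HMAC-SHA1; other ciphers will differ slightly):

    1500  original packet
  +   20  outer IP header added by tunnel mode
  +    8  ESP header (SPI + sequence number)
  + ~ 18  IV plus padding and pad-length/next-header bytes
  +   12  ESP authentication data (HMAC-SHA1-96)
  ------
   ~1558  total, which exceeds a 1500-byte WAN MTU, so every full-size packet is split in two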
« Reply #8 on: June 12, 2012, 20:01:22 »
aqualityplace *
Posts: 10

Sorry, networking isn't my strong point. Are we suggesting that any traffic being sent across the tunnel should use an MTU of 1472 or less?

Is there anything I can do on the m0n0wall appliances to help improve things?
« Reply #9 on: June 12, 2012, 22:09:31 »
iridris ***
Posts: 145

On the 'Advanced' page, under the 'Firewall' heading, try checking the box to "Allow fragmented IPsec traffic". Perhaps that will help. In addition, try checking the box to "Allow fragmented packets" on any IPsec firewall rules you may have.
« Reply #10 on: June 13, 2012, 17:29:45 »
aqualityplace *
Posts: 10

I made the change, but the SAN replication still maxes out at 20 Mbps. It isn't the end of the world; it's just that sometimes we need to sync about 300-400 GB, which takes some time.
« Reply #11 on: July 04, 2012, 18:14:47 »
astronot *
Posts: 5

I'm experiencing an issue similar to this. I have two sites that both run at 100 Mbps with about 30 ms across the tunnel. iperf with a 256 KB window size will hit ~50 Mbps. If I run two streams at that size, it will saturate the connection.
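
That single-stream figure lines up with the window arithmetic: 256 KB / 30 ms works out to roughly 70 Mbps as the ceiling for one TCP stream, before overheads. The two-stream test was along these lines (iperf2 syntax; the hostname is a placeholder):

    # -P 2 runs two parallel client streams against the same server
    iperf -c remote-host -w 256k -P 2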

All other regular traffic over the tunnel maxes out at 20 or so Mbps, whether between Win7 machines, Win7 pulling from Solaris CIFS, or two Solaris machines replicating.

There are a few other issues from the main site too: a speedtest from inside will saturate the uplink, but hitting an internal speedtest from outside will cap at 20 Mbps. I've been working a bit with a tech at our ISP, and the same Speedtest Mini via the same route to their network, but hosted on their machine, will saturate 100 Mbps.

Any ideas?  The hardware on both ends is adequate, and since iperf can saturate the connection, I'm a bit stumped.
« Reply #12 on: July 06, 2012, 19:02:23 »
iridris ***
Posts: 145

Taking a stab in the dark here - What version of m0n0wall are you using? Have you tried the latest betas?
 