News: This forum is now permanently frozen.
Pages: [1]
Topic: M0n0 1.8.1 Traffic Shaping over bridge  (Read 1188 times)
« on: November 13, 2014, 12:24:35 »
JohnJFowler *
Posts: 18

Hi,

We have been using M0n0wall v1.3.4 on an ALIX 2D3 as a WAN simulator for quite some time, with very good results, using the Traffic Shaper over bridged connections to simulate various bandwidth scenarios for our needs.

However, in a recent trial with a slightly different network setup, we are experiencing dropped connections and excessive CPU load (85-100%) with the same v1.3.4, and also with v1.8.1 on the same ALIX 2D3.

We have also tried a slightly beefier setup using an HP ProLiant ML110 G5 and Intel PRO/1000 LAN cards, with similar results of dropped connectivity (CPU usage was below 25%).

The previous network setup used two differently subnetted networks on "dumb" switches physically linked together, with a M0n0wall bridging the LAN and OPT1 ports between the switches. Traffic shaping was simple: pipes for various bandwidth and latency values (no masks), combined with rules sending all protocols and all traffic through the pipe in both directions. Shaping worked very well, met our needs for quite a few years, and could easily simulate various WAN connections.
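For context, M0n0wall's shaper is FreeBSD's dummynet under the hood, so the old setup corresponds roughly to an ipfw configuration along these lines (a hedged sketch: the rule numbers and the 2 Mbit/s / 50 ms figures are illustrative, not values from the post):

```shell
# Rough dummynet equivalent of the old shaper config (illustrative values).
# One pipe simulates the WAN link: a bandwidth cap plus fixed latency,
# and no mask, so all flows share a single queue as described above.
ipfw pipe 1 config bw 2Mbit/s delay 50ms

# Push all IP traffic crossing the bridge through the pipe, both directions.
ipfw add 100 pipe 1 ip from any to any
```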

The new network setup is very similar in physical design, but using VLANs and Trunk ports.
In this case we have two Cisco 2960 switches, each containing various VLANs, to represent two different physical "site" networks (i.e. an office building). The switches are then physically linked together via a trunk port on each switch (which is effectively the "WAN" connection).

The network connection is working great. We then place the M0n0wall in as the WAN simulator by bridging LAN and OPT1 to form the link between the trunk ports.
So the topology is:

Cisco -> trunk port -> M0n0wall (bridge) -> trunk port -> Cisco

With the Traffic Shaper disabled, traffic flows through at the normal speeds expected on the network, so no firewall rules are blocking traffic, etc., and we are happy that the "WAN" runs at full speed through the M0n0wall.

However, once the Traffic Shaper is enabled, the connections between the two switches appear to drop and the two "sites" become isolated, with no WAN connection. The M0n0wall status page doesn't appear to show any dropped packets, and the buffers and caches do not appear to be full (there is quite a bit spare to utilise), so we suspect packets are being queued by M0n0wall and either never released, or dropped?

Again, the traffic shaper is configured with simple pipes specifying bandwidth and/or latency with no masks, and the rules send everything through the pipe in both directions on the LAN port. We have also tried single incoming and outgoing directions, and separate pipes tagged to separate incoming and outgoing rules on both the LAN and OPT1 ports, all with the same issues.

Disabling the Traffic Shaper lets the network revert back to normal.

The only difference we can see from our old setup is that we are bridging a trunk port carrying VLANs instead of doing simple IP routing.
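One avenue worth checking (a guess on my part, not something confirmed in the thread): on the FreeBSD releases underlying M0n0wall, whether bridged layer-2 frames are handed to ipfw/dummynet at all is controlled by sysctls, and 802.1Q-tagged frames take a different path through the stack than untagged ones. The knobs involved look like this:

```shell
# FreeBSD sysctls that decide whether bridged / layer-2 frames reach ipfw
# (M0n0wall sets these itself; shown only to illustrate where tagged
# trunk traffic can diverge from the plain untagged case).
sysctl net.link.bridge.ipfw    # pass if_bridge member traffic to ipfw
sysctl net.link.ether.ipfw     # pass layer-2 (Ethernet) frames to ipfw
```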

In the end we have had to resort to an alternative product, WanBridge, for the same simulation in this case, and it does not suffer dropped connections over the same trunk links.

Can anyone shed any light on why this may have occurred, to put our minds at rest? M0n0wall has fulfilled a lot of functions for us over the years, in some cases putting some commercial devices to shame!

Thanks
John
« Reply #1 on: November 15, 2014, 19:01:28 »
Lee Sharp *****
Posts: 517

VLANs can add considerable load to a system, and on some cards more than others.  I am betting that is your problem here.  Can you leave all the VLANning on the switches and have different default-VLAN (access) ports feed the M0n0wall?
« Reply #2 on: November 17, 2014, 09:33:53 »
JohnJFowler *
Posts: 18

Hi Lee,

Unfortunately, I cannot as it is a dedicated trunk port between the two switches simulating the WAN connection.

All the other physical connections on the switches are separated using VLANs to simulate different parts of the locations (i.e. offices), with each switch representing a different location (i.e. switch one for North and switch two for South), with restricted access for certain ranges too, and no free ports available.

My original setup was with an ALIX 2D3, and there was quite a high CPU load as traffic passed through with the Traffic Shaper enabled; but when I re-tried with a dual Xeon, where the CPU load was only just "tickling" the graph, I was still getting the same result of traffic being denied.

I think it may be down to the Dummynet used in M0n0wall and how it handles VLANs when traffic shaping, as the Traffic Shaper worked fine on a standard (untagged) network.
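If anyone wants to test that hypothesis, one hypothetical check (interface names assumed for an ALIX with sis NICs, not taken from the thread) is to capture with link-level headers on both bridge members and see whether the 802.1Q tags survive the dummynet pipe:

```shell
# Capture tagged frames arriving on the LAN-side bridge member...
tcpdump -e -n -i sis0 vlan
# ...and compare with what leaves on the OPT1-side member while the
# shaper is on; frames that vanish or lose their tag here would point
# at dummynet's handling of tagged layer-2 traffic, not at the switches.
tcpdump -e -n -i sis2 vlan
```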
« Reply #3 on: November 18, 2014, 04:37:46 »
Lee Sharp *****
Posts: 517

Are you out of ports?  Trunk into your switch, and have VLAN 1 be the default on port one and NIC one, VLAN 2 the default on port two and NIC two, and so on...
« Reply #4 on: November 18, 2014, 06:09:04 »
JohnJFowler *
Posts: 18

Yes, we are out of ports if we try to map the number of VLANs onto individual ports and NICs, both physically on the M0n0wall and on the switches.

There really isn't an easy way of reconfiguring the whole network, due to the various internal routing and ACLs required for the network simulation, and having the one trunk port as the "gateway" most realistically simulates our specific setup as deployed.
« Reply #5 on: November 19, 2014, 23:02:41 »
Lee Sharp *****
Posts: 517

Then get a bigger box.  Vlan is load, and your box is low end to begin with.
« Reply #6 on: November 20, 2014, 10:01:55 »
JohnJFowler *
Posts: 18

By bigger box, you mean hardware?

If so, then alongside the ALIX 2D3 I also used an HP Compaq ML110 Generation 5 server, running a dual Xeon E3120 (3.16 GHz) with 4 GB RAM and Intel PRO/1000 gigabit LAN cards, where the Xeon's CPU wasn't even being tickled under the same M0n0wall version, configuration and LAN conditions.

This is obviously far quicker than the 800 MHz AMD Geode in the ALIX, and if it were down to load I would also have expected high CPU usage, but it wasn't registering anything over 5%.

The same goes for the LAN cards: the ALIX has 100 Mbps ports, while the server has gigabit.

Using the server with shaping turned off, I could see throughput top 200 Mbps, so there was still plenty of headroom under heavy traffic load, and all the VLAN traffic was passed without issue. Only when shaping was turned on did things start to fail.

The other Linux-based product seems to be working very well with the same hardware and network, with no load issues, so we may have to abandon M0n0wall as a WAN simulator in VLAN configurations in the future.

It is a shame, as we have always preferred and enjoyed using M0n0wall; it has provided many satisfying moments over the years, outperforming other open-source and commercial products, and packs many useful features into a small package.
« Reply #7 on: November 21, 2014, 14:48:43 »
Lee Sharp *****
Posts: 517

Keep in mind that with VLANs, in and out go over the same NIC, so that 200 meg was actually 400 meg.  Also, all NICs are NOT equal.  This is why I only use Intel NICs: they do more in hardware, while others do more in the driver, and that is an additional bottleneck.  Lastly, the VLAN implementation of many cards is pure crap: all software and hugely inefficient.

If it were me, I would get another switch so each VLAN could have a dedicated port.  I would also use something better than a Geode, which cannot do 100 meg anyway.  If that is not an option, build one using an old PC (a Pentium D is fine) and PCIe-based Intel gigabit NICs.  That gives you a good card, a good bus, and a full VLAN implementation in hardware.
« Reply #8 on: November 21, 2014, 15:44:08 »
JohnJFowler *
Posts: 18

Thanks Lee,

The 200 Mbps was the combined traffic of both in and out, so definitely not scraping the limits of the gigabit NICs. There also wasn't a vast amount of traffic (the other product reported a maximum of only 4,000 packets per second passing through the trunk).

I had already used a more powerful machine (Intel Xeon based) with Intel server-grade gigabit NICs (matching versions and models), so most of the testing points to M0n0wall's Traffic Shaper being the bottleneck or issue in this particular case.

As for isolating the VLANs, each switch port was already assigned to a single VLAN (no ports had multiple VLANs assigned), with the exception of the two trunk ports between the switches, which cannot be changed at the moment given the network and infrastructure we need to simulate. It may be something to test in the future, but for now it appears M0n0wall can't be used in our current VLAN environments without a significant investment of testing time.

To be honest, the other product seems to work well and will probably become the "norm" for our future WAN simulations.

But thank you for your comments and information on this issue, and maybe M0n0wall will be able to return as a good "all-rounder" going forward :)

Cheers
John
 