Topic: Filesystem full
« on: January 09, 2013, 14:53:54 »
Phatsta *
Posts: 12

Hi everyone!

I have experienced my very first problem with m0n0wall, after well over two years of running it at several customers' sites and in our own environment. Imagine if your car ran that long flawlessly ;)

The specific install that I have a problem with resides on a VMware ESXi server, as a virtual machine. It's version 1.34, from the file called "generic-pc-1.34-vm.zip".

The problem I have is that whenever I try to activate multicast in my network (not in the m0n0, since it doesn't support it), the m0n0 log fills up with "filesystem full" messages. Whenever this happens, the m0n0 DHCP server stops handing out IPs and the users lose connectivity. Considering the disk was only 26MB to start with and this network is pretty big, I thought it might be worth increasing the size of the hard drive. I used vmkfstools to increase the size to 200MB (following this article: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004047). Everything went well, but unfortunately it didn't help much. It just takes longer for the filesystem to fill up, but it still does.

To me it sounds more like some sort of problem that floods the m0n0 with data than like the filesystem being too small. Just a guess, though. Any help would be greatly appreciated!



Edit: Just thought I'd be a little more clear about the disk increase, since as far as I understand it's not really standard to do this.

I powered down the m0n0 VM and connected to the ESXi box through SSH. There I ran the command:
vmkfstools -X 200M -d eagerzeroedthick /vmfs/volumes/datastore/m0n0folder/m0n0wall.vmdk

I then ended up with two more files, m0n0wall-s001.vmdk and m0n0wall-s002.vmdk, which are the virtual disk extents. Looking at the startup of the m0n0, it looks as if it only detects one disk of 200MB. I can't get a screenshot of it right now, so here's one from a test machine which is actually 500MB: http://www.linford.se/temp/m0n0_hdd_500mb.png
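
If anyone wants to double-check what the expansion actually left on the datastore, listing the VM folder from the same ESXi SSH session should show the descriptor plus the extent files (same placeholder path as in the vmkfstools command above):

$ ls -l /vmfs/volumes/datastore/m0n0folder/
# expect m0n0wall.vmdk (the small descriptor) plus the m0n0wall-s00x.vmdk extents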
« Last Edit: January 09, 2013, 15:19:10 by Phatsta »
« Reply #1 on: January 09, 2013, 15:52:34 »
Fred Grayson *****
Posts: 994

AFAIK, regardless of version, m0n0wall writes its logs to RAM disk, not to files on hard disk.

Also, AFAIK, m0n0wall will not use all the memory you throw at it. It will use only some of the installed memory, but I am unsure of the exact figure, as this depends highly on configuration.

The version of m0n0wall I run here fits comfortably on a 32MB CF card. Writing the image to larger CF cards is pointless as the excess space is consumed but not actually used.
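
If you want to confirm that on your box, you can look from the console or the webGUI exec shell. On a stock FreeBSD system something like the following shows it, though I can't promise the stripped m0n0wall image ships mdconfig:

$ df -h /
# root shows up as /dev/md0, a memory-backed disk, not the hard disk
$ mdconfig -l
# lists the md (memory disk) units, if the binary is present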


--
Google is your friend and Bob's your uncle.
« Reply #2 on: January 09, 2013, 17:31:30 »
Phatsta *
Posts: 12

Okay, I see. It seemed plausible since the time until breakdown increased when the disk space increased, but that's probably just the difference between the standard 26MB and whatever maximum m0n0 can actually use.

The RAM assigned to this m0n0 is the standard 128MB, but only 26% is used, so that shouldn't be the culprit.

In that case I could very well be back at the filesystem being too small to handle this network. The network has two VLANs, about 30 nodes and maybe 600-800 simultaneously connected units. Activating multicast of course comes with a lot of data. But I really don't understand why that should affect the m0n0, since it doesn't even support multicast and therefore shouldn't be part of the topology; then again, I'm not an ace at networking.

So I guess I'll rephrase the question: is there a way to get m0n0 to use more space?

This is the exact error message that comes up in the log:
kernel: pid 131 (dhcpd), uid 0 inumber 1173 on /: filesystem full

It gets repeated every 10 seconds or so with only the inumber changing.
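
If I read that message right, it's dhcpd failing to write a file (presumably its leases database) because the root RAM filesystem is out of space. From the exec shell, something like this should show what's actually eating the space; the /var/db/dhcpd.leases path is where stock ISC dhcpd keeps its leases, so I'm assuming m0n0 does the same, and du may or may not be in the image:

$ du -sk /var/*
# size (in KB) of each directory under /var
$ ls -l /var/db/dhcpd.leases*
# size of the lease database (path assumed from stock ISC dhcpd)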
« Reply #3 on: January 09, 2013, 19:32:44 »
Fred Grayson *****
Posts: 994

I don't have an answer for you, but perhaps others who read here will.

I have a problem with the log being flooded here also, related to IPv6 router advertisements.

But it just renders the log useless; other functions continue to work fine.

I haven't gotten a response to my question about this though.

--
Google is your friend and Bob's your uncle.
« Reply #4 on: January 09, 2013, 20:33:37 »
fruit *
Posts: 22

I really don't have any ideas, but this looks like a similar problem:
http://forum.pfsense.org/index.php?topic=43946.0

Googling for kernel: (dhcpd) on /: "filesystem full" may perhaps be useful as well.
« Reply #5 on: January 10, 2013, 09:25:23 »
Phatsta *
Posts: 12

I really don't have any ideas, but this looks like a similar problem:
http://forum.pfsense.org/index.php?topic=43946.0

Googling for kernel: (dhcpd) on /: "filesystem full" may perhaps be useful as well.


This looks really promising. The only problem is that I haven't got a clue how to get into the shell to type those commands. I tried to SSH to the m0n0, but it wasn't that easy :)

Anyone know how to?

Scratch that, I spoke too soon because I was pressed by work. I know how to do it now, and this is the output:

$ df -h
Filesystem    Size    Used   Avail Capacity  Mounted on
/dev/md0       15M     15M     63K   100%    /
devfs         1.0K    1.0K      0B   100%    /dev
/dev/ad0a     9.6M    8.9M    692K    93%    /cf

Seems all the filesystems could use some more space. I'll try to find out how to edit the rc.embedded.
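
For reference, FreeBSD-based embedded setups typically create their RAM disks at boot with mdmfs calls roughly like the ones below, and enlarging them means bumping the -s sizes. This is purely illustrative; the paths and sizes are assumptions on my part, and as it turns out below, m0n0wall doesn't expose an equivalent knob:

/sbin/mdmfs -S -M -s 40m md /var
/sbin/mdmfs -S -M -s 40m md /tmp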


Edit: Okay, that doesn't seem possible with m0n0. But at the same time, as someone pointed out, the filesystem is kept in RAM, not on the hard drive. I wonder if increasing the RAM would work, even though it says only 26% is used. I'll have to try.



Edit 2: I increased the RAM from 128MB to 512MB and rebooted, but the filesystem sizes remain the same:
$ df -h
Filesystem    Size    Used   Avail Capacity  Mounted on
/dev/md0       15M     14M    1.7M    89%    /
devfs         1.0K    1.0K      0B   100%    /dev
/dev/ad0a     9.6M    8.9M    692K    93%    /cf

It's not full yet, thanks to the reboot, but it soon will be when units start requesting IP addresses :(
« Last Edit: January 10, 2013, 11:30:10 by Phatsta »
« Reply #6 on: January 10, 2013, 14:50:22 »
Phatsta *
Posts: 12

I don't understand why, but when I deactivated multicast on all my switches and on the wireless controller that manages all the APs, the problem went away again.

What does multicast do that affects m0n0wall, which doesn't even support multicast?
« Reply #7 on: January 10, 2013, 15:37:16 »
Fred Grayson *****
Posts: 994

Multicast broadcasts packets; these packets are directed everywhere on the network segment. m0n0wall would see them as unsolicited and block them unless otherwise allowed, thus generating a lot of log file activity.

--
Google is your friend and Bob's your uncle.
« Reply #8 on: January 10, 2013, 15:45:00 »
Phatsta *
Posts: 12

Multicast broadcasts packets; these packets are directed everywhere on the network segment. m0n0wall would see them as unsolicited and block them unless otherwise allowed, thus generating a lot of log file activity.

So then I'm essentially back to m0n0 not being able to handle the size of my network, and I can find no way of making it able to, since I'm not a programmer. That's sad :-\
« Reply #9 on: January 10, 2013, 16:04:58 »
Fred Grayson *****
Posts: 994

It would seem to me that no matter how large the filesystem is made, the log could eventually be overflowed. I would look into configuring a rule that drops those packets without logging, to see if that solves it for you.
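
In the webGUI that would just be a block rule for the multicast destination range on the LAN interface, placed above your other rules, with logging left unchecked. Under the hood m0n0wall's filter is IPFilter, so the generated rule would look roughly like this (the interface name here is only an example):

block in quick on fxp0 from any to 224.0.0.0/4
# no "log" keyword, so matching packets are dropped silently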

As I said, I have a situation here where many thousands of messages are being written to the log, but it doesn't seem to otherwise affect m0n0wall.


--
Google is your friend and Bob's your uncle.
« Reply #10 on: January 10, 2013, 16:18:43 »
Phatsta *
Posts: 12

It would seem to me that no matter how large the filesystem is made, the log could eventually be overflowed. I would look into configuring a rule that drops those packets without logging, to see if that solves it for you.

As I said, I have a situation here where many thousands of messages are being written to the log, but it doesn't seem to otherwise affect m0n0wall.



Yeah, you're right. I got a similar reply from Manuel Kasper as well, where he suggested there might be DHCP request leaks between the VLANs, and that if multicast is set to IP multicast, there could be hundreds or maybe thousands of requests. From that I got a hunch it has something to do with the ESXi vSwitch being set to promiscuous mode. I read somewhere that promiscuous mode basically lets more traffic through. I'll have to read up on it... anyway, just a hunch.
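
If the box is on ESXi 5.x, I think the standard vSwitch security policy can be checked and tightened from the SSH shell with something along these lines (the exact esxcli syntax varies between ESXi versions, and vSwitch0 is just a guess at the switch name):

$ esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
$ esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=false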

I'll see what I can conjure :)
« Reply #11 on: January 10, 2013, 22:43:01 »
iridris ***
Posts: 145

Unless you really know what you're doing, you probably shouldn't be using Promiscuous mode in VMware. I'm betting that is at least part of the problem. Promiscuous mode is typically only used to monitor/sniff network traffic.
« Reply #12 on: January 11, 2013, 05:42:20 »
matguy *
Posts: 28

Also, a "file system" doesn't always mean hard storage, m0n0 uses some RAM drives for temp storage, especially logs.  Although, I wouldn't expect that simply adding more physcal RAM would change the sizes of those.

... Which points back to log size issues.

It may be that you've "outgrown" m0n0wall and need something that can scale bigger. pfSense can be a fairly easy migration from m0n0wall, and it scales up pretty well when larger hardware is thrown at it. It certainly requires more RAM and hard storage than m0n0wall, but that's kind of the point: where m0n0wall is well suited to small firewall duties, pfSense can scale up pretty large.
« Reply #13 on: January 11, 2013, 13:00:39 »
CamKrist *
Posts: 1

A similar subject was being discussed on Yahoo Answers last week. I can post the link if needed.
« Reply #14 on: January 17, 2013, 14:55:33 »
Phatsta *
Posts: 12

A similar subject was being discussed on Yahoo Answers last week. I can post the link if needed.

Please do. Any help is greatly appreciated!
 