Re: [Yaffs] YAFFS2 Memory Usage Revisited

Author: Andrew McKay
Date:  
To: Charles Manning
CC: yaffs
Subject: Re: [Yaffs] YAFFS2 Memory Usage Revisited
Charles Manning wrote:
> On Wednesday 12 August 2009 16:00:40 Andrew McKay wrote:
>> Hey Charles,
>>
>> When testing with the 2GB NAND I was visited by the OOM killer a few times.
>> This makes me think we're short on RAM for handling a 2GB NAND part. Our
>> board currently has 32MB of RAM, of which 8MB is used for a RAM disk. When
>> I dropped the ramdisk down to 3.5MB for testing purposes, I didn't have
>> issues with the OOM killer any more. We're looking at moving up to 64MB of
>> RAM to avoid this issue. However, in the future I'd like to be able to
>> estimate the memory usage of YAFFS2 based on NAND size.
>>
>> I found a thread about YAFFS2 memory usage, and I just want to make sure I
>> understand it correctly.
>>
>> http://www.yaffs.net/lurker/message/20090701.190059.23524635.ca.html#yaffs
>>
>>> * yaffs_Objects: Each object (file, directory, ...) holds a yaffs_Object
>>> in memory, which is around 120 bytes per object.
>>
>> So every file, directory, etc. uses up 120 bytes of RAM. This is all the
>> time? Right from when the filesystem is mounted? So if I have 1000 objects
>> on the device I'll be using up 120000 bytes?
>>
>>> * yaffs_Tnodes: These are the things used to build the trees.
>> The part I'm using has 8192 erase blocks and 64 pages per erase block.
>> That means there are 524288 chunks in my filesystem. Using your equation I
>> come up with:
>>
>> Log2(524288) = 19 bits
>> 19 + 1 = 20 (which is already even)
>>
>> So 20 bits will be used to represent each chunk.
>>
>> Assuming the worst case, where the filesystem is full, I will be using all
>> 524288 chunks. This means that I will need 20 * 512K, which is 10MB of RAM,
>> to store all the Tnodes. Does that seem about right?
>
> Bits mate, not bytes... Therefore that calc should come out closer to
> 1.2 Mbytes.


ARG! This is what I get for trying to work late at night. Oops! =)
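
Redoing the arithmetic in bits, here's a quick standalone C sketch (not
YAFFS code; the width rule just follows the equation quoted above, and the
geometry is this part's 8192 blocks x 64 pages/block):

#include <stdio.h>

int main(void)
{
    unsigned long chunks = 8192UL * 64;     /* 524288 chunks */

    /* Bits to identify a chunk: log2, plus one, rounded up to even,
       as in the equation quoted above. */
    unsigned bits = 0;
    unsigned long c = chunks;
    while (c >>= 1)
        bits++;                             /* log2(524288) = 19 */
    bits++;                                 /* 19 + 1 = 20 */
    if (bits & 1)
        bits++;                             /* 20 is already even */

    /* Worst case, filesystem full: one entry per chunk, counted in
       bits, then converted to bytes. */
    unsigned long bytes = (bits * chunks) / 8;

    printf("%u bits/chunk -> %lu bytes of tnodes (~1.25 MB)\n",
           bits, bytes);
    return 0;
}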

> The calculation is somewhat complicated by the fact that Tnodes are managed in
> arrays of 16, i.e. 40 bytes in this case. Thus:
> * On average, larger files will have a wasted 20 bytes of Tnode space.
> * Very small files will still need 40 bytes of Tnode, even if they only use a
> small amount of that.
>
> Thus lots of small files will skew the numbers a bit.
>
> If you read /proc/yaffs you can get the actual numbers in use and use those:
>
> nTnodesCreated * 40 bytes
> +
> nObjectsCreated * approx 120 bytes
>
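
A sketch of that estimate in C (the 40- and 120-byte figures are the
approximations above; the parameter names just mirror the /proc/yaffs
counters, and yaffs_ram_estimate is a made-up helper name, not a real
YAFFS function):

#include <stdio.h>

/* Rough YAFFS RAM estimate from the /proc/yaffs counters, using the
   approximations above: 40 bytes per tnode group, ~120 bytes per
   object. */
static unsigned long yaffs_ram_estimate(unsigned long nTnodesCreated,
                                        unsigned long nObjectsCreated)
{
    return nTnodesCreated * 40 + nObjectsCreated * 120;
}

int main(void)
{
    /* Example: objects alone for a 22527-file tree come to ~2.6 MB. */
    printf("%lu bytes\n", yaffs_ram_estimate(0, 22527));
    return 0;
}
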
> It would be possible to make a tweak to handle very short files better. Files
> smaller than 1 chunk don't really need a tnode tree since the tnode pointer
> could be stored directly in the object structure.
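
To illustrate that tweak (purely hypothetical; the names here are not real
YAFFS structures, just a sketch of the idea):

#include <stdio.h>

/* Hypothetical sketch: a file no larger than one chunk stores its
   chunk ID where the tnode-tree pointer would otherwise go, so it
   needs no 40-byte tnode group at all. */
struct obj_data {
    int hasInlineChunk;        /* nonzero: chunkId valid, no tree */
    union {
        void *topTnode;        /* normal case: root of tnode tree */
        unsigned chunkId;      /* short-file case: its one chunk  */
    } u;
};

int main(void)
{
    struct obj_data d;
    d.hasInlineChunk = 1;
    d.u.chunkId = 42;          /* one-chunk file, zero tnode bytes */
    printf("inline chunk %u\n", d.u.chunkId);
    return 0;
}
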
>
>
>> I was also copying my Linux source tree to NAND. It's about 22527 files.
>> That would mean it requires 120 * 22527 bytes, or about 2.6 MB of RAM,
>> for all of the Objects.
>>
>> Of course, as you mentioned in the email, there is some other overhead on top
>> of this, but this should account for a large portion of the memory required to
>> handle a YAFFS filesystem?
>>
>> Thanks again,
>> Andrew McKay
>> Iders Inc.