On 29/07/10 19:26, Marco Cesarano wrote:
> I'm changing the checkpoint subsystem, but when a
> file is being deleted and a power loss happens, the old chunks that
> belong to the file that I was deleting remain on the disk.

Deleting files in YAFFS, at its heart, only consists of writing out a new record to show that the file has been deleted. Bear in mind that this is done to a new chunk: the file's old record is still there but has an older sequence number, and the file's data chunks are still there, with their tags still identifying them as part of that file. Logically, the data chunks have become obsolete without you having to touch them.

Recovering the chunks that belonged to the file generally happens at some point later. Only when all those chunks have been reclaimed can the deletion record itself be deleted.

In the event that power is lost after the deletion record was written but before the chunks were recovered, the mount-time scan should be able to figure out what happened, and I would suggest that it's a bug if it doesn't. If you lose power before, or while, the deletion record is written, then you have no record of the deletion and the file remains present.

> When the system is rebooted, the checkpoint doesn't know
> anything about the deletion, and so some old chunks remain on the nand.

If the deletion record was correctly written, then (traditionally) any checkpoint pre-dating it is invalid. (What I term here as "deletion" records are really just file rename records: the file is renamed into the "deleted" directory and simultaneously truncated to zero length.) Any work on incremental checkpoints must, to be semantically correct, take proper account of renames and truncates; if it does, then surely the question of reclaiming blocks comes out in the wash?

> If the gc hasn't time to recover them, the space used by such chunks
> grows ... for this reason, if many power losses happen during a file
> deletion I run out of space.
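To illustrate the "newest record wins" idea, here is a minimal sketch in C. The struct and function names are invented for the example and are not the real yaffs_guts structures; the point is only that the header with the highest sequence number is authoritative, so a later deletion record obsoletes earlier headers and data chunks without touching them.

```c
#include <assert.h>

/* Hypothetical, simplified model of a file's object headers: each
 * header carries the sequence number that was current when it was
 * written, and a flag saying whether it records a deletion. */
struct hdr {
    unsigned seq;   /* sequence number at the time the header was written */
    int deleted;    /* nonzero if this header records the deletion */
};

/* The authoritative header is the one with the highest sequence number. */
static const struct hdr *latest(const struct hdr *h, int n)
{
    const struct hdr *best = &h[0];
    for (int i = 1; i < n; i++)
        if (h[i].seq > best->seq)
            best = &h[i];
    return best;
}

/* A data chunk of the file is logically obsolete as soon as the newest
 * header says the file was deleted - no chunk needs to be rewritten. */
static int chunk_obsolete(const struct hdr *h, int n)
{
    return latest(h, n)->deleted;
}
```

The mount-time scan can apply the same rule: whatever survives a power loss, comparing sequence numbers tells it which record is current.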
Whenever YAFFS writes files or metadata, it carries out some gc. If it is running out of space it does so more aggressively. Your writes should not be failing unless the filesystem is really completely full.

Ross

--
eCosCentric Ltd, Barnwell House, Barnwell Drive, Cambridge CB5 8UU, UK
Registered in England no. 4422071. www.ecoscentric.com
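[A sketch of the "gc harder as space runs out" behavior described above. The function and threshold names are invented for illustration, not YAFFS API; the real code's policy differs in detail.]

```c
#include <assert.h>

/* Hypothetical policy: do a little gc on every write, and become more
 * aggressive as the number of free blocks approaches the reserve, so
 * that writes only fail when the filesystem is genuinely full. */
static int gc_urgency(unsigned free_blocks, unsigned reserved_blocks)
{
    if (free_blocks > 2 * reserved_blocks)
        return 0;   /* plenty of space: passive, background gc only */
    if (free_blocks > reserved_blocks)
        return 1;   /* getting tight: gc a block on every write */
    return 2;       /* near the reserve: aggressive gc before writing */
}
```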