Date : Mon, 18 Sep 2006 18:02:54 +0100 (BST)
From : Greg Cook <debounce@...>
Subject: Re: Compressed ROMFS?
On Mon, 18 Sep 2006 09:10:12 +0100, Steve O'Leary
<navalenigma@...> wrote:
> > From: info@...
> > To: bbc-micro@...
> > Date: Mon, 18 Sep 2006 07:50:20 +0100
> > Subject: Re: [BBC-Micro] Compressed ROMFS?
> >
> > In article <4e67ac7d89info@...>,
> > Sprow <info@...> wrote:
> > > In article <20060916095434.53696.qmail@...>,
> > > Greg Cook <debounce@...> wrote:
> > > > Did anyone ever come up with a system for compressing files in
> > > > ROMFS images? I sketched out a dictionary based algorithm
> > > > yesterday but the hard part is finding the repeating substrings.
> > >
> > > I had a quick bash this morning at Huffman coding some data
> > >
> > > For tokenised BASIC you can save 21%
> > > For 6502 binaries you can save 10%
> >
> > As a postscript to my own message, I tried some bitmapped data too
> > (very high proportion of 0's for black) and that compressed by 61%.
> >
> > Lastly I wrote an RLE compressor using the madness proposed in
> > <4e5b03df85info@...> and that gave
> > For 6502 binaries you can save just 1%
> > For tokenised BASIC you save -2% (i.e. bigger!)
> > For bitmap images with high black content 41%
> > Sprow.

http://homepages.tesco.net/~rainstorm/logohist.gif
"'At's more like it!"

> I'm glad you've had a go and got some figures. I can't see LZW
> performing any better than Huffman on binaries, and the savings were
> not that brilliant. The thing is, for the life of me I cannot actually
> remember why I wanted to do this in the first place now. There was a
> reason to it all, perhaps images, I'm not sure. Maybe if we wrote LZW
> in normal memory (not in a ROM) we could use the full memory as
> workspace and just see if it performs any better. As Greg mentioned
> there could be issues with the limited scratch space you have unless
> you alter PAGE, but we wouldn't want to do that.
>
> However as it's a fairly small algorithm I suppose it could run in
> sideways RAM and use what's left of its 16K as dictionary space?
> Limits its usefulness for those with[out] SW-RAM though :( And as
> I've never actually written an LZW routine, would we need to check
> for running out of dictionary space? Obviously depends on the size
> of file to be compressed; do you know what the average dictionary
> size is as a percentage of original file size?
Thanks for a great response. I was only wondering if it had been done
already before trying to code it myself. What brought it on was that
there were lots of odds and ends Logotron couldn't fit in the Logo ROM,
so they made up for it by throwing dozens of snippets into the manual.
I thought it would be nice to have a library to make up for the
language ROM's shortcomings, but it looked to be more than 16K (and it
probably is, counting all the procedures in the text that are
referenced later).

Nice to see interest in compression Beeb-style, 25 years on :-)
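
Out of curiosity, figures like Sprow's are quick to cross-check on a
modern machine before writing any 6502. The sketch below is ordinary
Python, nothing Beeb-specific and certainly not Sprow's actual code: it
uses the zeroth-order entropy of the byte counts as a stand-in for what
a per-byte Huffman code can reach, and a naive <count,value> run-length
pass (not the scheme from the message Sprow cited), which shows why RLE
only pays off on run-heavy data like mostly-black bitmaps and can even
grow tokenised BASIC.

# Quick cross-check of Huffman-vs-RLE savings on ordinary files.  The
# zeroth-order entropy of the byte counts stands in for the size of a
# per-byte Huffman code (a real Huffman code cannot beat this bound and
# is usually close to it); the RLE is a naive <count,value> scheme at
# two bytes per run, which is why it can make low-redundancy data bigger.
import math
import sys
from collections import Counter

def huffman_bound_bytes(data):
    """Approximate compressed size, in bytes, of a per-byte Huffman code."""
    counts = Counter(data)
    n = len(data)
    bits = -sum(c * math.log2(c / n) for c in counts.values())
    return bits / 8

def rle_bytes(data):
    """Size after naive run-length encoding: 2 bytes per run, max run 255."""
    size = 0
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        size += 2
        i += run
    return size

if __name__ == "__main__":
    data = open(sys.argv[1], "rb").read()
    print("Huffman bound : %.0f%% saved"
          % (100 * (1 - huffman_bound_bytes(data) / len(data))))
    print("Naive RLE     : %.0f%% saved"
          % (100 * (1 - rle_bytes(data) / len(data))))

Run over a tokenised BASIC program, a machine-code binary and a screen
dump, it should give the same general shape as the figures quoted above.
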
Regarding RAM, in case there's no SWRAM available the ROM could reserve
absolute workspace for itself and not clobber the user's memory. As I
understand it though, if it doesn't need much RAM it could just use
&C0 to &CF to save state between calls.
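
As for the dictionary-size question: I haven't written an LZW routine
either, but the quickest way to get a feel for it is to run a textbook
byte-wise LZW over a few representative files and count the entries.
Again this is just Python on a PC for rough numbers, not a proposal for
the Beeb version; the 3-bytes-per-entry and 12-bit-code figures in it
are assumptions about a minimal table layout, under which a full
dictionary comes to roughly 12K, about what would be left of a 16K
sideways bank once the code itself is in.

# A textbook byte-wise LZW compressor, instrumented to report how big
# the dictionary gets rather than to produce a usable output stream.
import sys

def lzw_stats(data):
    """Return (codes emitted, new dictionary entries created)."""
    table = {bytes([i]): i for i in range(256)}   # single-byte seed entries
    next_code = 256
    w = b""
    codes = 0
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                                # keep extending the phrase
        else:
            codes += 1                            # emit the code for w
            table[wc] = next_code                 # grow the dictionary
            next_code += 1
            w = bytes([byte])
    if w:
        codes += 1                                # flush the last phrase
    return codes, next_code - 256

if __name__ == "__main__":
    data = open(sys.argv[1], "rb").read()
    codes, entries = lzw_stats(data)
    print("input   : %d bytes" % len(data))
    print("phrases : %d codes emitted" % codes)
    print("dict    : %d new entries (%.0f%% of input length)"
          % (entries, 100.0 * entries / len(data)))
    # 12-bit codes allow 4096 entries including the 256 seeds; each new
    # entry can be stored as a 2-byte prefix code plus the appended byte.
    print("12-bit dictionary %s"
          % ("overflows; would need flushing" if entries > 4096 - 256
             else "is big enough for this file"))
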
Regards,
Greg Cook
debounce@...
http://homepages.tesco.net/~rainstorm/