
I did a test: a 1024x768 PNG vs the same PNG at 2048x1536 came out to 299k vs 664k. So for raster images it more than doubles the footprint; if you use vector assets, however, there should be little impact.


After decompression into memory, that 2048x1536 image will take exactly four times the memory as a 1024x768 image.
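The "exactly four times" claim follows from doubling both dimensions: a decoded bitmap's in-memory size is width × height × bytes per pixel, independent of how well the PNG compressed on disk. A minimal sketch, assuming 32-bit RGBA pixels:

```python
def decompressed_bytes(width, height, bytes_per_pixel=4):
    """In-memory size of a fully decoded bitmap (assumes 32-bit RGBA)."""
    return width * height * bytes_per_pixel

small = decompressed_bytes(1024, 768)   # 3,145,728 bytes (~3 MB)
large = decompressed_bytes(2048, 1536)  # 12,582,912 bytes (~12 MB)
print(large // small)  # 4
```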


But you don't store compressed PNGs in RAM for display by the GPU. Does the GPU even support lossless texture compression? I would imagine it supports lossy texture compression.

Edit: It is also interesting that the ~2x increase in the size of the PNG can be explained by run-length-style compression: when you double the pixels in each direction, RLE readily absorbs the horizontal pixel replication but not the vertical one.
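That intuition can be checked with a toy run-length encoder (a simplification; real PNG uses DEFLATE with per-row filters, but the row-wise argument is the same). Pixel-doubling an image horizontally only lengthens existing runs, costing nothing extra, while each duplicated row must be encoded again, so the encoded size grows ~2x rather than 4x:

```python
def rle(row):
    """Toy run-length encoding: a list of (value, run_length) pairs."""
    out = []
    for px in row:
        if out and out[-1][0] == px:
            out[-1] = (px, out[-1][1] + 1)
        else:
            out.append((px, 1))
    return out

def pixel_double(image):
    """Double an image in both directions by replicating pixels."""
    doubled_rows = [[px for px in row for _ in (0, 1)] for row in image]
    return [row for row in doubled_rows for _ in (0, 1)]

image = [[1, 2, 2, 3], [4, 4, 5, 6]]
big = pixel_double(image)

# Horizontal replication just lengthens runs: same number of (value, run) pairs per row.
assert len(rle(big[0])) == len(rle(image[0]))

# But every duplicated row is encoded from scratch, so the total roughly doubles.
small_cost = sum(len(rle(r)) for r in image)
big_cost = sum(len(rle(r)) for r in big)
print(big_cost / small_cost)  # 2.0
```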


It seems Apple uses a "modified" PNG format for iOS (enabled by default) that compresses the images even further when they are added to the app bundle. http://iphonedevwiki.net/index.php/CgBI_file_format#Decoding


True, though as jevinskie says, we're not talking here about the way images are stored on disk but about what needs to go in the GPU memory.


Totally understood, and I'm well aware of the increased memory footprint required once they are uncompressed. Way back in '08, when I was working on my first iPhone app, you had to be very careful with memory: a small animation with 120 images of 50k each could quickly max out the 50mb of usable memory the iPhone 3G had, unless you cached properly and released assets as soon as they were not in use. I should have been more specific in my original post.
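The anecdote checks out on a back-of-envelope basis. Assuming (hypothetically) the 120 frames decode to full-screen 320x480 32-bit RGBA bitmaps, the decoded animation alone blows past a ~50 MB budget even though the files total only ~6 MB on disk:

```python
FRAMES = 120
WIDTH, HEIGHT = 320, 480       # iPhone 3G screen; per-frame size is an assumption
BYTES_PER_PIXEL = 4            # 32-bit RGBA

per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL        # 614,400 bytes (~0.6 MB)
total_mb = FRAMES * per_frame / (1024 * 1024)
print(f"{total_mb:.1f} MB")    # ~70.3 MB decoded, vs ~6 MB of PNGs on disk
```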


Yeah, iOS device GPUs support lossy texture compression in the form of PVRTC. There's no lossless texture compression support, as far as I know. (I'm talking in-memory here, obviously, not on disk as PNG or JPEG or whatever else.)
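Since PVRTC is a fixed-rate format (4 or 2 bits per pixel, versus 32 for uncompressed RGBA), the in-memory savings can be computed directly; a sketch:

```python
def texture_bytes(width, height, bits_per_pixel):
    """In-memory footprint of a texture stored at a fixed bit rate."""
    return width * height * bits_per_pixel // 8

rgba   = texture_bytes(1024, 1024, 32)  # 4,194,304 bytes (4 MB)
pvrtc4 = texture_bytes(1024, 1024, 4)   # 524,288 bytes (512 KB), 8x smaller
pvrtc2 = texture_bytes(1024, 1024, 2)   # 262,144 bytes (256 KB), 16x smaller
print(rgba // pvrtc4)  # 8
```

The trade-off is quality: PVRTC is lossy, and on iOS it also requires square, power-of-two texture dimensions.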



