Give your Mac imaginary unlimited storage thanks to Disk Utility’s bug (eclecticlight.co)
96 points by giuliomagnifico on Dec 26, 2020 | 62 comments


Back in BBS days I downloaded a utility that doubled connection speed. Skeptical but highly bored, I tried it.

And to my astonishment it worked!

A few minutes later, “the clock seems to be running really slow”

Clever program. Half speed clock makes all download speeds show double speed.


I recall a utility that proxied all requests through another server and compressed assets (e.g. large images) on the fly on the server, so the end client used much less bandwidth to load a heavy web page.

It worked quite well and I think some ISPs offered it out of the box in the end


Opera Mini did (and apparently still does) that.

IIRC the proxy server does way more than just compress assets: it re-encodes the page to an internal binary markup format and handles restricted JS execution with a simplified interaction model (the client had all of four events: unload, submit, click, and change; these events would trigger a server call, which executed the corresponding handler and sent the updates back).

Not everything worked on that thing (most obviously the complex web applications which were starting to crop up around that time), but it was absolutely magic on unreliable GPRS connections 15 years ago.


The client was actually even simpler than that, it was just clicks. Doing anything else wasn't practical with the average latency back then.

The rest was done on the server side using the old Opera Presto core - which scaled very economically, since it had gone through a lot of memory optimization efforts when it was ported to early smartphones/Japanese weirdphones. We needed to keep the users' browser "tabs"/windows around until the next click, potentially a few minutes away, so memory usage was a key economic concern at the time.

I ran the product/development team from early on until 2014. Glad you liked it. Opera Mini peaked at about 150M monthly active users in 2012-2013 or so. Since then I'm sure it's been a steady decline. If I were to guess right now, I'd say the new Chinese owners will maybe keep it alive for another 5-7 years, but who knows; they seem to be really focusing on Africa.


I used to use their proxy to bypass every web filter I ever came across haha. Ta for introducing me to the whack-a-mole that is bypassing porn filters, Opera! Sorry I never used your browser as intended.


Opera Mini on my BlackBerry made the GPRS connection usable 15 years ago.


Google Web Accelerator [0] did this. Requests were proxied via Google. A side effect of this was that you could access internal Google tools that had been restricted to Google's IP address range – oops!

[0] https://en.wikipedia.org/wiki/Google_Web_Accelerator


I might be remembering this totally wrong but didn't a mobile browser (maybe Opera?) offer this feature as well, like 5 or 10 years ago? I seem to remember trying it out for my Android phone back in the day


Yeah, Opera did this. I remember being amazed at the results on my Nokia E71 - viewing the desktop site rather than the terrible 2010-era mobile offerings, but getting great speeds due to the compressed sizes.


This is what an app called Onavo did. Facebook bought them in order to spy on what people were doing with their phones.

https://en.wikipedia.org/wiki/Onavo


I used their compression VPN before Facebook gobbled them up. It worked pretty well.


That sounds like Squid + data compression proxy. I think I set it up a while back and was impressed by how well it worked.

https://gist.github.com/mayli/7d07c5e2913f9d45f3d6a5b122b797...


One company that I worked at used NetLi (now part of Akamai). The approach they used (IIRC) had multiple data centers close to the end users and then a server in each client's datacenter. The server in the client data center and the servers in the edge data centers would establish a pre-negotiated connection that had its TCP parameters optimized for its traffic... and then serve compressed traffic over that connection.

It was pretty impressive back in the days of CGI and JSP, when everything was rendered on the server, before the presentation was pushed entirely out to the browser.


Same! Can't remember what it was called, but there was a Java application I used 15 years ago that proxied HTTP and compressed assets, which was useful in New Zealand at the time because bandwidth was limited and not very fast (I'm sure that's changed by now).


Browsh [1] offers similar functionality via SSH.

[1]: https://www.brow.sh/


I remember AOL (circa version 3 or 4) doing this. If you saved one of these (maybe by normal means, or maybe only by poking through the local cache) they'd have an .art filename extension and be some proprietary format that wouldn't load in most image software.


Aren't images on the web already compressed, and haven't they always been?

Compressing an already compressed image will surely not do much.


Compression in the context of (JPEG) images often implies lossy compression. You can apply that multiple times and still gain size benefits. Other ways are stripping metadata and reducing color palettes. Images on websites are often horribly unoptimized, so if you’re willing to compromise a little on quality you can have substantial gains in bandwidth requirements.
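For example, an image-squeezing proxy or build step could do something along these lines (the tools and quality settings here are just for illustration, not what any particular proxy actually used):

    # Re-encode a JPEG at lower quality, strip metadata, and cap its dimensions.
    convert input.jpg -strip -resize '1600x1600>' -quality 70 output.jpg
    # Optionally run a dedicated optimizer over the result as well.
    jpegoptim --max=70 --strip-all output.jpg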


I think I know the logic behind this, but could you clarify/confirm it?


Let’s say you normally could download 10 kilobytes in a minute. If your computer clock slowed down by half, the kilobytes would download at the same speed, but only 30 seconds would have passed in computer time, so the computer would calculate that you had downloaded 10 kilobytes in 30 seconds, or 20 kilobytes a minute. Really, a minute passes, not 30 seconds, but the computer doesn’t know that.


In this case the download speed is based on your system's clock. For example, 1 minute on your system takes 2 minutes of real time, so your computer thinks downloads finish in half the time they actually take.
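A back-of-the-envelope version of that arithmetic (the numbers are just illustrative):

    # 10,240 bytes transferred in a real minute, measured against a clock running at half speed
    echo 'scale=1; 10240 / 60' | bc    # real rate:     170.6 bytes/s
    echo 'scale=1; 10240 / 30' | bc    # reported rate: 341.3 bytes/s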


Disk Utility has fallen from a reliable professional tool to an unpredictable quagmire, especially when it comes to APFS.

Try to image or clone an APFS drive with Disk Utility, I dare you. Here’s all I can find that works. https://gist.github.com/darwin/3c92ac089cf99beb54f1108b2e8b4...


I had sort of thought this was by design: all the drives share the same pool of free space, so if you were to use some of it, they would then show less available space. I think it’s called space sharing. https://en.wikipedia.org/wiki/Apple_File_System


Yeah, this seems the same as creating a bunch of sparse bundles with a large max size.
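Something like this, presumably (the size, filesystem, and names are made up):

    # Create a 100 TB sparse bundle; it only consumes disk space as data is written into it.
    hdiutil create -size 100t -type SPARSEBUNDLE -fs APFS -volname Huge huge.sparsebundle
    hdiutil attach huge.sparsebundle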


Seems like a technique for the scammers on eBay who sell you an external hard drive filled with a heavy bolt and a 1GB flash stick. They program the drive to read 1TB but rewrite the 1GB over and over.

Worst thing is you never know until you need the data you wrote...


> They program the drive to read 1TB but rewrite the 1GB over and over.

My understanding is that they didn't specifically write a "program" to do it; all they did was "program" a bunch of false configuration data into the USB flash controller and let it silently corrupt the flash. If you can find their internal programming tools on the web, the drives can even be fixed by programming the correct configuration back.

>Worst thing is you never know until you need the data you wrote...

It's worthwhile to run dd+sha256sum or badblocks(8) on suspicious USB drives.
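A rough sketch of the dd+sha256sum approach (the device name /dev/sdX is a placeholder, and this wipes the drive):

    # Fill the claimed capacity with a reproducible pseudo-random stream
    # (an openssl keystream over /dev/zero).
    SIZE=$(sudo blockdev --getsize64 /dev/sdX)
    openssl enc -aes-128-ctr -pass pass:seed -nosalt < /dev/zero 2>/dev/null \
      | head -c "$SIZE" | sudo dd of=/dev/sdX bs=4M

    # Regenerate the same stream and hash it, then hash what the drive reads back.
    # On an honest drive the two hashes match; a fake-capacity drive that wraps
    # or drops writes won't reproduce the data.
    openssl enc -aes-128-ctr -pass pass:seed -nosalt < /dev/zero 2>/dev/null \
      | head -c "$SIZE" | sha256sum
    sudo dd if=/dev/sdX bs=4M 2>/dev/null | head -c "$SIZE" | sha256sum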



It works until you eject the disk, says TFA.


No, that's way easier to accomplish.


I always thought it was funny to make a large imaginary file like this:

    dd if=/dev/null of=huge.img bs=1024 count=1 seek=5242880000
5TB file and instant creation. Doesn't actually take any space.


Yes, they're called sparse files. Most but not all file systems support them.


    truncate -s 100T hugefile.img
Another way to do this.


Linux also has another one.

    fallocate --length 1T 1TB.img
"truncate" is in GNU coreutils, "fallocate" is in util-linux.


They are not the same thing. Check them with 'ls -lsh' and you'll see the sparse file from truncate uses no sectors, whereas the fallocated file is fully preallocated.

If you have 100G free, truncate -s 1T will succeed; fallocate -l 1T will not.
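Easy to see side by side (filenames are arbitrary; fallocate needs a filesystem that supports it, e.g. ext4 or XFS):

    truncate -s 1T sparse.img         # sparse: huge apparent size, no blocks allocated
    fallocate -l 1G prealloc.img      # preallocated: blocks are actually reserved
    ls -lsh sparse.img prealloc.img   # first column shows blocks actually in use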


Ah, yes. I stand corrected. I mixed up the concept of preallocating an uninitialized file with a true sparse file.


I remember a program that promised to allow B&W Macs to show colors. In actuality it used POV trickery to make it look like it was displaying hints of color. (Sadly I don’t remember the name of the program, but I remember being mildly amused by it.)


That works because the RGB receptors in the eye have different transient responses, so by precisely timing flashes of light you can create a color illusion. This is called the Fechner color effect: https://en.wikipedia.org/wiki/Fechner_color


Interesting! I don’t remember the name of the program (it was some shareware program I downloaded in the mid 1990s) but I’m pretty certain it used those Fechner discs to create the illusion.


Perhaps I’m biased because my first assignment as a professional programmer was writing file system utilities, but I’m fairly certain this is the kind of thing you want to make really sure you get right.


Would a 2 trillion dollar software company allow such bugs to persist deliberately? Is it part and parcel of the carrot wrapped up in the next ‘upgrade/update’?

(2 trillion meaning, they have the money to sort this)


The most hilarious file size bug I recently saw is this post from Reddit r/DataHoarder... "I did file recovery on a 2TB drive and came up with 13875051.999 Petabytes of space..." https://old.reddit.com/r/DataHoarder/comments/kbaf16/i_did_f...


Something similar happened to me once - I used faulty ram for about a year (it worked most of the time, and when it didn't, it typically became apparent within a few minutes of booting the system, so it was a minor annoyance at worst). One of the times it didn't work, I noticed all of my programs were being killed by the OOM reaper, even though I had just started the system. After running free in a spare terminal, I realized it was reporting that I had roughly INT_MAX memory in use, and so it was deciding to kill basically everything to try and get back to having a non-negative amount of free memory left. There must have been memory corruption in some kernel data structure somewhere that deals with memory allocation.

I rebooted and it never happened again, but it was pretty funny running free and seeing those ridiculous numbers pop up. I was kind of amazed the rest of the system was even working at all.


I can't believe you willingly used faulty RAM for a year. It could have silently corrupted an important file.


Seems like a common bug where the macOS GUI isn't in "sync" with the system - https://apple.stackexchange.com/questions/409748/why-does-ma...


On my LC III my older brother installed software called Times Two. It doubled the storage space of our SCSI cartridges.

I assume it either wasn't real or was just a compression algorithm or maybe abused data density at a cost of integrity?


Compression programs like this (e.g. Disk Doubler - https://en.m.wikipedia.org/wiki/DiskDoubler) were popular for a while when storage was more expensive. Some of them were legit and could compress most common workloads pretty well. Some others were scams.


It's too bad that the popularity has waned, considering you can still get a lot out of it. Using the new compression in Windows 10 I get to shrink my Steam folder by more than 25%, but being off the beaten path means I have to deal with a bit of jankiness and manually reapplying it every once in a while.


Back in those days, we weren't using file formats that applied some sort of compression natively, so disk compression was effective. Now, most of our space is consumed by compressed media in image/audio/video files. Most of those files are in a compressed format "natively", so they see little benefit from applying another layer of compression.


There's also this old gem: https://www.ioccc.org/1993/lmfjyh.hint

tl;dr: Get around disk quota accounting (on some ancient unix) by storing file contents in file names.
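A toy sketch of the trick in shell (purely illustrative; on a modern filesystem the growing directory would likely itself count against your quota):

    # Stash a file's bytes in file names instead of file contents...
    # (secret.bin is a placeholder for whatever you want to hide)
    mkdir store
    base64 -w0 secret.bin | tr '+/' '-_' | fold -w200 | nl -nrz -w6 \
      | while read n chunk; do touch "store/$n.$chunk"; done

    # ...and reassemble it later from nothing but the directory listing.
    ls store | sort | sed 's/^[^.]*\.//' | tr -d '\n' | tr '-_' '+/' | base64 -d > recovered.bin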



Does anyone remember SoftRAM? Basically it was fraud, but it claimed to double your RAM using software.


SoftRAM was a fraud (https://en.wikipedia.org/wiki/SoftRAM), but RAM Doubler (https://tidbits.com/1994/01/10/ram-doubler/ , https://www.amazon.com/Connectix-R010836-RAM-Doubler-9-0/dp/... "Currently Unavailable") worked, and modern OSes may still use the related technique of compressing memory pages instead of swapping them out, so that pages become available (https://en.wikipedia.org/wiki/Virtual_memory_compression).
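The core observation behind it is easy to demonstrate: typical memory contents compress well, while already-random data doesn't (byte counts are approximate):

    head -c 4096 /dev/zero    | gzip -c | wc -c   # ~30 bytes: a page of zeros nearly vanishes
    head -c 4096 /dev/urandom | gzip -c | wc -c   # ~4100+ bytes: incompressible data grows slightly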


> A number of people have expressed disbelief that such a feat is possible, saying that they’d avoid anything like RAM Doubler because it’s obviously doing strange things to memory, which isn’t safe. The answer to these naysayers is that a program like RAM Doubler either works or it doesn’t – it’s a binary decision.

Cool that the technology was available and weird that the knowledge behind why it would work was not more widespread.

I take issue with the statement here though - it's not binary. It's of course important whether it works in normal situations, but how does it work under extreme memory pressure, or with certain incompressible memory pages? Etc.


macOS definitely uses memory compression; it's probably the only thing making an 8 GB MacBook viable for anything more than pure Safari and Zoom.

https://www.lifewire.com/understanding-compressed-memory-os-...
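You can watch the compressor at work with the built-in vm_stat tool:

    vm_stat | grep -i compress
    # e.g. "Pages stored in compressor" and "Pages occupied by compressor"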


So does Fedora's latest release: a zram0 device, i.e. a compressed RAM drive. And measuring the speed of my /tmp directory, I have a feeling it resides in RAM.
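Easy enough to check (commands are from util-linux/coreutils and output will vary by setup):

    zramctl          # lists zram devices with compressed vs. uncompressed sizes
    swapon --show    # shows whether swap is backed by /dev/zram0
    df -hT /tmp      # a "tmpfs" type here means /tmp really is RAM-backed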


It's funny how memory works. Human memory that is.

When I read the title of this article and thread, I thought, "This sounds a bit like that fraudulent DOS memory doubler. What was it called again, RAM Doubler?"

Oops. Thanks for setting the record straight!


It sort of is real, and not.

You can use your hard drive as memory.

As a kid I found this fascinating, and spent a day getting it working.

Then you learn about thrashing. Oh wait, the disk drive is a million times slower than RAM.


You can always download more RAM. https://downloadmoreram.com/




Managing disks with Disk Utility has always been a buggy process in my experience.


My favorite is getting GUID partition maps and FAT partitions wrong. I had partitioned a drive incorrectly on a specific Mac. That Mac had no issues using the drive; however, no other Mac could mount it. My second favorite is formatting a drive in Linux without defining a partition.



