"But American business practices, they are very strange. Very strange." said the Japanese engineer, when he figured out they were hiding from the boss ...
It's surreal when you take a step back for a second, like marathon runners at one of those drinking stands, and take in how far we've come in the last 50, 30, and even 15 years. Then you have to wonder what's next, because computers keep getting skinnier and faster than the original designs, and we all know that eventually something else will take over. I wonder who will do it, what it will be like, and how it will change the world even more.
We aren't just going to stop once our laptops are paper thin and lightning fast (ha ha), so what will it be? The future is a vast frontier even today with how far we've come.
I probably just lack imagination, but I don't see the general model of a digital computer with a Harvard architecture going away for a very long time. Rather, our focus will be on packing more power and complexity into increasingly smaller sizes, while ubiquitously integrating them into virtually everything imaginable (in short: ubiquitous computing is the future).
Although we likely will see huge world-changing advancements that will be taken for granted in the future, much like the integrated circuit is now, the concept of a computer as we know it today probably won't disappear.
You need(ed) to drain any lingering charge in the CRT. It was easier and safer to do this with a dedicated tool than e.g. with a screwdriver, but the latter worked for one-off jobs.
Ground the probe or screwdriver, then slide it under the rubber cup shielding the... good-grief, I've forgotten which. Anode connection of the tube.
The tube/circuit could/can carry considerable charge for a significant period of time. Simply unplugging the mains power lead was not sufficient protection for a person working inside the case.
But, the case was "easily" open-able -- aside from complaints about the non-standard screw heads and difficulty separating the halves if you didn't have a tool to insert into the seam and pry them apart.
And once you were inside, things were pretty accessible. There was even an entire cottage industry documenting and recommending/selling replacements particularly for analog components on the power board, some of which were perhaps a bit under-spec-ed in the original designs and prone to noticeable deterioration as well as outright, total failure.
I worked in a TV repair shop in the early 90s. The first thing they taught me was to discharge the CRT. Leave the TV plugged in but off at the mains, get two big screwdrivers, touch the end of one on the outside of the tube (which is coated in graphite and earthed), slide the tip of the second under the anode cap, and bring the two together. It'll discharge with a sharp crack. We had to do that every time we started work on a TV, just in case it'd been switched on.
That was a fun place to work. The guy who ran the place hated people touching his tools so he'd charge up a fat capacitor and drop it in his toolbox. Anyone rummaging unwarily soon learned a lesson.
I'm not saying it was impossible to safely open these machines. After all, there were plenty of people safely working in TV repair shops. If you knew what you were doing and weren't sloppy, you would be perfectly safe.
The problem is, not everyone would know what they were doing. At least with current tech, the worst that can happen is you ruin your computer and not your life.
Well, perhaps in a meta sense, the older designs "repaired" (us) by thinning the herd. ;-)
On the one hand, I take your point. On the other, people wrote books documenting the whole thing and you could replace that defective capacitor yourself. (Significantly less expensive than having a technician swap the power/video board, particularly if you did not have AppleCare. And you could replace it with a more robust part, so that you were not in the same circumstances again 6 months or a year later.)
I guess with the relative decline in price of whole subsystems -- in some cases -- these days, the greater availability of those, and -- again, in some cases -- the personal safety if not always ease of swapping them out, something might be said for "increased repairability". On the other hand, I don't think the Classic Mac design rates a 0/10. Lots of things back then could kill you. These days, you can't even purchase a real chemistry set for your kids, or in some cases hardly glassware for yourself.
You might think a laptop is harmless inside, but it actually contains 500-700 volts to power the LCD backlight. I learned this while trying to fix my display - fortunately I found out by reading, not by getting shocked. Unlike when I stuck my finger across the flash capacitor in a camera - yow! The point is, even harmless-seeming battery-powered things can have surprisingly high voltages inside, so be careful.
You might think a laptop is harmless inside, but it actually contains 500-700 volts to power the LCD backlight.
How many CCFL-backlit laptops are they shipping these days?
EDIT: Also, high voltages, on their own, aren't necessarily dangerous. Tesla coils can run into the tens of kilovolts, but can be perfectly safe if they're properly constructed.
For all the sarcastic responses to this post, I can't find a single record of a death by CRT. I found lots of references to people accidentally touching the HV side of CRTs and receiving a painful shock but none of the shock causing any lasting issues.
Death by CRT is definitely possible in theory, and the lack of real-world examples could be an issue of shoddy reporting (i.e. a death recorded as cardiac arrest due to electric shock, without the CRT named as the specific cause), but I'd think that across the millions and millions of CRTs sold there'd be at least one concrete report of a death.
Amazing to think that was an 8MHz 68000. I can't even compute how many times faster a new Mac Pro is today (even if you ignore the GPU). Or even how many times more powerful my iPhone 5 is than this.
That 68000 could manage about 1 MIPS. No floating-point unit, so the FLOPS will be painfully low.
The low-end Mac Pro has four cores at 3.9GHz each. I believe they'll do in the neighborhood of 3 instructions per clock cycle at peak performance, so that means the total is 3.9 x 4 x 3 x 1000 = 46,800 MIPS. Roughly. So in the neighborhood of 50,000 times faster when looking at the CPUs.
The GPUs make it outright ludicrous. The low-end Mac Pro does 2 teraFLOPS on the GPUs. Even if we're charitable to the 68000 and consider its 1 MIPS roughly equivalent to 1 MFLOPS, the Mac Pro is still two million times faster there.
And of course the Mac Pro is substantially cheaper when adjusted for inflation. Although it doesn't come with a display, nor any input devices.
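If you want to poke at the assumptions, here's the back-of-envelope arithmetic above as a quick Python sketch (the 1 MIPS, ~3 IPC, and 2 TFLOPS figures are the rough estimates from this comment, not measurements):

    # Back-of-envelope speedup estimate, using the rough figures above.
    m68000_mips = 1.0                     # original Mac 68000: about 1 MIPS
    mac_pro_mips = 3.9e9 * 4 * 3 / 1e6    # 3.9 GHz x 4 cores x ~3 IPC, in MIPS
    cpu_speedup = mac_pro_mips / m68000_mips

    gpu_flops = 2e12                      # low-end Mac Pro GPUs: ~2 teraFLOPS
    m68000_flops = 1e6                    # charitably treat 1 MIPS as 1 MFLOPS
    gpu_speedup = gpu_flops / m68000_flops

    print(f"CPUs: ~{mac_pro_mips:,.0f} MIPS, ~{cpu_speedup:,.0f}x the 68000")
    print(f"GPUs: ~{gpu_speedup:,.0f}x the 68000 (MIPS treated as MFLOPS)")

That prints roughly 46,800 MIPS and a ~2,000,000x GPU ratio, which is where the "50,000 times" and "two million times" figures come from.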
And yet my Amiga and my ST display a menu as soon as I click, can keep up with my typing, and scroll documents without lag. My MacBook Pro can do none of these things.
+1 I can remember coding up tricky algorithms on 1MHz embedded processors back in the day which would be instantaneously responsive. Today, although we have supercomputers in our pockets that can do decent speech recognition and other extraordinary things, many really simple I/O operations seem to lag. Is it really possible that all that performance can sometimes be effectively neutered simply by too many layers of software? If not, what else could it be?
Software randomness and complexity made OSes 'bloated', tapping into the immense resources available. Up until Linux 2.4, computers still had a feeling of smallness: there weren't many abstraction layers to cross, and response times were short[1]. Now, even with systemd fast boot, V8-JITted JavaScript, and performant machines, you notice a slight delay.
[1] Of course, pushing a compositor-less full-screen CGA or even VGA display doesn't require the same architectural complexity.
The 128k Mac was very usable (and a thing of magic and joy to me as a teenager). The only bad part was swapping floppy disks back and forth. That could give you disk swapper's elbow.
Think about the CPU landscape in 1984. You had all these 8-bit home computers running at sub-2MHz. The IBM PC was 16-bit, running at 4.77MHz, and most of those were actually using 8-bit buses (the 8088 vs. the 8086). The super-expensive IBM PC/AT was the top of the line: a 6MHz 80286.
In comparison to all of this, a processor running at 8MHz that could do full 32-bit operations (though, yes, the external bus was half that), with sixteen 32-bit registers and no segments... mind-blowing!
It's not quite that cut and dried. The 80286 was actually significantly faster than the 68000 at what was, at the time, considered typical code[1]. The 68000 had a much cleaner and more forward-looking ISA, of course, but it paid a cost for that 32-bit architecture in more elaborate microcode that the 286 didn't need to worry about.
[1] That is, things that fit in mostly-16-bit data sets. Once framebuffer manipulation became the dominant operation a few years later, that status would flip. Nonetheless if you were trying to compile your code, model your circuit or calculate your spreadsheet as fast as possible in 1984, you'd probably pick a PC/AT over a Mac (if you couldn't get time on a VAX).
Yes. The 68000 was very nice to program for, but internally it was obviously from an earlier era/too far ahead of its time/a big pile of shit (delete as you see fit). You'll hunt far and wide for an instruction that takes less than 4 cycles, long instructions take more time again, variable-width shifts take 2 cycles per shift, a division can take 150 cycles, and with the effective address calculation timings on top things can really mount up. (See, e.g., http://oldwww.nvg.ntnu.no/amiga/MC680x0_Sections/mc68000timi...)
If you look at the cycle counts for 8086 instructions - see, e.g., http://zsmith.co/intel.html - they're much closer to the 68000 ones. Compared to the 68000, the 286 is just on another level.
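To give a feel for what those cycle counts mean on an 8MHz part, here's a tiny Python sketch using the rough figures quoted above (treat the output as upper bounds, since effective-address calculation adds further cycles on top):

    # Rough throughput limits for an 8 MHz 68000, using the cycle counts above.
    CLOCK_HZ = 8_000_000  # original Mac clock speed

    approx_cycles = {
        "cheapest instructions": 4,   # hardly anything runs in under ~4 cycles
        "worst-case division": 150,   # DIVU/DIVS can take around 150 cycles
    }

    for op, cycles in approx_cycles.items():
        print(f"{op}: ~{cycles} cycles -> at most ~{CLOCK_HZ / cycles:,.0f}/second")

So even the cheapest instructions top out around 2 million per second, and a division-heavy loop could drop to the tens of thousands per second.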
Another cost of the "more elaborate microcode" is that there was no way to resume or restart execution of the current instruction after a bus error exception. The 68010 fixed that by dumping its internal state onto the stack. I wish that after the 68010 was released, Motorola had promoted it for new development and the original 68000 only for situations where the system software could not be modified (since fixing the problem required changing the exception stack frames). Motorola had some patents describing how it worked: https://www.google.com/patents/US4493035
This is true, but the '10 wasn't released until the Mac was well into development (and the Lisa was nearing release). Motorola sort of missed the window there. And in any case no 68k Macs would ever end up making significant use of an MMU anyway. By the time that became possible the platform had moved on to 68020/30 parts.
Obviously all the Unix vendors jumped on the '10 instantly. The MC68000 itself was never dominant there.
It's amazing to me how the later Apple platinum color and aluminum make the original Mac look sickly green. I wonder if that was the intent of the platinum change.
Plastic cases often yellow over time due to flame retardants in the plastic, which is the cause of the sickly color. The Mac case was originally a much more pleasant beige than it appears in the teardown photos. For more info on case yellowing, see http://hackaday.com/2009/03/02/restoring-yellowed-computer-p...
A hardware note: the iFixit teardown points out the "74LS393 Video Counter" chips - these are just plain TTL binary counter chips, not special video chips, so I don't know why they're pointed out as notable. The 6522 Versatile Interface Adapter (bottom center) seems much more notable.
Thanks. That actually answered my main question: how did they manage to remove the big scary red wire? It shows how to use a discharge tool connected to ground to discharge the anode wire. Even so, I'd do some more research on servicing CRTs before attempting such a thing myself.
"But American business practices, they are very strange. Very strange." said the Japanese engineer, when he figured out they were hiding from the boss ...