A spec is a contract between programmers, and in the long run, following specs is good for users. Letting Adobe ignore specs and then bending over backwards to accommodate them doesn't seem like a good policy.
And even if following the spec weren't better for users in the long term, it is technologically superior, and Fedora is free to choose the technologically superior alternative over pleasing the masses if they want. Despite what Linus suggests, there is no Ten Commandments of software that dictate the rules here.
I guess Linus could have used this opportunity to convince a naysayer like me why I'm wrong, but instead he was just insulting and didn't address the real points at all.
So, glibc changes, breaks adobe, and it is adobe's fault? Even if adobe was relying on bad behavior, it was glibc's bad behavior it was depending on.
These are the kinds of problems that finally convinced me to move from Linux to OS X. I get the Unix without the egos (just the fanbois, but I can usually ignore them ;).
So, glibc changes, breaks adobe, and it is adobe's fault?
This reminds me of Joel Spolsky's article, "How Microsoft Won the API War:"
"There are two opposing forces inside Microsoft, which I will refer to, somewhat tongue-in-cheek, as The Raymond Chen Camp and The MSDN Magazine Camp.
Raymond Chen is a developer on the Windows team at Microsoft. He's been there since 1992, and his weblog The Old New Thing is chock-full of detailed technical stories about why certain things are the way they are in Windows, even silly things, which turn out to have very good reasons. . .
The other camp is what I'm going to call the MSDN Magazine camp, which I will name after the developer's magazine full of exciting articles about all the different ways you can shoot yourself in the foot by using esoteric combinations of Microsoft products in your own software."
He thinks the MSDN camp won, and that's bad, and he contrasts with, say, Apple in historical times:
"A lot of developers and engineers don't agree with this way of working. If the application did something bad, or relied on some undocumented behavior, they think, it should just break when the OS gets upgraded. The developers of the Macintosh OS at Apple have always been in this camp. It's why so few applications from the early days of the Macintosh still work. For example, a lot of developers used to try to make their Macintosh applications run faster by copying pointers out of the jump table and calling them directly instead of using the interrupt feature of the processor like they were supposed to. Even though somewhere in Inside Macintosh, Apple's official Bible of Macintosh programming, there was a tech note saying "you can't do this," they did it, and it worked, and their programs ran faster... until the next version of the OS came out and they didn't run at all. If the company that made the application went out of business (and most of them did), well, tough luck, bubby.
To contrast, I've got DOS applications that I wrote in 1983 for the very original IBM PC that still run flawlessly, thanks to the Raymond Chen Camp at Microsoft."
The thing is, Apple is still mostly like that. Want the new hotness? Upgrade. I've been using OS X since 2004 and have trouble remembering all the stuff that broke because of OS updates (NetNewsWire and printing were particularly common). I don't know if it's because of the MSDN camp in Apple, but I do find it ironic that you cite things breaking as a reason to move to Apple. There are plenty of them, but I'm not sure that's one.
I've got Mac apps from the late '80s that ran fine on the last non-Unix Mac OS, and ran fine in the Classic environment on OS X up until that was finally dropped.
Apple did in fact do a lot of bending over backwards for compatibility, at least when it was a major developer breaking the rules. System 7 had special code in the memory manager for Microsoft applications to make them work with the 32-bit memory manager and virtual memory.
That said, Microsoft does do an outstanding job in this area. I remember when Win98 was coming out, we were not in the beta program at work. We got a call from Microsoft telling us that a VxD of ours was not working on Win98, telling us what assumption it was making that was no longer valid, and inviting us into the beta program. We were not a large, well-known company. That was pretty cool.
Oddly enough, MacOS had better back-compatibility with the old applications than the newer ones. It seemed like pretty much every app broke at least once between Systems 7 and 9, even while the 1985 apps ran fine. (Perhaps the developers were being trickier in how they abused the OS.)
Incidentally, I think this figured into Apple's thinking regarding limited back-compat. They had already unintentionally forced a number of application upgrades, why not do something positive like move everyone to a new OS/CPU/API in the process?
"I do find it ironic that you cite things breaking as a reason to move to Apple."
To be clear, things breaking wasn't what I was referring to. Generally, things worked very well on Linux (except [at the time] suspending and wifi on my laptop) and I know they've gotten better.
On the other hand, there was far too much focus on software for the developer's sake and not software for the user's sake for my tastes (even as a developer). I'm fine with the people who are developing the (almost exclusively) open source software developing it for themselves, but it was too much headache for me, so I did the proverbial "voted with my wallet".
I still use Linux a lot, it is the platform my startup is deploying on, and it amazes me every time I ponder the changes to the entire community since kernel version 0.9'ish and slackware when I first started using Linux. I actually trust the Linux community far more than Apple to keep things working over a longer-term timeline (which is great for servers, but I don't care much about on my desktop).
Adobe had a bug which worked because of a Bug in memcpy(), the bug was fixed and Adobe's code broke.
win95 had a similar case with the game Civilisation: they actually put code into win95 to detect the game and change the way the OS worked. That doesn't sound like a good solution.
Read the spec for memcpy: memcpy's behavior on overlapping memory regions is undefined - not "required to corrupt memory", but undefined. (C99 even encodes this in the prototype: both pointer parameters are restrict-qualified.) Changing memcpy from not breaking on overlapping memory regions to breaking does not fix any bugs.
Adobe should not rely on non-spec-defined behavior, but there's no reason why glibc <i>should</i> be making this change without making a major version number change.
What? The whole raison d'être of Windows 95 was backwards-compatibility with primordial PC junk. The entire thing was a hack from top-to-bottom, far beyond a workaround for a particular game.
Also, to quote Linus:
"And what was the point of making [an OS] again? Was it to teach everybody a lesson, or was it to give the user a nice experience?"
> I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it [...] the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it.
Hmm. Well, I was assuming that glibc was following the spec all along but just changed some implementation detail that mattered because Adobe wasn't following the spec all along.
I think there's an argument to be made that the way an API is implemented is an implicit contract that ought to be upheld. But that's not the argument I see being made.
Anyway, this level of detail is below the scope of the "specs vs pragmatism" debate that's going on.
I think there's an argument to be made that the way an API is implemented is an implicit contract that ought to be upheld.
That sounds wrong to me. If your consumers have to rely on assumptions beyond what's provided in the contract of the API, your API is leaky and/or broken.
[edit] I understand the pragmatism necessary in the case of the glibc issue, but to clarify my point I disagree with the general assertion I'm quoting.
Right, the memcpy() API is leaky because it leaves some things unspecified. You can deduce the implementation by providing various inputs to it that have "undefined behavior". This is a common problem in C, and there are usually functions that avoid it (Linus recommends memmove).
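A minimal sketch of the distinction being discussed: shifting a string within its own buffer means the source and destination overlap, so memmove() is the right call; memcpy() there would be undefined behavior. (The helper name `shift_right` is mine, just for illustration.)

```c
#include <string.h>

/* Shift a NUL-terminated string right by k places within its own
 * buffer. The source and destination regions overlap, so memmove()
 * (which is defined for overlap) must be used, not memcpy(). */
void shift_right(char *s, size_t k) {
    size_t n = strlen(s) + 1;   /* include the terminator */
    memmove(s + k, s, n);       /* defined even though s+k overlaps s */
}
```

With a glibc that copies backwards, the equivalent memcpy() call might even appear to work, which is exactly the trap Adobe fell into.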
The trouble with this whole argument is that while the user may not care today, things like standards and defined interfaces are all about keeping things working tomorrow. Your user will surely be just as upset at something breaking tomorrow as they are today, and it's increasingly likely that such breakages will (a) occur and (b) cost more to fix, the longer you implicitly support deviations from the standards.
In this particular case, however, simply making memcpy() handle overlapping moves correctly would not break anything. Well, I suppose there's a theoretical possibility that someone is counting on the old behavior in the backwards-overlapping case, but that would be bizarre; surely this is the kind of code that should get broken, if any of it even exists.
If memcpy() had been fixed 30 years ago to do overlapping moves correctly, as it could and should have been, that would have been the end of it; we would not be having this conversation.
How hard can it be for Adobe to write standards conforming code?
The user doesn't care, but professional software developers, which I assume Adobe's developers are, should make it a priority to follow the relevant standards.
Pretty hard. C is a minefield of undefined behavior. Signed integer overflow is undefined behavior, which means it's perfectly valid to call abort(), wipe your hard drive, then light your system on fire. Or to have the number quietly wrap around.
Oh, and it's perfectly fine for me to change the behavior from one to the other, or even have a lookup table of random responses to undefined behavior. Because, like, you're not following the standard. And it's so easy.
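To make the overflow point concrete, here's one standards-conforming way to add two ints: check the bounds before adding, rather than testing `a + b < a` afterwards (a post-hoc check is itself undefined behavior for signed types, and the compiler may delete it). GCC and Clang also offer `__builtin_add_overflow` for this.

```c
#include <limits.h>
#include <stdbool.h>

/* Add a and b into *out, returning false instead of overflowing.
 * The comparisons use only values known to be representable, so no
 * undefined behavior can occur on any input. */
bool safe_add(int a, int b, int *out) {
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return false;   /* result would not fit in an int */
    *out = a + b;
    return true;
}
```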
For what it's worth, I agree. Every time I hear about problems that arise from programming in C, with its undefined behaviour etc., I think to myself, there must be a better way to do it. But I don't know of any way that wouldn't involve scrapping 90%+ of software we use every day.
It also indicates that Adobe don't run their code through valgrind, which would have picked this problem up.
Considering that flash is (a) security critical and (b) often full of security bugs, you'd think they might run valgrind over it once in a while.
Entirely Adobe's fault this one.
BTW with Firefox 4 the need for flash has virtually gone. All the popular video sites can play most of their videos using the native video support in the browser.
Generally no, out of the box it doesn't, things like JITs and GCs can confuse it, however it's got a bunch of flags and config options and what not to allow you to use it.
It's always a trade-off, but in this case the cost is minimal. If your program is limited by the speed of memmove/memcpy(), and you absolutely must copy (rather than alias, or whatever), you probably want to use a 128-bit aligned, widely unrolled SSE copy or something like that. That is, take advantage of the constraints of your precise situation. You can't do that with an API as generic as memcpy().
In newer versions of glibc, memcpy() does take advantage of SIMD instructions if they are available. It works something like this:
The memcpy() symbol is marked as an indirect function (IFUNC) in the ELF file. On the first invocation, the dynamic linker actually calls a resolver function and treats its return value as a function pointer. This function pointer is then used for linking, so that subsequent invocations will call that pointer instead.
The memcpy() function in glibc then simply checks which SIMD extensions are available and returns a pointer to the appropriate real memcpy to use.
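The same dispatch trick is available to application code via GCC's `ifunc` attribute (an assumption here: GCC or Clang on an ELF platform such as Linux). This toy version always resolves to a generic copy; glibc's real resolver would probe CPUID and return an SSE/AVX variant instead. All names here (`fast_copy`, `resolve_copy`) are made up for the sketch.

```c
#include <stddef.h>
#include <string.h>

/* The ordinary implementation the resolver can fall back on. */
static void *copy_generic(void *dst, const void *src, size_t n) {
    return memcpy(dst, src, n);
}

/* The resolver runs once, at load time, and returns a pointer to the
 * implementation to use. glibc's version would check which SIMD
 * extensions the CPU supports before choosing. */
static void *(*resolve_copy(void))(void *, const void *, size_t) {
    return copy_generic;
}

/* fast_copy has no body of its own; the dynamic linker binds it to
 * whatever resolve_copy returns on first use. */
void *fast_copy(void *dst, const void *src, size_t n)
    __attribute__((ifunc("resolve_copy")));
```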
Yep, except it still needs to check the alignment of the pointers passed into the function to see if it can use the aligned mov instructions. If you're that stuck for speed, you'll want to make sure all your buffers are already aligned, and then call a function that doesn't do any checking.
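Guaranteeing that alignment up front is straightforward with C11's `aligned_alloc` (the wrapper name below is mine): allocate every buffer on a 16-byte boundary and the copy routine can use aligned 128-bit loads and stores with no runtime checks at all.

```c
#include <stdlib.h>

/* Allocate a buffer suitable for aligned SSE loads/stores.
 * aligned_alloc is C11; the requested size must be a multiple
 * of the alignment, and the result is freed with free(). */
void *alloc_simd_buffer(size_t size) {
    return aligned_alloc(16, size);
}
```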
(And FYI, I'm a diehard Linux user)