Hacker News | dnlrn's comments

In the west, Riichi Mahjong seems to be the most well-known form of Mahjong. After living in China, I started to notice that the most popular form of Mahjong here is called "Sichuan Bloody Rules". It is completely different from other forms of Mahjong:

1. The game doesn't end after the first person finishes

2. Many tiles like flowers and wind directions are removed from the game

3. At the start of the game you have to choose one suit which you ban from your hand. Your final hand isn't allowed to contain any tile of that suit.

4. You are not allowed to 吃 chi from players

5. There is a pretty complex scoring system

I have the impression that this form of Mahjong is the most unique one. The rules are very different from those of most other forms, including Riichi Mahjong, and make for really interesting games. So if you're interested in Mahjong, feel free to try it out.


I think your sample might be a bit skewed. In the west, I find people typically associate "mahjong" with mahjong solitaire.


VKontakte, a Russian Facebook-like site.


I don't get what the big deal is. Does Linux add support for newer Intel and AMD chips in old kernels? No. Does Apple add support for newer Intel and AMD chips in older versions of Mac OS? I'm sure no. So why does Microsoft have to do something the other companies aren't doing either? I mean it's not like they actively prevent older Windows versions from running on these chips; it's just that they don't add support for the newest chip features.

Sometimes when I see these newspapers bashing Microsoft I question whether they even think about what they are writing before pressing the publish button. Headline: Microsoft is doing what all other OS companies are doing too.


> Does Linux add support for newer Intel and AMD chips in old kernels? No.

Yes. All big distributions backport drivers to older kernels they still support. And Microsoft still supports Windows 7 and 8, and has to do so until 2020/2023 or heads will roll, no matter what their marketing wants people to think.


Windows 8 has almost no corporate adoption, so the interesting data point is 2020 and Windows 7, which is just a bit more than three years away. Migration targets should be somewhere around 2019 (last minute, really), but procurement of new machines with Windows 10 builds might start being rolled out in the first half of 2019 with most corps, which is practically just two years away. Factor in that the next generation of chips still needs some time to get to market, and you have a rather narrow gap, especially with corps that have a more conservative hardware policy. I don't see the issue here. I just hope that the Windows 10 rolling update will reduce the enterprise upgrade anxiety.


My guess is that corporates will stick to Win7 the way they stuck to WinXP. I just got upgraded last month from XP to 7...

Though the 32bit to 64bit was probably a big part of the big step between XP and 7.


Small nitpick: the 2020/2023 dates are for extended support, where they only provide security patches. But you're right in that Windows 8.1, at least, is still in mainstream support, so they will accept feature requests (until 2018).


Microsoft supporting Windows 7 and 8 does not mean implementing support for all future HW. They had HW requirements on launch and Microsoft supports that set of HW.


Supporting Windows 7 on hardware from that era and making it forward compatible are two different things.

If Windows 7 came with your machine you're covered. If you're looking to build or buy a new system you need to go Windows 10. If you don't like that, you have other non-Windows options.


Red Hat does "hardware enablement" for newer chipsets in older versions of RHEL, by backporting the necessary changes to the older kernel.

However your point is still correct: RHEL 6, which will support these new chips, was released a year after Windows 7. RHEL 5, released two years before Win7, is no longer getting hardware enablement, only security fixes and other critical stuff.

https://access.redhat.com/support/policy/updates/errata


> However your point is still correct: RHEL 6 which will support these new chips was released a year after Windows 7. RHEL 5, released two years before Win7 is no longer getting hardware enablement, only security fixes and other critical stuff.

How does that make his point correct? There's still Windows 8, which isn't getting support either, and which was released two years after RHEL 6.

Windows 8.1 was technically also its own version and released 3 years after RHEL 6, but I'd even be okay with ignoring that one.


Whether or not you want to ignore it, Windows 8.1 is still in the full mainstream support phase until January 2018. This is the equivalent of Red Hat's hardware enablement phase, and it's totally ridiculous that _at least_ that version is not getting support for the newer hardware platforms.


> Does Linux add support for newer Intel and AMD chips in old kernels?

The kernel devs may not, but distros backport new stuff to old kernels constantly. For example, RHEL/CentOS 6 is kernel 2.6.32, released 7 years ago. But it supports modern CPUs.


More succinctly: Linux doesn't have this problem at all because it's open source. You can stitch together the kernel in any way you want, and layer any user space on top of that to get an OS.


It's actually not a big deal but with the issues Microsoft has had with Windows 10 over the past year, it makes for good media fodder to stir up the masses. When this first hit the tech news sites earlier this year, my answer to anyone shouting "OMG I can't have my Win7 on my Skylake, die Micro$oft!" was to politely ask them to attempt to install Windows 98 on a Core i-series machine. You could see the gears turning in their skulls and revelation would dawn upon them that yes, this has happened before, many times, and is a perfectly normal progression.

It's not even limited to Microsoft; you can't install Mac OS X 10.7+ on anything older than a 2nd generation Core2 Duo, and with good reason. OS X 10.6 ran like crap on the first gen Core Duo and Core2 Duo machines, despite being fully supported by Apple.

There comes a time when the software exceeds the capabilities of the hardware, and this is no exception.


my answer to anyone shouting "OMG I can't have my Win7 on my Skylake, die Micro$oft!" was to politely ask them to attempt to install Windows 98 on a Core i-series machine

Done that, and it works (as well as Win98 can, in any case.) Other apps from around that time will work too. Of course it can only use one core, but it's interesting to see just how ridiculously fast even a single core can be after 10 years of hardware improvement if the software hasn't "grown to fill the space".

https://www.youtube.com/watch?v=YOWzorOD-II (not me)

It's not even limited to Microsoft; you can't install Mac OS X 10.7+ on anything older than a 2nd generation Core2 Duo, and with good reason. OS X 10.6 ran like crap on the first gen Core Duo and Core2 Duo machines, despite being fully supported by Apple.

That's the opposite situation; newer software on older hardware.


Sorry, I should have expanded on the Mac example. My point was that just as you wouldn't expect Windows 10 or the latest Mac OS to run on ancient hardware, you can't expect the latest hardware to continue support for old software past a certain point. x86 hardware and operating systems aren't created in different universes, they are designed alongside one another to work together.


They are two entirely different situations. Non-techie enterprise users have no affinity for their processors, only the applications they rely on and the interface they're familiar with.


> There comes a time when the software exceeds the capabilities of the hardware, and this is no exception.

Are you saying that Windows 7 / 8 / 8.1 exceed the capabilities of latest Intel and AMD chips?


No, that's nonsense.

No one liked Win 98, and it was comprehensively EOL'd by Win XP - which always ran fine on iX machines.

Now, many people still prefer Win 7 to the creeping user-hostile horror that is Win 10 - if only because it's possible to use Win 7 with relative confidence that an update won't suddenly kill your machine, or your webcam, or your Kindle, or whatever else MS manages to screw up in the next year or two.

That's not a trivial difference. MS+Intel are attempting to force users towards an OS that is inherently broken, and - given the level of competence on display in the Windows division at the moment - is unlikely to ever work reliably.


> No one liked Win 98,

As I recall Win 98 and 98SE were hugely popular. People may not have loved them, but I don't think it was generally disputed that they were a huge improvement over Win 95. In fact, hardly anyone liked Win ME, and many clung to 98 until XP arrived, much like people clung to XP and avoided Vista until Win 7 arrived.


My dad would still use 98 today if he had the choice. He's increasingly hated every version of Windows since 98.


"No one liked Win 98, and it was comprehensively EOL'd by Win XP"

That's not true at all. Windows XP was based on NT, so had a very different technical base than 98. It had a completely different driver model, and could not run real-mode DOS apps. There was tons of hardware and software that it couldn't run. There were lots of people running 98 for a very long time to run legacy apps and hardware after XP came out. There probably still are.

The big difference is that those machines aren't on the internet, so no software maintenance is required.


This is just completely wrong on every level. Plenty of people liked Windows 98. There was the general "I don't want to upgrade" crowd, but more specifically, there were the gamers who wanted to keep playing the games they had already paid for. Windows XP wasn't great for that.

That gamer inertia was powerful enough that Windows 98 got DirectX 9 in December 2002, well after the release of Windows XP, and Microsoft released their last DirectX on 98 in December 2006.


> No one liked Win 98

People freaking loved win98 once a few service packs got released, especially if they had the plus pack.

They didn't like Windows ME. Or Vista.


If memory serves, Win 98SE fixed many vanilla 98 problems before the love. Same with XP pre service packs; SP1 fixed many stability issues, SP2 fixed many backwards-compatibility issues (or vice versa). The same comparison can be made about the Vista & 7 (aka Vista SP7) releases. What also came with each new release was bloat, poorly implemented features (some initially, others perpetually) and phone-home functionality: XP=4, Vista=32, 7SP1>40. Win NT was the last OS MS created; it's been feature-richer iterations ever since.


Revisionist bullshit. Every new version of a Microsoft OS has been met with derision and complaints that "the previous version was the best", up until the next version is released. People thought XP was going to be the ME release of Win 2000 when it first came out.


You're forgetting that many versions sucked before the first or second service pack. MS tends to release things ~2 years before they are ready. Sometimes they still suck after those service packs.

Case in point: at release Win 98 was rather iffy, 98SE was solid. XP was much better after SP1, and Win 7 was OK to start with but definitely got better.


AFAIK the requirement for 10.7 is just 64-bit support, the requirements for 10.8 are higher.


You seem to have this the wrong way round.

> There comes a time when the software exceeds the capabilities of the hardware, and this is no exception.

This is not "new software on old hardware". OK, it's happened before with other companies, for example Sony saying PS3 games won't run on the PS4. But that kind of nonsense is why you don't buy Sony hardware, and their stock price is trundling along at record lows.

My advice is more specific. Anyone holding AMD or Intel stock. Sell now.


It's not what Microsoft is doing, it's what they have decided not to do, which is support their older operating systems. Apple doesn't have this problem because their supported hardware is very narrow in scope, usually only works for a small range of OS releases, and no one tries to install Leopard on their brand new Macbooks.

As I've said before, the issue is with the removal of legacy hardware. Intel removed EHCI support for USB in its chips with Skylake in favor of xHCI. This has been a long time coming and not a malicious or insidious act. Windows 7 doesn't natively have xHCI support and Microsoft isn't adding features to Windows 7. So, on Skylake, you can't install Windows 7 via USB and you can't use a USB keyboard or mouse during installation because the USB drivers are incompatible.


That said, you can probably make your own slipstreamed ISO with the drivers included and set up Windows 7 that way.


Probably, and it will not be supported by Microsoft or Intel/AMD.


They will actively refuse to issue new Windows 7 licenses to OEM on these new chips. They are kind of actively preventing older Windows versions running on these chips, yes.


That kind of makes sense. Why would they issue licenses for hardware that they know won't work?


I guess a more expensive workaround would be to buy Windows 7 retail, if it's true that it will still work and just won't use the new features of the chip.


Yeah, but the problem with installing a retail Windows 7 on OEM hardware (mostly laptops) that wasn't meant for this OS is that it's mostly custom hardware with no drivers available.


I was recently forced to drop Windows onto a partition on my main laptop. As 8, which came with it, is a PITA, I decided to put 7 on instead.

Install goes OK, system comes up and I have no Wireless Drivers. Fine. I plug in Ethernet. Doesn't work. Urgh. Ok, grab drivers on phone, use phone to transfer it. Nope, won't recognise anything in the USB3 ports.

I end up booting into linux and copying the driver install files directly onto the windows partition. If the drivers hadn't been available for windows 7 I would have been completely stuck.


I think you can just install a pirated version, run the Windows Genuine Advantage check (or whatever it's called). It will report that it's a bootleg version then give you an offer to pay for the license to make it legal.


Why bother? If Microsoft doesn't want my Windows 7 money, they won't get it.


The two OSes you mentioned are used by less than 10% of the market - 6.49%, according to the link below. Windows 10 is used by 23% of users.

https://www.netmarketshare.com/operating-system-market-share...

There are many people who consider the older OS versions to be "better" than the latest Windows X. What "better" means is quite subjective - familiarity, stability, privacy, compatibility, etc. I really don't know, because I haven't used Windows in many years.

Finally, there's the "pirates" - hundreds of millions of people who run older versions of Windows, which are easier to crack because they're not as 'cloud-enabled' as Windows X.

But all in all, I agree that it's not such a big deal.


>Finally, there's the "pirates" - hundreds of millions of people who run older versions of Windows, which are easier to crack because they're not as 'cloud-enabled' as Windows X.

Windows 10 is just as easy to crack as windows 8.1. Just install a local KMS server and you're good to go.


Windows 10 is just as easy to crack as windows 8.1. Just install a local KMS server and you're good to go.

I wonder if the obligatory updates will eventually detect and defeat those cracks, or have the pirates thought of that too? (Or perhaps, given the aggressive push by MS to get everyone on Win10, even giving it away for free, they won't really care.)


Forced obsolescence is kind of a big deal.


Dunno. With Linux being a loosely coupled collection of parts, it is quite possible to upgrade the kernel without disrupting userspace.

And with Apple, you are dealing with one vendor supporting a small set of their own products. The only way to get into this issue would be with a hackintosh rig.


>So why does Microsoft have to do something other companies aren't doing either?

The bigger the market share, the bigger commitment to users. Microsoft doesn't "have" to ensure their older OSes work on newer chips. That sentiment is just another way of indicating that the userbase may take action (like choose to keep systems even longer, leading to MSFT having to support Win7 even further into legacy than they did for XP).

As well, consider the enterprise customers who are entrenched in Win7 but absolutely need new machines for expansion, but can't commit to replacing all machines. What was simply buying new machines now turns into supporting a new OS in addition.


This! Exactly this. When Apple does it, that's fine, but when Microsoft does it, they are awful. And yet you still get to use tons of old software on W10. I still remember when El Capitan came out last year and Xcode suddenly stopped working. We all had to update everything. No explanations and no choice in what you're going to run.


I agree, this feels like Microsoft bashing.

All they're saying is that if you go buy a new computer (which will probably have Win10 pre-installed anyways) you need to make sure you have software versions supported on those chipsets/CPUs.

And they did offer free upgrades.


Sometimes doing something different is expected when you are the market leader.


Especially when your market lead causes a lot of lock-in, like it does in the software world.

I.e. many people have to run Windows to do their job and often even have to run Windows 7 specifically, but might still need new hardware in the future.


> many people have to run Windows to do their job and often even have to run Windows 7 specifically, but might still need new hardware in the future.

Microsoft is under absolutely no obligation to support Windows 7. Those "many people" should and will be forced to upgrade, otherwise a substitute will appear and take over their jobs/company/market.


Unfortunately for the coherency of that argument, the same point applies to MS itself.

Microsoft is under no obligation to cater to the needs of any customers whatsoever.

Which is fine. MS can shoot itself in the head if it wants to.

But there will be consequences for the company.


I have been developing for Windows since the 3.1 days.

Every year is supposedly the year of the Linux desktop. Meanwhile I got fed up with trying to keep GNU/Linux running on my laptops.

Even the Asus Netbook I bought with Ubuntu support out of the box had wlan issues that took around 6 months to get sorted out.

People keep complaining, but the desktop market hardly changes.

Now the lower margins hybrid tablets/laptops is flooded with 2GB/32GB eMMC Windows 10 netbooks.


People keep complaining, but the desktop market hardly changes.

Exactly. The desktop/laptop market has been stagnating for some time. That's partly because the average hardware in those categories reached the point of being good enough for the average user. Personally, I think it's also partly because much of the PC software industry has been stuck in a rut for the past few years. Overall, for most users, the platform simply hasn't offered anything new that they couldn't already do with their 5+ year old gear.

In areas that do benefit substantially from newer hardware, like gaming or CAD or multimedia creative tools, the traditional PC has still been doing pretty well. There have been a lot of significant advances in areas like SSDs, graphics cards and monitors. There have been lots of advances in smaller, low-energy versions of relatively powerful components that have enabled high-end laptops to do things only chunky desktop workstations could do a few years ago. But these areas are only relatively small parts of the overall PC/laptop sector.

Meanwhile, entire sectors like smartphones, tablets and web apps have taken off like a rocket, by providing hardware that supports new and very different use cases, software that takes advantage of those new opportunities and, almost as importantly I suspect, software that typically is cheap and "just works".

Microsoft had well over a decade of almost totally unchallenged market dominance to figure out user-friendly installation, maintenance, removal and security/sandboxing of applications on Windows, and it rearranged the deck chairs a bit here and there. Apple came along with iPhones, almost one-touch installation from an app store and a [dumbed down|simplified] interface that anyone could use effectively, and they became the biggest company in tech in a fraction of that time.

What concerns me most about Microsoft's current direction is that they seem so determined to chase the cheap/easy sector and alternative revenue sources, which have been so effective for the likes of Apple and Google, that they're losing the default powerful/flexible platform that they've provided for the past two decades in the process, effectively stepping a long way backwards in that sector. The trouble is, because Microsoft have been so dominant in that sector for so long, where do those who still value that power and flexibility go instead, even if they are willing to pay a premium to get it?


It will happen just like it has with every other market before: after reaching a certain plateau, only a niche will care.

How many people actually tune their cars, especially modern ones that require all sorts of on-board computers?

Or customize their VCRs, TVs and so on?

PCs have become what every other home computer system already was: plain appliances.

Before the PC, all other home computer systems had all of their OS, or at least part of it, in ROM and were mostly only expandable via external devices on their connection port; very few models had internal expansion bays.

The market has come to realize that the PC's flexibility doesn't pay any more in the age of "good enough" hardware and razor-thin margins, so it's back to the old appliance model.

As for alternative OSes, Apple isn't an alternative in the majority of the world. In my home country people earn on average 500 euros; only the upper layer can afford Apple computers.

ChromeOS is hardly practical, and I never saw anyone using one in Europe. Just a few shops in Germany trying to get rid of them at any price.

Android might be a solution, but it remains to be seen how the desktop version of Android N really works out in the wild.

As for GNU/Linux systems, I stopped considering them, as I have yet to see the typical stores where average people go to buy computers invest in a properly packaged whole-stack experience.

So this leaves us with Windows, bad or not, that the majority of people already know.


I think it's notable too because maintaining compatibility is something that Microsoft seemed to value and spend significant resources on.

Are there any Windows APIs that have been actually dropped and not just deprecated?


Linux lets you run new kernels on old distros, though.


Yeah, but you are on your own if you use newer stuff than what the distro provides. Distros and Microsoft probably picked stable versions that they tested and know work.


The best Linux analogue I can think of is systemd. It's not about resisting minor changes or sticking to an obsolete OS and expecting the vendor to support you indefinitely. It's about the rug being pulled out from under your feet and completely fucking over everything you liked about the platform, while all the people you hoped would take a stand, bow their heads and step meekly into line.

With Linux, you can still install an alternative distro that doesn't use systemd. Or you could switch to a BSD. With Windows, you could keep using 7, up until now. Now you will eventually have no choice. It's as if systemd were completely embedded in the kernel and every alternative distro and *NIX nuked from orbit. Only, rather than mostly affecting sysadmins, this affects hundreds of millions of ordinary users.


> The best Linux analogue I can think of is systemd. It's not about resisting minor changes or sticking to an obsolete OS and expecting the vendor to support you indefinitely. It's about the rug being pulled out from under your feet and completely fucking over everything you liked about the platform, while all the people you hoped would take a stand, bow their heads and step meekly into line.

I'm not a fan of systemd, but I think you're way off base. First, this is more of a hardware support issue than a revamped init rapidly growing into a full OS like a cancer. Second, this is the way it's been done since the early days of Windows; it's only in the headlines because Windows 10 is a good punching bag.

> With Windows, you could keep using 7, up until now.

Bullshit. You can keep using Windows 7 until the day your computer dies. There's no magical switch being flipped that suddenly renders your existing Windows 7 installation inoperable the day a Kaby Lake CPU hits the local reseller's shelves. You're being deliberately disingenuous here.

> It's as if systemd were completely embedded in the kernel and every alternative distro and *NIX nuked from orbit.

No, no it isn't. Again, this is a hardware support/driver thing, and has nothing to do with what's going on in Linux land. For that matter, who exactly is going to "nuke every alternative distro and *NIX from orbit"? While systemd has taken over most of the major Linux distros, there will always be some like Slackware and Alpine that run perfectly fine without it, and no one from the systemd cabal will care. I'd certainly love to know how you think the BSDs will be wiped out by the spread of systemd too, considering that Poettering and Sievers intend for systemd to remain Linux-only.


[flagged]


We've banned this account for repeatedly violating the guidelines after we've asked you not to do so. We're happy to unban accounts if you email hn@ycombinator.com and we feel you'll comment only civilly and substantively in the future.

https://news.ycombinator.com/newsguidelines.html


Depends on what kind of production you mean. In your own projects you should set the C++ standard yourself anyway, so this is a non-issue. If you are a distribution maintainer, it probably means you have to patch some makefiles that assume gcc always compiles with C++98.


This will be fun (not) for distributions. Clang and G++ also seem to be slower when a newer C++ standard is enabled. I don't really buy the argument that changing the default will help new users who want to use C++11 features. Why not make C++14 the default then?


>Why not make C++14 the default then?

I might be misunderstanding something, but gcc 6.1 does make C++14 the default.


No, you're right, I misread it.


This depends on the distribution. Fedora, for example, has GPU switching using PRIME working fairly well and disables the dGPU automatically by default.


Wasn't aware of that, though the current performance of nouveau is pretty horrendous, at least for newer cards. It's a bit unfortunate, but to have an essentially working card you must have the proprietary drivers installed.


This is just a mirror and not the repo the devs actually work on, so I don't think it's very taxing on the resources.


What also hinders open Wi-Fi adoption in Germany is a law called "Störerhaftung", which basically states that the owner of the Wi-Fi is liable for all damage that users of his Wi-Fi cause, for example filesharing, hacking, whatever. Keeping track of all users is not an option that all free Wi-Fi operators have. I'm sure other countries have similar laws. Does the Netherlands not have such a law?


I just came across Airfy which apparently gets around this by tunnelling (I assume) all traffic and taking all legal liability and risk. I've also read that this Störerhaftung law is set to change this year, specifically to allow things like free Wi-Fi in businesses.


IIRC there is an exception in the law for ISPs and Airfy registered as an ISP.


Microcode is not written to the CPU; it gets loaded on every boot. This can happen during the BIOS POST, during the OS bootloader, or even while the OS is booting. Therefore, yes, it's possible to run older microcode (at least on Linux), since you just have to not load the newer version on boot. If the BIOS contains the new microcode, you can flash the previous version of the BIOS.
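As a sketch of checking this on Linux (the paths shown are the common ones for Intel; distro details vary):

```shell
# show the microcode revision a CPU core is currently running;
# the field may be absent in some VMs, hence the fallback message
grep -m1 microcode /proc/cpuinfo || echo "no microcode field (VM?)"

# The kernel's early loader reads blobs from the initramfs (typically
# staged under /lib/firmware/intel-ucode/); staying on an older revision
# just means not packing a newer blob in there. A late (re)load can be
# triggered at runtime, but only to a NEWER revision:
#   echo 1 > /sys/devices/system/cpu/microcode/reload   # as root
```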


> Microcode is not written to the CPU; it gets loaded on every boot. This can happen during the BIOS POST, during the OS bootloader, or even while the OS is booting. Therefore, yes, it's possible to run older microcode (at least on Linux), since you just have to not load the newer version on boot. If the BIOS contains the new microcode, you can flash the previous version of the BIOS.

Did you read the last paragraph of my message? Because you're not really disputing anything I said. (To clarify, when I say "You cannot load old microcode anywhere", I define "old" to mean "older than the currently running microcode", i.e. you cannot downgrade it at runtime after a new one has been loaded to RAM.)

If you're willing to run outdated system firmware (with associated bugs, security vulnerabilities, etc.), you can do it - just like I said in the message you're replying to. But that's not what I'd call a good solution.


CPUs are complicated pieces of technology. During the manufacturing process, some parts have a better quality grade than others. The better quality parts allow some overclocking without producing errors and therefore they get put into the overclockable K-processors. The worse parts get put into non-overclockable processors and run fine using the default voltage.

Some of the non-overclockable CPUs might work fine after overclocking, some might not. Intel definitely doesn't want the negative press when some kid decides to overclock their non-K CPU and breaks it in the process. So I understand the decision.


Some of the non-overclockable CPUs might work fine after overclocking, some might not. Intel definitely doesn't want the negative press when some kid decides to overclock their non-K CPU and breaks it in the process.

Have you participated much in the overclocking community? The whole point is that every CPU chip is different and can be overclocked by different amounts, some almost not at all. There is no "negative press", since anything past stock speed is a bonus which is what overclockers are trying to get. If CPUs were not working at stock speeds, that would be a reason for "negative press".


> Have you participated much in the overclocking community? The whole point is that every CPU chip is different and can be overclocked by different amounts, some almost not at all.

On the other hand, there is no way that you can actually determine how far a CPU can be overclocked and still maintain full functionality, so it might be best to limit overclocking to systems that will not be used for something of high financial or safety value.

The problem is that fundamentally the hardware is still analog. Digital is an abstraction on top of the underlying analog system. In the digital abstraction, a signal changes instantaneously from 1 to 0 or from 0 to 1. In the underlying analog system, the components carrying the signal have capacitance and resistance. Changing the high voltage that represents 1 to the low voltage that represents 0, or vice versa, involves discharging or charging that capacitance through that resistance, and that takes time.

This sets an upper limit on how quickly that signal at that particular point in the circuit can change digital state.

There are also other ways the analog nature of the underlying circuit leaks into the digital realm. Neighboring components that are in the digital abstraction completely isolated from each other (except through intentional connections) might be coupled by stray capacitances and inductances. This can let signals on one cause noise on the other, or the state of one could change how fast the other can change state.

When a chip is designed the designers can figure out what areas are the most vulnerable to potential analog problems. They can incorporate into their tests checks to make sure that these areas are OK when the chip is operated in spec.

The ideal scenario is that if you clock a chip fast enough to break something, the chip blatantly fails and so you find out right away, and can slow it down a bit.

The frightening scenario is a data-dependent glitch, where you end up with something like: if the ALU has just completed a division with a negative numerator and an odd denominator, and there has just been a branch prediction miss, then the zero flag gets set incorrectly.
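This is also why serious overclock stability testing relies on redundant computation: run the same deterministic workload over and over and compare results, so rare silent errors show up as mismatches. A toy sketch of the principle (not a real stress tool like Prime95, just the idea):

```python
def stress_check(iterations=100_000):
    """Repeat a deterministic computation and compare each run to the first.
    On a stable CPU every pass matches; a marginal overclock can produce
    rare, data-dependent mismatches long before it crashes outright."""
    reference = None
    for i in range(iterations):
        # Deterministic integer workload; same inputs must give same output
        acc = 0
        for n in range(1, 100):
            acc = (acc * 31 + n * n) % 1_000_003
        if reference is None:
            reference = acc
        elif acc != reference:
            return f"mismatch at iteration {i}: {acc} != {reference}"
    return "stable"

print(stress_check())
```

Real tools use much heavier workloads (FFTs, AVX-saturating loops) precisely to provoke those marginal, state-dependent paths.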


Sorry, the negative press argument is utter nonsense.

If you run any hardware outside specifications, you expect it to fail. People brick phones and ruin engines but there isn't a backlash against people trying to jailbreak their phone or modify their cars. If anything, the people that matter —the enthusiast market for these devices— are demanding that their devices be more customisable. The press and other consumers don't give two hoots about little Jimmy trying to rice 5GHz out of his $100 CPU and turning it into liquid magma. Stupid kid was stupid.

The opposite is true though. If lil Jim manages to get a $5000 part for $100, other consumers are going to factor that into their purchasing decisions.

What is most concerning is that this is a part that has been out and about for a little while. There are dozens of guides recommending certain CPUs for this that Intel are going to patch up now. The articles and their recommendations will remain out there though. It's false advertising by the back door.


And that's just not true.

There would be no negative press for Intel. Everyone with the slightest knowledge about overclocking knows that overclocking can damage your parts. And as stated, parts breaking without a voltage increase is highly unlikely. But still: assume I buy an Intel non-K processor, base-clock overclock it, and it breaks. How on earth would I be able to produce negative press for Intel by publishing that?

It's simply profit optimization. K processors cost more, and people who wanted to overclock had the option to buy non-K, which reduced sales of the K line. Also, the i7-6700 is clocked well below the i7-6700K, so for a while it was a nice option to get the cheaper version and push it up to K level, saving about 100€ (prices have since changed).

Behavior like that is why I buy AMD.


"Everyone with the slightest knowledge about overclocking knows that overclocking can damage your parts."

It is not necessarily about damaging your parts; that would be the least of their worries. Unreliability of the CPU is the real problem. Your CPU might be 20% faster, but if it incorrectly computes some number in your spreadsheet, corrupts a file, or, worse, silently corrupts your file system, or, even worse, makes hardware (a drone, a self-driving car, a nuclear facility) behave incorrectly, they will not just have angry customers but likely also lawsuits filed against them. (Yes, they might win those lawsuits, but not necessarily easily; people would argue that Intel should have closed that hole, given that they knew it was being misused.)

Also, that 'everybody with the slightest knowledge about overclocking knows' is only relevant as long as overclocking remains a niche thing. If it were mainstream, many of its users would not have 'the slightest knowledge about overclocking'. This microcode update helps keep it that way.


> they will not just have angry customers, but likely also lawsuits started against them

Which is why they've prevented over-clocking the whole time, except they haven't.

Their chips that allow over-clocking and the chips that don't are physically the same chips; there's nothing special about them. Intel is doing this only so that you can't buy a cheaper processor and get a more expensive processor's performance.

I bought a 600MHz Celeron once, when the range was around 600MHz-1GHz. It overclocked stably to 900MHz. In essence I got a much faster processor for a much lower price. That is what Intel is fighting against, not some mythical lawsuit over life and limb.
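For context, that kind of overclock typically came from raising the bus clock on a multiplier-locked chip: the core frequency is just bus clock times multiplier. The numbers below are illustrative Celeron-era figures, not exact specs:

```python
def core_clock_mhz(bus_mhz, multiplier):
    """Core frequency = front-side bus clock x (locked) multiplier."""
    return bus_mhz * multiplier

# e.g. a chip shipped on a 66 MHz bus with a locked 9x multiplier...
stock = core_clock_mhz(66, 9)    # 594 MHz, marketed as ~600 MHz
# ...run on a 100 MHz bus instead: same multiplier, ~50% higher core clock
oc = core_clock_mhz(100, 9)      # 900 MHz
print(stock, oc)
```

Since the multiplier was locked, the only knob was the bus speed, which is why motherboards with selectable FSB settings were so popular with overclockers.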


Overclocking isn't just a joy ride. It gives you more for your money, reducing the demand for the more premium version.


I've never heard of a CPU "breaking" during non-LN2 Level OC.


I have, but that was something like 15 years ago now


Well, a theoretical attack is worse than no theoretical attack. Especially if there are perfectly fine protocols available that are IND-CCA2 secure.

