The ECC information is stored in separate DRAM devices on the DIMM, which accounts for some of the increased cost of ECC DIMMs at a given size. The extra memory for ECC is typically not included in the marketed capacity, so a 32GB DIMM with ECC and a 32GB DIMM without it will have differing numbers of total DRAM devices.
I think you responded to the wrong person, unless you think I was implying that the extra bits needed for ECC didn’t need extra space at all? I wasn’t suggesting that - just that they aren’t like a checksum that is stored elsewhere or something that can be ignored - the whole 72 bits are needed to decode the 64 bits of data and the 64 bits of data cannot be read independently.
If we're talking about standard server RDIMMs with ECC (or the prosumer stuff), the CPU-visible ECC (excluding DDR5's on-die ECC) is typically implemented as a sideband value you could ignore if you disabled the correction logic.
I suppose what winds up where is up to the memory controller, but (for DDR5) in each beat of a BL16 transaction you're usually getting 32 bits of data and 8 bits of ECC (per sub-channel). Those ECC bits are usually called check bits, CB[7:0], and they accompany the data bits DQ[31:0].
If you're talking about transactions for LPDDR, things are a bit different there, though, as the ECC has to be transmitted in-band with your data.
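To make the check-bit idea concrete, here is a toy SECDED (single-error-correct, double-error-detect) code in Python. This is a Hamming(7,4) code plus an overall parity bit, purely for illustration: the actual code used for DDR5 sideband ECC is chosen by the memory controller and covers 32 data bits with 8 check bits, not 4 with 4.

```python
def secded_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into an 8-bit SECDED codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # covers codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4            # covers codeword positions 4,5,6,7
    cw = [p1, p2, d1, p3, d2, d3, d4]    # positions 1..7
    p0 = 0
    for b in cw:
        p0 ^= b                  # overall parity enables double-error detection
    return cw + [p0]

def secded_decode(cw):
    """Return (data, status); status is 'ok', 'corrected', or 'uncorrectable'."""
    c = list(cw[:7])
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # position of a single-bit error, or 0
    overall = cw[7]
    for b in c:
        overall ^= b                     # 0 if overall parity checks out
    if syndrome == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:
        # odd number of flipped bits: assume one, correct it (may be p0 itself)
        if syndrome != 0:
            c[syndrome - 1] ^= 1
        status = 'corrected'
    else:
        # nonzero syndrome but even parity: two bits flipped, cannot correct
        status = 'uncorrectable'
    return [c[2], c[4], c[5], c[6]], status
```

The key property this demonstrates is the one discussed above: the check bits travel alongside the data, and a controller that disables the correction logic can simply ignore them.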
We are talking about errors happening in user space applications with ECC operating normally and what the application ultimately sees.
My point is that when writing an app you wouldn’t be able to “not use” ECC accidentally or easily if it’s there. It’s just seamless. I’m not talking about special test modes or accessing stuff differently on purpose.
Interesting that DDR5 is different from DDR4: 8 check bits per 32 data bits is double the ratio of 8 per 64, so the extra overhead must have been warranted.
Wow, I feel like you must have a pretty unique specialization because you are doing incredibly well for ICT5 (Congrats!) There are ICT levels above 6 but I feel like in practice those are very rare and you could easily spend decades working and not make it above ICT6 in many divisions.
I reached out to Mercury Research (the company whose research is being quoted) and the articles are missing some clarifying information.
The overall CPU market share number also includes both IoT and SoC figures, which are not included in the other numbers, and in which Intel's shipments have declined substantially while AMD's SoC products (including game consoles) make up a large share. This is what's responsible for the disparity.
They also added that the data was distributed with a helpful clarifying note that the news articles are mostly omitting:
"Please keep in mind that due to the inventory corrections taking place, that the statistics and share movements reported here in the past few quarters -- and likely for the first half of 2023 -- are more reflective of the suppliers differing in the depth and timing of their inventory corrections, rather than indicating sales-out share of the PC market, which is something we probably won't know with any accuracy until late in 2023."
I’m guessing ‘SoC and IoT’ includes things like the CPUs in NAS appliances. I have noticed recently a huge proliferation of AMD Ryzen CPUs in these devices when a couple of years ago it was Celerons/Pentiums.
But I suspect they rejected it because they don't care about IGP performance, just as they rejected Intel's "Iris" Broadwell CPUs and the hybrid AMD/Intel chip.
I love my Synology and love that they are going AMD. While some argue that having video transcoding on a NAS is wrong, it’s also very handy, but the AMD options I’ve seen aren’t a patch on quicksync.
That tech is unbelievably good and might be the only bit of Intel I like.
I bet they will in due time. They have to recompile their whole stack to ARM, which probably isn't easy, but ARM is eating the world. It's only a matter of time.
The 80-core ARM servers Hetzner provides are great. A nice midpoint between a slower general-purpose x86_64 server and an expensive and brittle GPU server.
I can easily see datacenters with these things in the future.
Low-end NASes have been using ARM for a long time.
The higher end x86 NASes are advertised closer to "home datacentre in a box" solutions that offer VM/container runtimes and third party software markets.
So far, x86 is significantly more user friendly and performant for these use cases: the "desktop-ish performance for desktop-ish prices" range doesn't really have many hardware options (Apple certainly won't sell theirs to OEMs, and the Snapdragon 8c is a bit on the low end, and the real data centre ARM many-core monsters are too big); and the software offerings aren't quite as user friendly either.
Measure Response: Determine what an adversary nation does in response to the violation (what resources are scrambled, where do they come from etc)
Messaging: All the permutations of two sets of government folks trying to send a message the pair mutually understand re: defense etc
Tie up resources: A low-cost provocation may divert higher-cost resources and tie them up for a longer period of time, since it's a dwelling threat
Acquire signal information: Use sensors and measurement systems outside of the threat to measure the locations and signatures of tracking systems deployed to assess and monitor the threat
Deploy lightweight physical payloads: dust something of interest over an area etc
I would imagine most of the very large companies have a treasury group that's managing their excess cash, and that those people are probably offering lots of opinions on general economic direction. Couple that with contact with lots of their peer group doing the same, the economic consulting groups they're all hiring, and contact with a bunch of bankers, and I'd assume they'd start feeling they have an opinion. Add to that what they see in their own business when they're broad enough, and I can see why they'd start expounding on the topic.
Maybe it's better in some programming languages, but my experience with Verilog/SystemVerilog output is that it generates a design with flaws almost every time (but very confidently). If you try to correct it with prompting, it comes up with reasonable-sounding responses about what it's fixing, then just creates more wild examples.
One pretty consistent way to see this is to ask for various very simple designs, like an n-bit adder: it will almost always do something logically or syntactically incorrect with the carry in or carry out.
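For reference, the carry chain the models keep fumbling is simple to state. Here's my own sketch of an n-bit ripple-carry adder in Python (modeling the hardware bit by bit, not anyone's generated Verilog), showing where carry-in enters and carry-out emerges:

```python
def ripple_carry_add(a, b, n, carry_in=0):
    """Add two n-bit values one full adder at a time; return (n-bit sum, carry_out)."""
    carry = carry_in
    total = 0
    for i in range(n):
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        s = abit ^ bbit ^ carry                          # full-adder sum bit
        carry = (abit & bbit) | (carry & (abit ^ bbit))  # full-adder carry out
        total |= s << i
    return total, carry
```

For example, adding 15 + 1 in 4 bits wraps the sum to 0 and raises carry-out, which is exactly the edge case a generated adder tends to get wrong.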
ChatGPT has acted as an advanced rubber duck for me. It outputs a lot of bullshit but so often it gives me the prompt or way of thinking needed to move on.
And it’s so much faster than posting on stack overflow or some irc. It doesn’t abuse you for asking dumb questions either.
No, that tweet predates the more recent ban tweet by 2 days. I just checked and ElonJet and a couple of the other accounts definitely show as suspended/banned to me.
Make no mistake, the issue is in no way resolved on newly manufactured 1st-gen Pros. Newly manufactured stock is more likely to be covered by the original purchase warranty, so there is less pressure on Apple to extend support.
I recently had my pre-Oct 2020 Pros replaced with newly manufactured stock under this support program. Within less than a month the replacements developed the same sound issues, confirmed by Apple's diagnosis and another free replacement.
I don't think I use them in an unusual way - I don't even wear them during exercise. There is just something inherently flawed in the design that causes the sound quality to gradually degrade by a significant amount.
There's a pretty good set of diagrams and descriptions of the faults in this paper: https://dl.acm.org/doi/10.1145/3725843.3756089
Also, to the parent: there's an updated public paper on DDR4-era fault observations: https://ieeexplore.ieee.org/document/10071066