Reuters earlier this year - "The development of the 4680 battery has been facing troubles, with the company losing 70% to 80% of the cathodes in test production compared with conventional battery makers, which lose fewer than 2% of their components to manufacturing defects, the report said."
The company L&F referenced in this article was supplying said cathode material.
ref https://www.reuters.com/technology/tesla-plans-four-new-batt...
This forum discusses information freedom pretty much all day every day. Now we have a real-world example of suppression of information in the US, which is rather rare, and people (see comments) using technology to evade it.
Most people with ad blockers don't realize how unusable the web is for those who don't have them. I think most would agree this is a poor state that industry incentives have landed us in, and with the web being distributed, it's hard to know how to fix it.
Similarly those who use Linux probably don't realize how bad Windows has got recently.
Microsoft has managed to replicate this awful UX problem on a system that they entirely control...
Yes. It's a slow boiling frog thing. Kinda like a bad relationship. You get used to the toxicity. But when you get out of it, it's soooo refreshing. Thank you everybody who made Linux on the desktop possible.
> Linux was designed to run on potatoes and has very little bloat over the years.
I think it's more that it was designed in the 80s-90s for hardware at the time, and hasn't added bloat or "requirements" since then. So as computers have gotten more capable Linux takes less of the overall capacity.
Well, I'd say it's almost the reverse of how it is with windows.
In windows, the bloat is built in by default. You don't get to choose how the start menu works; you get the windows default start menu and you better like the ads in it. It takes work to pull that garbage out.
In linux most stuff is opt in.
The other part of linux is most stuff isn't simply there running in the background by default. Firefox eats a decent amount of memory, but it's not doing that when I don't have my browser open.
Any Linux distribution comes with a lot of bloat, which is why it requires 30GB or so, rather than 30MB or so. Even the kernel is much bigger than it once was.
Nobody is yelling at you not to remove it, or trying to prevent you from removing it, or obscuring where it is and cross-linking everything to make it harder to remove, but it's still there and requires substantial work to remove, just like in Windows.
Samsung makes fast, expensive storage, but even cheap storage can max out SATA, hence there's no point in Samsung trying to compete in the dwindling SATA space.
Does this mean that we'll start to see SATA replaced with faster interfaces in the future? Something like U.2/U.3 that's currently available to the enterprise?
The first NVMe over PCIe consumer drive was launched a decade ago.
It's hard to even find new PC builds using SATA drives.
SATA was phased out many years ago. The primary market for SATA SSDs is upgrading old systems or maybe the absolute lowest cost system integrators at this point, but it's a dwindling market.
We used to have motherboards with six or twelve SATA ports. And SATA HDDs have way more capacity than the paltry (yet insanely expensive) options available with NVMe.
We used to want to connect SSDs, hard drives and optical drives, all to SATA ports. Now, mainstream PCs only need one type of internal drive. Hard drives and optical drives are solidly out of the mainstream and have been for quite a while, so it's natural that motherboards don't need as many ports.
It's admittedly been harder than it used to be... I've been less inclined to buy CDs over just using streaming audio; since I pay for YouTube to go ad-free, I use the music streaming kind of as a bonus.
On the Blu-ray front, I've tended to buy Blu-ray where available, but have bought DVD sets as well. There's also the high seas, so to speak, for content that is not available for purchase/rent. I'd actually pay for a good AI upscaler for DVD content if it worked under Linux (natively or via WINE). I left Windows outside of work a few years ago and I'm not going back... I'm perfectly happy to pay for good, useful software, even if I'm more inclined to look for open-source solutions first.
This article is talking about SATA SSDs, not HDDs. While the NVMe spec does allow for NVMe HDDs, it seems silly to waste even one PCIe lane on a HDD. SATA HDDs continue to make sense.
And I'm saying that assuming M.2 slots are sufficient to replace SATA is folly, because that only covers SSDs.
And SATA SSDs do make sense; they are significantly more cost effective than NVMe and trivial to expand. Compare the simplicity, ease, and cost of building an array/pool of many disks comprised of either 2.5" SATA SSDs or M.2 NVMe, and get back to me when you have a solution that can scale to 8, 14, or 60 disks as easily and cheaply as the SATA option can. There are many cases where the performance of SSDs going over AHCI (or SAS) is plenty and you don't need to pay the cost of going to full-on PCIe lanes per disk.
> And SATA SSDs do make sense, they are significantly more cost effective than NVMe
That doesn't seem to be what the vendors think, and they're probably in a better position to know what's selling well and how much it costs to build.
We're probably reaching the point where the up-front costs of qualifying new NAND with old SATA SSD controllers and updating the firmware to properly manage the new NAND is a cost that cannot be recouped by a year or two of sales of an updated SATA SSD.
SATA SSDs are a technological dead end that's no longer economically important for consumer storage or large scale datacenter deployments. The one remaining niche you've pointed to (low-performance storage servers) is not a large enough market to sustain anything like the product ecosystem that existed a decade ago for SATA SSDs.
Is it not fair to say 4x 4 TB SSDs is an example of at least a prosumer use case (the barrier there is more like ~10 drives before needing workstation/server gear)? Joe Schmoe is on the better half of Steam gamers if he's rocking a single 2 TB SSD as his primary drive.
On top of what the others have said, any faster interface you replace SATA with will have the same problem set because it's rooted in the total bandwidth to the CPU, not the form factor of the slot.
E.g. going to the suggested U.2 still leaves you looking for PCIe lanes to be available for it.
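A rough back-of-envelope (all numbers below are assumed round figures for a typical current consumer platform, not exact specs for any particular CPU or board) of why swapping the connector doesn't change that math:

    # Back-of-envelope: total lanes/bandwidth to the CPU is the limit,
    # regardless of whether the far end is SATA, M.2, U.2/U.3, or something new.
    # All constants are assumptions for a typical consumer platform.
    CPU_USABLE_LANES = 24          # assumed: lanes a consumer CPU exposes for GPU + drives + chipset
    GPU_LANES = 16                 # assumed: the usual x16 graphics slot
    CHIPSET_UPLINK_LANES = 4       # assumed: everything else shares a x4 uplink to the chipset
    LANES_PER_NVME_DRIVE = 4       # consumer NVMe drives are typically x4
    GBPS_PER_GEN4_LANE = 2.0       # roughly 2 GB/s per PCIe 4.0 lane

    lanes_left = CPU_USABLE_LANES - GPU_LANES - CHIPSET_UPLINK_LANES
    print(f"Lanes left for CPU-attached drives: {lanes_left} "
          f"(~{lanes_left // LANES_PER_NVME_DRIVE} x4 NVMe drives)")

    # Anything beyond that hangs off the chipset and shares its uplink.
    uplink = CHIPSET_UPLINK_LANES * GBPS_PER_GEN4_LANE
    for n in (2, 4, 8):
        print(f"{n} drives behind the chipset share ~{uplink:.0f} GB/s "
              f"=> ~{uplink / n:.1f} GB/s each when all are busy")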
My desktop motherboard has 4... not sure how many you need, even if 8 TB drives are pretty pricey. Though actual PCIe lanes in consumer CPUs are limited. If you bump up to Threadripper, you can use PCIe-to-M.2 adapters to add lots of drives.
The MSI motherboard I use has 3, and with the PCIe expansion card installed, I have 7 M.2 drives. There are some expansion cards with 8 M.2 slots.
You can also get SATA-to-M.2 adapters, or my fav is USB-C enclosures that hold 2 M.2 drives.
Getting great speeds from that little device.
It's more likely that third-party integrators will look after the demand for SAS/SATA SSD devices, and the demand won't go away, because SAS multiplexers are cheap while NVMe/PCIe is point-to-point and expensive to make switching hardware for.
Likely we'd need a different protocol to make scaling up the number of high-speed SSDs in a single box work well.
SATA just needs to be retired. It's already been replaced; we don't need Yet Another Storage Interface. Considering consumer I/O chipsets are already implemented in such a way that they take 4 (or generally, a few) upstream lanes of $CurrentGenPCIe to the CPU and bifurcate/multiplex them out (providing USB, SATA, NVMe, etc. I/O), we should just remove the SATA cost/manufacturing overhead entirely and focus on keeping the cost of that PCIe switching/chipset down for consumers (and stop double-stacking chipsets, AMD; motherboards are pricey enough). Or even just integrate better bifurcation support on the CPUs themselves, as some already support it (typically by converting x16 on the "top"/"first" PCIe slot to x4/x4/x4/x4).
Going forward, SAS should just replace SATA where NVMe/PCIe is for some reason a problem (e.g. price), even on the consumer side, as it would still support existing legacy SATA devices.
Storage related interfaces (I'm aware there's some overlap here, but point is, there's already plenty of options, and lots of nuances to deal with already, let's not add to it without good reason):
- NVMe PCIe
- M.2 and all of its keys/lengths/clearances
- U.2 (SFF-8639) and U.3 (SFF-TA-1001)
- EDSFF (which is a very large family of things)
- FibreChannel
- SAS and all of its permutations
- Oculink
- MCIO
- Let's not forget USB4/Thunderbolt supporting Tunnelling of PCIe
I think it's becoming reasonable to think consumer storage could be a limited number of soldered NVMe and NVMe-over-M.2 slots, complemented by contemporary USB for more expansion. That USB expansion might be some kind of JBOD chassis, whether that is a pile of SATA or additional M.2 drives.
The main problem is having proper translation of device management features, e.g. SMART diagnostics or similar, getting back to the host (see the sketch below). But from a performance perspective, it seems reasonable to switch to USB once you are multiplexing drives over the same limited IO channels from the CPU to expand capacity rather than bandwidth.
Once you get out of this smallest consumer expansion scenario, I think NAS takes over as the most sensible architecture for small office/home office settings.
Other SAN variants really only make sense in datacenter architectures where you are trying to optimize for very well-defined server/storage traffic patterns.
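FWIW, the SMART side is often workable today when the USB bridge supports SAT passthrough. A minimal sketch (assumptions: Linux, smartmontools installed, a bridge chip that actually translates the ATA commands; the device path is hypothetical):

    # Pull SMART data from a USB-attached drive via smartctl's SAT passthrough.
    # Which attributes actually come back depends entirely on the bridge chip.
    import json
    import subprocess

    def usb_drive_smart(device="/dev/sda"):  # hypothetical device path
        # "-d sat" = SCSI-to-ATA Translation through the USB bridge,
        # "--json" = machine-readable report, "-a" = all SMART info.
        out = subprocess.run(
            ["smartctl", "-d", "sat", "-a", "--json", device],
            capture_output=True, text=True, check=False,
        )
        return json.loads(out.stdout)

    if __name__ == "__main__":
        report = usb_drive_smart()
        print(report.get("model_name"), report.get("smart_status"))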
Is there any drawback to going towards USB for multiplexed storage inside a desktop PC or NAS chassis too? It feels like the days of RAID cards are over, given the desire for host-managed, software-defined storage abstractions.
I wouldn't trust any USB-attached storage to be reliable enough for anything more than periodic incremental backups and verification scrubs. USB devices disappear from the bus too often for me to want to rely on them for online storage.
OK, I see that is a potential downside. I can actually remember way back when we used to see sporadic disconnects and bus resets for IDE drives in Linux and it would recover and keep going.
I wonder what it would take to get the same behavior out of USB as for other "internal" interconnects, i.e. say this is attached storage and do retry/reconnect instead of deciding any ephemeral disconnect is a "removal event"...?
FWIW, I've actually got a 1 TB Samsung "pro" NVMe/M.2 drive in an external case, currently attached to a spare Ryzen-based Thinkpad via USB-C. I'm using it as an alternate boot drive to store and play Linux Steam games. It performs quite well. I'd say it's qualitatively like the OEM internal NVMe drive when doing disk-intensive things, but maybe that is bottlenecked by the Linux LUKS full-disk encryption?
Also, this is essentially a docked desktop setup. There's nothing jostling the USB cable to the SSD.
USB, even 3.2, doesn't support DMA mastering and is thus bad for anything requiring performance.
USB4 is just passing PCIe traffic and should be fine, but at that point you are paying >$150 per USB4 hub (because mobos have two at most) and >$50 per M.2 converter.
As @wtallis already said, a lot of external USB stuff is just unreliable.
Right now I am looking past my display at 4 different USB-A hubs and 3 different enclosures that I am not sure what to do with (I likely can't even sell them; they'd go for like 10-20 EUR and deliveries go for 5 EUR, so why bother; I'll likely just dump them at some point). _All_ of them were marketed as 24/7, not needing cooling, etc. _All_ of them could not last two hours of constant hammering, and it was not even a load at 100% of the bus; more like 60-70%. All began disappearing and reappearing every few minutes (I am presuming after overheating subsided).
Additionally, for my future workstation at least I want everything inside. If I get an [e]ATX motherboard and the PC case for it then it would feel like a half-solution if I then have to stack a few drives or NAS-like enclosures at the side. And yeah I don't have a huge villa. Desk space can become a problem and I don't have cabinets or closets / storerooms either.
SATA SSDs fill a very valid niche to this day: quieter and less power-hungry and smaller NAS-like machines. Sure, not mainstream, I get how giants like Samsung think, but to claim they are no longer desirable tech like many in this thread do is a bit misinformed.
I recognize the value in some kind of internal expansion once you are talking about an ATX or even uATX board and a desktop chassis. I just wonder if the USB protocol can be hardened for this using some appropriate internal cabling. Is it an intrinsic problem with the controllers and protocol, or more related to the cheap external parts aimed at consumers?
Once you get to uATX and larger, this could potentially be via a PCIe adapter card too, right? For an SSD scenario, I think some multiplexer card full of NVMe M.2 slots makes more sense than trying to stick to an HDD array physical form factor. I think this would effectively be a PCIe switch?
I've used LSI MegaRAID cards in the past to add a bunch of ports to a PC. I combined this with a 5-in-3 disk subsystem in a desktop PC. This is where the old 3x 5.25" drive bay space could be occupied by one subsystem with 5x 3.5" HDD hot-swap trays. I even found out how to re-flash such a card to convert it from RAID to a basic SATA/SAS HBA for JBOD service, since I wanted to use OS-based software RAID concepts instead.
> I just wonder if the USB protocol can be hardened for this using some appropriate internal cabling
Honestly no idea. Should be doable but with personal computing being attacked every year, I would not hold my breath.
> Once you get to uATX and larger, this could potentially be via a PCIe adapter card too, right?
Sure, but then you have to budget your PCIe lanes. And once you get to a certain scale (a very small one in fact) then you have to consider getting a Threadripper board + CPU, and that increases the expense anywhere from 3x to 8x.
I thought about it lately and honestly it's either a Threadripper workstation with all the huge expenses that entails, or I'd probably just settle for an ITX form factor, cram it with 2-3 huge NVMe SSDs (8TB each), have a really good GPU and quiet cooling... and just expand horizontally if I ever need anything else (and make VERY sure it has at least two USB 4 / Thunderbolt ports that don't gimp the bandwidth to your SSDs or GPU so the expansion would be at 100% capacity).
Meaning that going for a classic PC does not make sense if you want an internally expandable workstation. What's the point of a consumer board + a Ryzen 9950X and a big normal PC case if I can't put more than two old-school HDDs in there? Just to have better airflow? Meh. I can put 2-3 Noctua coolers in an ITX case and it might even be quieter.
My partner is / was a copywriter. She was already a bit fatigued by it even before the whole AI thing. She's still finding bits of work but is pivoting into AI herself now.
I think of it like all the other jobs from yesteryear that you hardly ever see. Lamplighter, elevator operator, farrier. People used to form gangs and smash up the mechanical spinning looms.
Usually the implication of this (very common) analogy is that people in the past were somehow behaving wrongly, despite the fact that anybody is right to fight savagely against dramatic disruption to the life they've built, regardless of what the best solution is theoretically. Though even beyond that, the comparison is thin. With AI disruption, the size of the total affected jobs in comparison to the entire economy, as well as the speed of the change, is much more significant.
I think they were behaving wrongly yes because the one constant in life is change whatever you do and whatever species you are. Adapt or die surely? The universe isn't a museum.
> anybody is right to fight savagely against dramatic disruption to the life they've built
Yeah, I'd built a whole lifestyle around armed robbery, and the cops had the gall to arrest me. It was dramatically disruptive!
Seriously, you do not have a "right" to keep doing whatever you've been doing, even if it wasn't destructive. Nobody owes you that. People aren't your serfs.
The village blacksmith of 1934, it was clear, had to learn new skills to survive.
“The influence of the automobile has driven the horse from the city’s streets,” according to the article. “The blacksmith now earns his livelihood by straightening automobile axles, repairing broken springs and welding frames.”
It's telling that you compare specialized creative work, like making art, to "jobs" like standing in an elevator.
Nobody would miss washroom attendants disappearing either. That is different from automating away the stuff that makes life interesting. Like AI startups telling you that their robot will spend time with your friends and family, so you don't have to. Being disgusted by that is not being a luddite, it's being a well adjusted human with aspirations beyond doomscrolling AI slop on tiktok/youtube.
Interesting - surely you'd have to trick Google into visiting the /search? url in order to get it indexed? I wonder if them listing all these URLs somewhere, or requesting that the page be crawled, is enough.
Since these are very low quality results surely one of Google's 10000 engineers can tweak this away.
> surely you'd have to trick Google into visiting the /search? url in order to get it indexed
That's trivially easy. Imagine a spammer creating some random page which links to your website with that made-up query parameter. Once Google indexes their page and sees the link to your page, Google's Search Console complains to you, the victim, that this page doesn't exist. You as the victim have no insight into where Google even found that non-existent path. (One common mitigation is sketched below.)
> Since these are very low quality results surely one of Google's 10000 engineers can tweak this away.
You're assuming there's still people at Google who are tasked with improving actual search results and not just the AI overview at the top. I have my doubts Google still has such people.
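For the site-owner side, one common mitigation is to mark internal search result pages as non-indexable, so spammer-crafted /search?q=... links can't land junk pages in the index. A minimal sketch (Flask, the route name, and the stub renderer are assumptions, not anyone's actual setup):

    # Serve internal search pages with an X-Robots-Tag noindex header so that
    # links to made-up /search queries don't get those pages indexed.
    from flask import Flask, request, make_response

    app = Flask(__name__)

    def render_results(query: str) -> str:
        # stub standing in for whatever actually renders the results page
        return f"<h1>Results for {query}</h1>"

    @app.route("/search")
    def search():
        query = request.args.get("q", "")
        resp = make_response(render_results(query))
        resp.headers["X-Robots-Tag"] = "noindex, nofollow"
        return resp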
I messed around with our website trying URL-encoded hyperlinks etc. but it was all escaped pretty well. I bet there's a lot of tricks out there for those with time on their hands.
Why anyone would bother creating content when Google's AI summary is effectively going to steal it to intercept your click is beyond me. So the whole issue will solve itself when Google has nothing to index except endless regurgitated slop and everyone finally logs off and goes outside.
I often find myself in the bizarre situation of backing out of a suppliers website to google their contact number. A bit like when you want pricing on something without falling into a sales funnel.
sign up for the enterprise plan, get an account manager assigned to your account, request support from them, they’ll say you need to upgrade your plan to have a solutions engineer assigned to the account, upgrade your plan, then BOOM… you get your support query answered in only 3-5 business days.
Since there is a lot of space out there in the ocean I wonder if some kind of big floating energy station could be a thing, using middle of the ocean wind, tidal or solar. I guess you don't have to pay anyone for the space or worry about too many regulations etc.
> I guess you don't have to pay anyone for the space or worry about too many regulations etc.
I'm amused you think offshore energy is lawless. It's the same assumption that had the entire maritime community laughing at the clowns behind "Seasteading" and the amusing MS Satoshi "cryptoship".
I found your reply unnecessarily snarky. The possibility I'm pondering would be to have a facility in deep ocean, far away from any country's coastline, but near shipping lanes.
I like your idea. We can now generate a substantial amount of power from floating wind turbines. Coupled with floating batteries (i.e. on cargo ships), we could perhaps build floating charging stations along major shipping routes. There would be no need for nuclear or to only charge at ports. Would it work?