Hacker News | bahahah's comments

Doubtful. Their terms are explicit and ironclad. Make no mistake, they are crafted to protect themselves first.

What group of backers would have the resources and organization to win a case against the crowdfunding sites?


The proceeding is unlikely to change anything. The company will just dissolve. Most of the big scams aren't even in U.S. jurisdictions.

Only people being less ignorant and gullible will prevent them from being scammed.

There might be a lawsuit against the crowdfunding sites, however their terms are ironclad in the way they shed liability -- this is their lifeblood. Any judgement will just result in slight rule or term changes. They are making lots of money now and can afford the lawyers to remain free of liabilities.


They won't likely see the money either way. If it was spent on development, it is gone -- so if this is a new company, it will just file for bankruptcy or dissolve. Washington state likely does not have a way to force restitution on a company in another state either.

True scammers will be long gone anyway, so this will only hurt people that want to try to build something via crowdfunding.

Washington state is now considered hostile to businesses that use crowdfunding, so you should probably not start these types of projects or businesses there.

It will be interesting to see whether any companies refuse backers from Washington.

More careful wording of backer reward commitments can likely get around such legal liabilities.


> Washington state is now considered hostile to businesses that will use crowdfunding

I think the WA AG is acting perfectly reasonably here and fully expect more states to follow suit. The CPA applies to crowdfunding campaigns as much as it applies to anything else.


The timing is certainly right for this to take off, given the ongoing California drought.


From a marketing perspective, sure, but are showers a major portion of the water used in California? Spend that $400 on xeriscaping instead.


The only bad press is no press. Especially so when no one knows who you are.

And now you know who Pando is: mission accomplished. They just wanted to join in riding the demo day media wave, but were denied access to actually write about the demos.


This is the way media/PR works in the US.

The media develops and markets to a consumer base, and then sells its influence over those consumers to its real customers. You generally think of this as just advertising mixed in with the medium, but many articles are specifically placed; others are just fluff to reinforce the marketing identity.

Institutions that stick to authentic journalism cannot compete against those supporting capital interests, and most consumers cannot tell the difference.

I am just really curious how the cash or influence flows. Does a YC partner have a majority or significant investment in TC? Do you just pay outright for coverage? Do high-ranking journalists, editors, or partners at TC have capital interests strongly connected to YC?


There are several storage class memories that are nearing commercialization. Intel is betting big on at least one of them. Most technologies in this class are orders of magnitude faster and have orders of magnitude better endurance than flash memory, while being only slightly slower than DRAM, yet non-volatile.

It is plausible that with another layer of in-package cache they could eliminate DRAM altogether, replacing it with ultrafast NVM. Imagine the resume/suspend speed and power savings of a machine whose state is always stored in NVM.


> There are several storage class memories that are nearing commercialization.

I'm very interested in this. Could you point out which technologies are nearly ready for commercialization?

My understanding is that the current cost is orders of magnitude higher per unit of storage for these new technologies compared to NAND flash or even DDR3 RAM. But of course, a dedicated fab could change that very quickly.


Well, nvDIMMs are available right now (from companies like Netlist, Agigatech, Viking, Smart, Micron). These are DRAM with an analog switch, a controller, and flash memory: when you lose power, the DRAM is disconnected from the processor and its contents are copied to the flash. The newer technologies might be cheaper, but I thought that so far their write performance is not as good as DRAM's.

The issue is the cache: the data is not non-volatile until it has been written back to DRAM. Even then, you need some advance warning of a power outage for it all to work.

Unibus (the bus for PDP-11 core memory systems) had an early-warning signal to give the memory controller a chance to write back the previous (destructive) read.


Components are available on the market now based on PCM, MRAM, and FRAM. I know that Intel has large productization (not research) teams working on a variant of SCM. "Near" means 2-3 years, though. Research exit to market-ready is always a 3-5 year cycle when process engineering is involved.


Is this basically memristors coming to market or are memristors still a few years off?


This should be useful for any type of NVRAM, be it battery-backed DRAM, MRAM, memristors or DMA-mapped flash.


SD cards are the lowest bin tier as well, given their low performance requirements and low margins relative to SSDs, embedded designs, etc. The leftovers and rejects tend to end up in that channel.

In that vein, the 64Gbit Micron devices may in fact be 128Gbit dies with half of their arrays dead -- so they may have a similar process node and reliability to the Samsung device.

The MLC is undoubtedly superior to the TLC, however.


Due to their close physical proximity, there will always be some degree of capacitive coupling between cells. This coupling causes a cell's potential to rise slightly when its neighbors are programmed. Having all of your neighbors programmed to the highest-potential state is the worst case, as the delta-V from coupling is greatest. If a cell's potential is shifted far enough, that cell reads back as a bit error.

Data randomization seeks to mitigate this issue by normalizing the distribution of states across the page. A single XOR key wouldn't do a very good job, for the reasons you noted. When I worked on flash, we used elements of the address to seed a PRNG for data randomization, so the XOR key varies across the entire device.
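For illustration, here is a minimal Python sketch of address-seeded scrambling. The seed mixing and function names are hypothetical, not any vendor's actual logic; the point is only that the keystream depends on where the data lands, so repeating patterns don't repeat on the die:

```python
import random

def scramble_page(data: bytes, block_addr: int, page_addr: int) -> bytes:
    # Fold the block and page address into one PRNG seed. The exact
    # mixing here is a hypothetical illustration, not a vendor scheme.
    seed = (block_addr << 16) ^ page_addr
    prng = random.Random(seed)
    keystream = bytes(prng.randrange(256) for _ in range(len(data)))
    # XOR against the keystream; applying the same function twice
    # restores the original data (XOR is its own inverse).
    return bytes(d ^ k for d, k in zip(data, keystream))

# A worst-case repeating pattern becomes an address-dependent pattern,
# and a neighboring page gets a different keystream entirely.
page = bytes([0xFF] * 16)
scrambled = scramble_page(page, block_addr=42, page_addr=3)
assert scramble_page(scrambled, block_addr=42, page_addr=3) == page
```

Because descrambling is the same XOR, the controller needs no extra state beyond the address it already has for the operation.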

There are other systems in place in flash to further mitigate these issues. All programming is adaptive, using feedback between programming pulses to hit the target. The pages within a block are intelligently ordered so that a programmed cell cannot possibly have all of its neighbors programmed from lowest to highest potential.

But yes, in general, if you had the right data stream, you could slightly degrade the BER, possibly past what the ECC can repair. There are a lot of systems in place, though, as NAND is inherently lossy to begin with. These issues are compounded by MLC designs, which have tighter margins per cell.

SSDs have yet another layer of system-level mitigation. I know of at least one manufacturer that disables NAND-level randomization in favor of encrypting every bit of data that is programmed. Some drives have enough redundancy that they can lose an entire flash die without losing data -- as if losing a disk in a RAID setup.
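To illustrate why full encryption subsumes randomization: a counter-mode keystream whitens even a pathological input pattern into a balanced, attacker-uncontrollable bit stream. A minimal sketch, using a SHA-256-based keystream as a stand-in for a drive's hardware AES engine (the key, nonce, and names are all hypothetical):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Counter-mode keystream built from SHA-256. Real drives use a
    # hardware AES engine; this is just a self-contained stand-in.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_page(data: bytes, key: bytes, nonce: bytes) -> bytes:
    ks = keystream(key, nonce, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

# An all-ones page (the coupling worst case) comes out bit-balanced,
# and XOR-ing with the same keystream again recovers the page.
page = bytes([0xFF] * 4096)
ct = encrypt_page(page, key=b"drive-secret", nonce=b"page-0001")
ones = sum(bin(b).count("1") for b in ct)  # close to half of 32768 bits
assert encrypt_page(ct, b"drive-secret", b"page-0001") == page
```

Unlike a fixed XOR key, an attacker without the key cannot craft a data stream that produces a chosen worst-case pattern on the cells.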

You probably shouldn't be storing anything important long-term on a device that programs NAND raw, i.e. flash drives and SD cards. They aren't designed or spec'd for high reliability.


This whole XOR scheme seems destined to fail! Why not just use a 64b/66b (or similar) encoding scheme?


The XOR scheme is perfectly good enough. If it were a real issue affecting customers, it would be replaced (but it isn't).

The XOR scheme is extremely cheap (compact in silicon) and does not need to operate serially on the data stream (good for performance). The only applications that use the NAND-provided randomizer are the cheapest of controllers; in fact, even the SD controller in the linked article used its own XOR scheme. A system designer can always turn off the built-in randomizer and replace it with whatever method they choose -- they all do, for various reasons. At the controller level it can be implemented in typically higher-performance, more compact logic processes, and it does not need to be duplicated for multichannel devices, as it would if it were in the NAND.


> The XOR scheme is perfectly good enough

...until someone finds a way to exploit it, as has happened with CD's "weak sector" copy protection schemes. It's only a matter of when it will happen, not if.


Corrupting the storage of a test pattern isn't particularly useful. Maybe you could cause premature tagging of bad blocks, wearing out a flash drive/card faster. If the system you are using allows these kinds of writes to your storage device, you have more pressing issues.

Only the most primitive SD/flash drive controllers actually use this scheme anyway -- encryption is much better at randomizing.

