I was thinking just yesterday that the research Anthropic shared about how easy it is to poison training data was unlikely to have been conducted out of the goodness of their heart.
LLMs are one thing, but when you bring up the ES-on-AWS example, as outlined in the article, the problem is not the software being used; it's the software being _made proprietary_. It's about free and open software remaining free and open, especially to the end user.
Arguably it didn’t see widespread commercial adoption for 30 years, and you wouldn’t expect fundamental design flaws around commercial incentives to manifest before then.
A flaw can be fundamental but not immediate. It's probably better to say it's a fundamental flaw of the open web: the system collapses as the number of bad actors increases, and there is no way to keep bad actors out while still calling the system an open web.
For non-technical people, the current meteoric rise of AI comes down to the fact that AI is generally synonymous with "it can talk". It never _really_ registered with the wider audience that image recognition, various filters, or whatever classifiers they may have stumbled upon are AI as well. What we have now is AI in the truest sense. And executives are primarily non-technical.
As for the technical people, we know how it works, we know how it doesn't work, and we're not particularly amused.
Technically no (except for the gradual performance drop they introduce, plus the occasional TPM bullshit), but of course in practice, companies see this as a choice between spending money back-porting security fixes to an ever-growing range of hardware, and making money by not doing that and forcing everyone to buy new hardware instead.