No single point of control over the means of production.
If Microsoft has cheap labor from bots, and you do too, it levels the playing field dramatically. You won't have to pay money for access to similar capabilities.
The Luddites were angry because they were replaced by machines they couldn't afford to own themselves and profit from. With open source machines, this doesn't happen. We all have machines, and then we all have more equal opportunity.
I mean, I've got machines, but I don't have Microsoft's bank account. Can't they leverage that advantage in infrastructure and regulations to roll right over me?
I feel like I've had a similar conversation when I was talking to someone arguing that the 2nd amendment was imperative for citizens to defend themselves against the government. Like bro, how is your shotgun winning against an aircraft carrier?
The 2nd amendment was meant to allow for guerrilla warfare against a government which didn’t want to wipe out the entire population which was sympathetic to the rebels. This context mirrors what the US states had in their war of independence.
For all other kinds of wars, yes, the government can crush you. That was true even in the 1700s, and has only gotten more true today.
But we still won't be able to afford the processing power. It's ultimately the same as crypto - it promises to democratize (finance/copyright infringement/talking to a computer) but doesn't if you peel away the layers.
The means of production is both the software AND the hardware.
If the models don't become more efficient to run, then it won't be sustainable for almost any company to scale them up to be smarter and have them so heavily used that no one has jobs.
So we have to assume you'll be able to run them on much more efficient hardware soon.
Anyway, I’m not necessarily talking about individuals running their own, but groups of people.
If we're saying that only groups of people will be able to own the means of production I have a guess who those groups (maybe we should call it a class?) will be.
Imagine a world where 90% of people are laid off due to AI. Do you really think that 90% of people will just sit around and do nothing about it?
No. People, with all their diverse backgrounds, expertise, and ingenuity, will come together and build what's required for their survival, and because of open source technology, they will also have access to the same advanced AI systems as M$.
But what are they using the AI for? Having AI just to have AI will have no value if everyone has it, and the groups that were using developers or writers before (for marketing or software development) will still have no need. What will be valuable about the tool by itself?
Wouldn't the skill in using the tool matter? GH Copilot, so far, has been kind of meh for me - the instances where it produces something actually useful, even after giving it lots of hints, were pretty rare until now. But, two or three generations down the line, it might actually be a tool capable of making a difference. Then, just like with Google when it appeared, people who can work with that tool effectively will be in demand for some time, until either the tool is deliberately broken or something better comes up.
Because the bot would have to be producing something of value. So now everyone will have access to something that can also produce value, without having to pay a gatekeeper. You don't think that's important?
I do think it's important, I'm just not seeing how that helps people who have lost their jobs/home.
I'm not being argumentative here. I'm not understanding, and want to understand how this would be applied to help people in that position.
The only way I can think of is if the solution is that they will all be AI operators, in which case the solution on offer is another variant of "retraining" -- but not everyone will be suitable for that sort of work. But that doesn't sound right, so I think I'm not understanding.
It's also The Register, which is sort of known for these kinds of fear-mongering, clickbaity headlines. Just about anything can be misused for nefarious purposes. The implications of a sophisticated generative text program (simplifying it [a lot]) are rather obvious.
Now that LLaMa is out of the bag, there is no stopping the scammers. They can just run LLaMa on their own GPUs, fine tune it to whatever nefarious purpose they have, etc.
Not only can scammers do this, but they will prefer to do this purely for cost reasons. Scammers have to turn a profit, too. And nation state actors will do it for traceability reasons.
So the really bad actors won't use OpenAI anyway. So OpenAI should stop lobotomizing their service for the non-nefarious actors who are just trying to make the next generation of software.
Sure, go ahead and put subtle steganographic fingerprinting in the OpenAI output, so long as it doesn't affect quality. This will hinder the low-tech bad actors who can't run LLaMa themselves, and won't hurt the good actors.
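To make that concrete, here's a minimal sketch of how such a fingerprint could work, loosely following the published "green list" logit-biasing idea; the vocabulary size and bias strength are toy assumptions, and nothing here is OpenAI's actual method:

    import hashlib
    import random

    VOCAB_SIZE = 50_000   # toy vocabulary size (assumption)
    GREEN_FRACTION = 0.5  # half the vocab counts as "green" at each step
    BIAS = 2.0            # small logit boost, so output quality barely changes

    def green_list(prev_token_id: int) -> set[int]:
        # Derive a pseudo-random "green" subset of the vocabulary from the
        # previous token, so a detector can recompute the same subset later.
        seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16)
        rng = random.Random(seed)
        return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

    def bias_logits(logits: list[float], prev_token_id: int) -> list[float]:
        # Nudge sampling toward green tokens; a detector then counts how many
        # emitted tokens were green and flags statistically improbable runs.
        green = green_list(prev_token_id)
        return [l + BIAS if i in green else l for i, l in enumerate(logits)]

A low-tech bad actor pasting output straight from the API would carry that statistical signature; anyone running LLaMa locally, of course, wouldn't.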
But it's the most infuriating thing to try to build on the GPT-4 API when its "helpful assistant" chat-type mindset occasionally breaks the workflow.
They will be using this angle as a moat; models are, after all, just a big file of assorted weights. They are losing sleep trying to figure out how to keep their lead.
Looking at what happened with Stable Diffusion, there's no way this can be kept under wraps. Weights WILL leak and trigger a Cambrian explosion of open source innovation.
Their moat, if any, will be in the boring things: regulatory capture, integration, whatever you call Oracle's business model of "pay through the nose and blame us if something goes wrong". And at that point it'll be just any boring regular tech company.
A few days ago, before it was taken offline, I ran some tests with Stanford's Alpaca. If I can run a GPT-3-level LLM on my machine, I won't even bother looking at GPT-4. Yes, GPT-4 is impressive, but GPT-3 and Alpaca are already good enough IMO.
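Running one of these locally really is close to turnkey now. A rough sketch with the llama-cpp-python bindings, where the model path and the Alpaca prompt template are assumptions that depend on which checkpoint and quantization you downloaded:

    from llama_cpp import Llama

    # Hypothetical path to a locally downloaded, quantized Alpaca-style checkpoint.
    llm = Llama(model_path="./models/alpaca-7b-q4.bin")

    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\nExplain what a moat is in business terms.\n\n"
        "### Response:\n"
    )
    out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
    print(out["choices"][0]["text"])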
You're probably right. I was thinking of a bot I made (not for scams) that only replies once every hour or so, and for that it would probably be impractical, but if there were multiple people at a time it would make more sense.
Everyone has known this; I honestly feel like we're opening a big can of worms with AI at the moment that could be disastrous for various reasons: massive job displacement, the ability to fake events and facts on a whole new level, new scams that can fool even non-naive people. No one is going to stop it because everyone wants to race to get rich or price everyone else out.
I'm becoming increasingly wary and anxious, especially when we already seem to live in a pretty fragile world. People are already distrustful of everything, even our core institutions... add in AI's ability to take millions of jobs and create fiction from real events in a way that looks like reality, and we could be walking into a catastrophe. Having a bunch of newly unemployed people who don't know what to trust seems bad.
States and companies have been successfully creating fiction masquerading as reality for centuries. The only thing AI can change here is it makes it more commonly available. And that might end up reducing the effectiveness of this manipulation by raising people’s awareness.
I don't think that is going to be the case; more people today believe outright lies than just a few years ago. If you dress something up to look official (i.e. good branding, etc.), people will believe it. Even a fake Epstein flight log floats around and people believe it. Now imagine telling someone a video of a politician doing something that never happened isn't real. It might work at first, but then suddenly there's a real video and people will say it's fake, etc. There were massive account takedowns of fictitious people not that long ago, some with millions of followers. That was without AI; with AI it costs a fraction to do that on a much larger scale.
This is going to be heavily abused. I think people are absolutely naive if they think this will be a smooth ride into the future. AI is going to cause massive problems in the next decade, and the tech companies that create it will not suffer any consequences for it; they'll actually grow to be much more powerful. I don't know the answer, but worrying about it and acknowledging this is a likely outcome isn't a crazy idea. As I said, a bunch of upset and distrustful people who can no longer make ends meet because AI took their jobs is a great way to cause huge conflicts. We've lived in a relatively peaceful time and we take that for granted; it isn't always like that.
>I don't think that is going to be the case, more people today believe outright lies than just a few years ago.
Vastly fewer people believe in outright lies now than a couple of decades ago. Fringe theories of all kinds used to be commonly accepted, much more than they are now. Religiously motivated homophobia is a good example. And I’d claim that making it easier to spread falsehoods made it so, by training people to be less naive.
I see and hear way more crazy things now than at any point in my life, even from people I consider fairly smart. COVID made people distrustful of everything. So, I am going to strongly disagree. What you're talking about is social evolution, I am talking about people believing things such as the derailing in Ohio being on purpose to distract from x, y and z. People get pieces of information, and then hear it spun in a way that is completely false. There are millions of people that believe this stuff.
That is very new, at least in how widespread it is. There are people who think there is no war in Ukraine at all and this is all a "Wag the Dog" situation. If you aren't hearing these things, go to a local bar and bring it up. I guarantee you will find people you'd never suspect believing something absolutely devoid of reality, even in "blue" areas. You will be shocked at the volume of people who agree or chime in with something even crazier. It is something I've noticed starting around 2020, and it's very alarming.
With AI this will become even more confusing and widespread, as it gets easier and easier to fake things or come up with "facts" written in long form that aren't based in reality at all. Gen Z might become immune to this, but every generation before it? I don't think so. Now imagine millions of people who believe things like this and now have no job, or a lower-paying one if they do. It has the potential to be very bad. This might not happen in 2023 or even 2025, but at the speed AI is currently moving, it can't be more than a decade away from massive displacement and generated content realistic enough to threaten many threads of society.
When they released ChatGPT, they must have foreseen that, at the very least, it would forever pollute the Web and all human media with AI-generated content indistinguishable from human output. It would be naive to hand everyone a gun for next to nothing (on the pretext of helping them hunt) and expect nothing bad to happen. What is the point of a warning if they have not been more judicious with the weapon distribution?
AI is a general-purpose accelerant and force multiplier. It provides a mechanism for automating, and deploying at scale, attacks on our society against which we have little experience and almost no defense, nor even any good means of detection, at least not until post-mortem forensics.
The most obvious harmful avenue for this is venal criminality (which will be awful) but the real danger is in the political sphere.
There is already widespread use of AI for disinformation purposes in e.g. the Ukraine war.
I have been saying for the last N months or so that my immediate concern with AI is not AGI but augmented-intelligence applications which are leveraged enough to be destabilizing.
Specifically, I believe the 2024 election cycle in the US will be decided by AI.
Can't you just ask the "entity" chatting with you what they think of the movie that came out last week? If they're trying to convince you that you're a time traveler because that movie hasn't launched yet, they might just be an AI.
> a set of attacks on our society, against which we have little experience and almost no defense... In specific, I believe the 2024 election cycle in the US will be decided by AI
Hell no. You are probably thinking about things like 'misinformation'. Let me tell you that all such concerns are totally unfounded:
People just buy into whatever already fits their existing bias. Even if they are lies or proven lies. They don't care if something was a lie. If there is more stuff that confirms their existing bias, they will shout louder. If there is less, they will shout less. But they will still vote the same way.
So when the 2024 elections happen and there is a lot of misinformation, everyone will just buy into *whatever* confirms their existing bias - be it truth or be it a lie - and vote in the exact same manner they were going to before that misinformation.
While this reduces the concerns about misinformation and/or the effects of the AI, it also suggests that objective politics is difficult because people are not affected by facts, truth, or even their own prior experiences.
“Decided by” is a bit much for me but “greatly influenced by” seems like a done deal. Any digital political organization should be salivating over the potential scaled personalisation options across email, social media, and text messaging.
Any new technology has some benefits and some drawbacks.
Electric cars have no direct emissions but increase mining operations in certain parts of the world. Or they can be used to plow through a public gathering of people. Or with some rewiring to electrocute someone to death. If you think hard enough, you can find nefarious purposes for almost any household item that's made your life easier.
I think what's important to focus on are the "net" benefits but the outliers feed into our emotional response.
I don't focus on the "net" benefits as much as I used to, in large part due to FB. For the first several years, the "net" benefit of a connected digital world where you can communicate and connect with friends anywhere sounded so fantastic. But it's become almost consensus that the very real downsides, and their societal consequences, may not have been such a great deal after all.
The reason outliers feed our emotional response is a survival/skin-in-the-game mechanism. Parroting Taleb: all it takes is ruin once, and the game stops. It's not unreasonable to be hyper-focused on reducing long-tail risks with potentially catastrophic and unknown results. Caution and fear are warranted here.
It's going to be truly exciting when we can run something as powerful as GPT-4 on home hardware. I'll be so interested in trying it out with the brakes removed and no "Sorry Dave, I can't do that" messages.
As we continue to find ways to replicate these models and run them locally, OpenAI will be forced to find some additional value they can sell. For the time being, they can continue to pick low-hanging fruit and develop GPT-3.5 -> GPT-4 -> GPT-5, but at some point we'll probably hit a plateau. What's their business model when that happens? They have no moat.
I think they probably started out worried about safety and misinformation and nefarious actors, but I think this comment reveals their present intentions: "A thing that I do worry about is … we're not going to be the only creator of this technology. There will be other people who don't put some of the safety limits that we put on it."
Translation: "the competition will be able to duplicate our efforts, and we may not have any way to keep ahead of the pack. Please, pretty please, worry about safety so that we can sell you our safety tools."
Maybe their business model can continue to be based on a guarantee of at least medium-level quality with higher convenience? We can make running homebrewed models as turnkey as running any native app, but how many people still prefer just hitting a button on the remote from their couch to pick something from Netflix, instead of getting the same thing for free with a torrent client, or even in some cases just searching for it on YouTube?
So this could also be a ploy to invite big government regulation: as the first mover, they can help set the regulations and make it much harder for other, similar products.
The rise of the automobile did cause a whole lot of legal and social ramifications that ought to be a good warning.
Drunk driving was legal for years, I think it's a good thing to be wary of how things can change and to be ready, as a society, to deal with unintended consequences of this new technology, even if we assume it to be a worthwhile net good.
> Some people, however, are confused by the startup's behavior. If the technology is as dangerous as OpenAI claims, why is it readily available to anyone willing to pay for it? Still, Altman added: "A thing that I do worry about is … we're not going to be the only creator of this technology. There will be other people who don't put some of the safety limits that we put on it."
Translation: We are happy to ensure that it only produces right think, right thought, and right information. We will not allow double minus bad things to come from our LLM.
Today I discovered that even if I give the AI a safe prompt, 1 time in 4 it decides it generated some "bad" stuff: it tells me it can't comply, but it still takes my credits. This is in an API call. Why not use their fucking safety stuff to put the robot to work over and over again until it creates safe output?
Bastards. I suspect the issue was that the prompt contained the word "monkey", and since this is a US company they put in tons of safety filters to appease all the big groups (conservatives, religious, liberals) so nobody gets offended and causes a PR issue.
TL;DR: they are filtering what we can do, and when they make mistakes we still pay for it.
No, at least not intentionally. And as I said, the input passed the moderation checks we do before calling the completion APIs. The response was valid most of the time, but sometimes it would be moderated.
The stupid shit is that there is no flag for us to know the response was moderated, and the chatbot responds with a similar message but in different wording each time, so you have to guess that the response was moderated.
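For what it's worth, here's a minimal sketch of the guessing game that describes, assuming you're post-processing completion-API responses yourself; the phrase list is a heuristic assumption and will obviously miss rewordings:

    # Heuristic refusal detection: the API returns no "this was moderated"
    # flag, so the best you can do is pattern-match common refusal phrasings.
    # The marker list is an assumption; the model rewords refusals freely.
    REFUSAL_MARKERS = [
        "i can't comply",
        "i cannot comply",
        "i'm sorry, but",
        "as an ai language model",
    ]

    def looks_moderated(response_text: str) -> bool:
        """Guess whether a completion was silently moderated/refused."""
        text = response_text.lower()
        return any(marker in text for marker in REFUSAL_MARKERS)

    # Usage: if looks_moderated(out) is True, retry the call
    # (and keep a tally of the credits it wasted).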