Apparently, just like OP, you didn't read the article either. Just because the app doesn't ask for the permission in the manifest doesn't mean the permission can't be acquired at runtime. It's very publicly documented [0].
> HRC's secret email server and the leaked Kash Patel emails couldn't be more different.
But it is literally no different from what the Trump administration did [0] after all of their finger-pointing. Idiocracy runs deep across both political camps.
I think this is the root of why people defend AI in some circumstances. They feel a give-for-get type of relationship where the AI continuously (and often incorrectly) reinforces them. And so they enjoy it and subconsciously want to defend that "friend". No different than defending a friend you inherently know may be off base.
I don’t know, I think it has to do with people using AI for completely different reasons.
Using AI for coding is different than using it for art generation which is different than using it for conversation. I think many people feel some uses are good and some are bad.
I'm seeing people that are technically savvy defend mediocre code and consumption-based output (think technical briefs and reports). When the flaws in the output are highlighted, in many cases they're brushed off as "good enough" or "nobody will care / notice".
I think LLMs, and more aptly SLMs, have use cases. I enjoy using these tools to make quick work of relatively frequent but time-consuming tasks and to iterate faster. But I'm always correcting and checking, and outside of simple, focused scripts, very rarely does any LLM truly get it right every time. Has it gotten better? For sure. Will it keep getting better? Probably. But right now we seem to be topping the "peak of inflated expectations", and LLMs aren't getting much more efficient at the frontier providers. In fact, if you listen to Altman, it seems the only reason he would be asking for so much capital and so many finite resources is that he knows if he controls those tangible things he will lock out competition. But I'm hopeful that it spurs real innovation in SLMs that are truly useful, dependable, and can be relied on more like traditional, deterministic software.
AI for art is dead. It's got some mediocre use cases but true art will not be generated by LLMs in our time. It's ultimately an amalgamation of existing art. I know the argument over what is novel or not keeps being rehashed, but we're not seeing truly new styles of art out of Nano Banana and the like. Coding is the same thing, only we're seeing a resurgence of obviously flawed software being pushed into production on the weekly. And as for conversational AI... Well, that reeks of the worst version of social media we could ever have dreamt. Nobody should trust any provider with personal conversations and we'll keep seeing these models show how truly dystopian they can be over the coming years as leaks and breaches expose how these conversations are being bought and sold to the highest bidders to extract more money and control over its users.
They all have a common thread: deep-rooted flaws that cannot be contained by the traditional fences of software. And their guardrails are just that: small barriers that can easily be broken, intentionally or unintentionally.
I am curious to know how you are coming to these conclusions. I have been a computer programmer for over 30 years, and I have pretty solid evidence that I am good at it.
I have been using AI to write some very capable, well written, well tested, novel software projects.
Now, is it easy to use coding AIs to generate really bad code? Yes. Does that mean it is impossible to get them to generate good code? No, I don't think it is.
Coding with AIs is just like any other type of coding, it takes skill and practice. Not everyone is able to create great code with AI, because you need to use it in the correct way.
There are a lot of techniques that people have been discovering to get the AI to output better code. It is a very active field, and people are experimenting and coming up with frameworks and strategies to improve the quality. That work is paying dividends.
You can write very bad code with any language or tool. AI doesn't (yet!) allow non-coders to create great code, but it certainly can create great code in the hands of experts.
> I am curious to know how you are coming to these conclusions.
What I have stated is what I have seen first hand and continue to see. They aren't conclusions, they are observations.
>I have been a computer programmer for over 30 years, and I have pretty solid evidence that I am good at it.
OK.
> I have been using AI to write some very capable, well written, well tested, novel software projects
That's great, I'm sure this is all true with the exception of "novel software projects". Any examples?
> Now, is it easy to use coding AIs to generate really bad code? Yes. Does that mean it is impossible to get them to generate good code? No, I don't think it is.
Sure. This is basically what I already said.
> Coding with AIs is just like any other type of coding, it takes skill and practice. Not everyone is able to create great code with AI, because you need to use it in the correct way.
There is no one correct way because LLMs are architecturally non-deterministic. You don't know how the LLM will respond to any given prompt.
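The non-determinism point can be sketched with a toy sampler. This is purely illustrative, not any provider's actual decoder; the tokens and scores are made up:

```python
import math
import random

# Toy illustration, not a real LLM: sampling-based decoding draws the next
# token from a probability distribution, so the same prompt can yield
# different outputs on different runs.
def sample_next_token(logits, temperature=1.0, rng=random):
    m = max(logits.values())
    # Softmax with temperature (max subtracted for numerical stability)
    weights = {t: math.exp((s - m) / temperature) for t, s in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # guard against float rounding

logits = {"secure": 1.2, "fast": 1.1, "simple": 1.0}
# As temperature approaches 0 this collapses to the deterministic argmax
# ("secure"); at temperature 1.0 any of the three tokens can come back.
```
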
> There are a lot of techniques that people have been discovering to get the AI to output better code. It is a very active field, and people are experimenting and coming up with frameworks and strategies to improve the quality. That work is paying dividends.
I never said LLMs didn't have a level of value, but it's not paying dividends if you take into account the true cost of LLMs. Frontier models are heavily subsidized at today's prices. Do you think Claude Code is worth $2k per month? $20k? Is exponentially increasing energy prices for people who don't care about software another one of these "dividends"? How do you weigh finite-resource utilization against the generation of AI images? I'm curious.
> You can write very bad code with any language or tool. AI doesn't (yet!) allow non-coders to create great code, but it certainly can create great code in the hands of experts.
OK. But so then you're saying that this is a tool you need to have expertise in to use safely and effectively. Basically what I've already stated.
> "...great code in the hands of experts".
Anyone with the Internet who is an expert can already create great code. So your argument is that it saves experts time, and you agree that AI can create poor code and insecure systems when left to "non-experts". But the part you're leaving out is that the AI won't tell the "non-experts" anything of the sort. How... novel!
It's not a power move, it's a cartel, and they've done this before. Gamers Nexus did a fantastic piece on how where we're at today is very similar to the DRAM price fixing and market manipulation of just a couple decades ago [0]. This is the big players taking full advantage of an opportunity for profit.
The best part is that they'll get popped because of it and have zero clue. Anyone building on any frontier provider, with little background in software, is creating all kinds of new liabilities that didn't exist before.
In a school district where I live the IT department developed a password distribution app using Gemini on Google App Script (they didn't even need this part), sent out links with B64 encoded JSON that included: student name, student email, parent email and student password. Yet, when I found it and told them all the ways that it was technically a breach in our state they ran to their 2-bit "cyber security experts" and "legal". They were far more concerned with CYA than understanding the hole they dug themselves. And all of the advice they got back was that it wasn't a breach. They claimed their DPA with Google protected them. I explained how email works and they just ignored me, likely because in our state they are bound by GDPA and won't ever engage in a legitimate conversation via email.
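For anyone unfamiliar with why this is a problem: base64 is an encoding, not encryption. A hypothetical sketch of the kind of link described above (field names are my guess, not the district's actual schema):

```python
import base64
import json

# Hypothetical payload of the kind described above; names are illustrative.
payload = {
    "student_name": "Jane Doe",
    "student_email": "jdoe@example.edu",
    "parent_email": "parent@example.com",
    "password": "Spring2024!",
}
token = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
link = "https://reset.example.com/?data=" + token

# Base64 is trivially reversible: anyone who sees the link in transit,
# in a mail archive, or in a forwarded email recovers the plaintext.
recovered = json.loads(base64.urlsafe_b64decode(token))
```
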
The kicker here is they pay for an IDP with built-in mechanisms for password resets (that was the reason for building this: to reset students' passwords). One of their cyber security "experts" (a lone guy with zero credentials from what I found) told them that password resets using the IDP were "not recommended". When pressed on that, they were, again, silent.
LLMs are creating a huge mess for people now empowered to go well beyond their capabilities and understanding. It's a second coming of the golden age of shitty software that's riddled with even the most basic of security flaws.
I'm just going to keep building software mostly traditionally, while using "AI" to help me research things quicker (might as well use it while it's here), survive the shitpocalypse, and then laugh as traditional-minded developers become a scarce sought-after resource again.
Either way, the instability of this industry due to the insane amounts of cargo culting every time <insert big thing> comes along has made me really question whether I want to stick around.
> Either way, the instability of this industry due to the insane amounts of cargo culting every time <insert big thing> comes along has made me really question whether I want to stick around.
I think this is where a lot of freelance contractors could pivot to - basically "last mile" coding, where the LLM does the front end work, and then high-hourly-pay engineers come in and fix it. It'd still be cheaper than a lot of the niche industry software, which is usually pretty bad.
You should find a new dentist if that's their response. There's no reason to take out a healthy tooth unless it's impacting your quality of life or there are other issues.
Sounds like your dentist is chasing $ over sense. My better half is a DDS, and they see quite a few patients where others in the field put their opinions and revenue over their Hippocratic oath.
In our district phones are banned during the day. Most students don't care about their phones, what they care about is FOMO. And so the ban does great to not only reduce distractions but also the cognitive load of constantly wondering what they're missing.
I think there are many examples throughout history of better performing options not displacing counterparts. I think, really, the only "laughable" thing here is the ignorance on display that's riding atop the arrogance.
Rust is great. But AI isn't displacing Python anytime soon.
What sucks more is that Astral's been bought by a company with such a horrible leader at the helm.
I think most homelabbers default to Caddy and/or Traefik these days. Nginx is still around with projects like NPM (the other NPM), but Caddy and Traefik are far more capable.
DevOpsToolbox did a great video on many of the reasons why Caddy is so great (including performance) [0]. I think the only downside with Caddy right now is still how plugins work. Beyond that, however, it's either Caddy or Traefik depending on my use case. Traefik is so easy to plug in and forget about, and Caddy just has a ton of flexibility and ease of setup for quick solutions.
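As a sense of that "ease of setup": a complete Caddyfile for proxying a single service can be as small as this (hostname and port are placeholders); for a public hostname, Caddy obtains and renews the TLS certificate automatically:

```caddyfile
app.example.com {
    reverse_proxy localhost:8080
}
```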
I agree with you that they're more or less equal. I don't like the idea of my reverse proxy dealing with letsencrypt for me, personally, but that's just a preference.
One tricky thing about nginx though, from the "If is evil" nginx wiki [0]:
> The if directive is part of the rewrite module which evaluates instructions imperatively. On the other hand, NGINX configuration in general is declarative. At some point due to user demand, an attempt was made to enable some non-rewrite directives inside if, and this led to the situation we have now.
I use nginx for homelab things because my use-cases are simple, but I've run into issues at work with nginx in the past because of the above.
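To make the pitfall concrete, here is the classic example adapted from that same wiki page: both conditions are true, yet only the header from the last `if` block is emitted, because `if` creates an implicit nested location rather than evaluating declaratively:

```nginx
location /only-one-if {
    set $true 1;

    if ($true) {
        add_header X-First 1;
    }
    if ($true) {
        add_header X-Second 2;
    }

    # Despite both conditions matching, only X-Second is sent.
    return 204;
}
```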
I'm not sure why Apache is so unpopular; it can also function as a reverse proxy and doesn't have the weird configuration issues nginx has.
Some people take this way too far. For instance, I've seen places compiling (end-of-life) modsec support into nginx instead of using the webserver it was built for.
Just as one small example: if you're deploying in k8s and want the configuration external to Nginx, you want built-in certificate provisioning, and you need to run middleware that can easily be routed in-config...
Traefik is far more capable, for example. If all you're doing is serving pages, sure.
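A rough sketch of what that k8s setup might look like with Traefik's CRDs (the names, middleware, and certResolver are placeholders I made up; the apiVersion also varies by Traefik version):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: app
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`app.example.com`)
      kind: Rule
      middlewares:
        - name: auth-headers   # hypothetical middleware, defined separately
      services:
        - name: app-svc
          port: 80
  tls:
    certResolver: letsencrypt  # resolver set up in Traefik's static config
```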
So, no. Not a "hallucination".
[0] https://documentation.onesignal.com/docs/en/location-opt-in-...