Hacker News | germandiago's comments

Reads to me like the author pushing their own interests in the article.

> I wonder how much of this is simply needing to adapt one's workflows to models as they evolve and how much of this is actual degradation of the model,

I also wonder how much people are willing to adapt to non-reliability out of laziness instead of, at some point, properly taking the lead and solving the problem when you have the knowledge + reliable resources.

It seems to me, the way you phrase it, that anything a human comes up with when coding must go through an LLM. There are times it helps and tasks it performs well, but I have also quite often found tasks where, had I done them myself in the first place, I would have skipped a lot of confusion, back and forth, and time wasting, and would have ended up with better, simpler code.


> It seems to me, the way you phrase it, that anything a human comes up with when coding must go through an LLM.

This seems like a creative interpretation. I never said anything of the sort.


My bet: LLMs will never be creative and will never be reliable.

It is a matter of paradigm.

Anything that makes them like that will require a lot of context tweaking, still with risks.

So for me, AI is a tool that accelerates "subworkflows" but adds review time and maintenance burden, and endangers a good-enough knowledge of a system to the point that it can become unmanageable.

Also, code is a liability. That is what they do the most: generate lots and lots of code.

So IMHO, and unless something changes a lot, good LLMs will have relatively bounded areas where they perform reasonably; outside of those, expect the unexpected.


it won't be creative because it's a transformer, it's like a big query engine.

it's a tool like everything else we've gotten before, but admittedly a much more major one

but "creativity" must come from either its training data (already widely known) or from the prompts (i.e. mostly human sources)


We don't even know what 'creativity' is, and most humans I know are unable to be creative even when compelled to be.

AI is 'creative enough' - whether we call it 'synthetic creativity' or whatever, it definitely can explore enough combinations and permutations that it's suitably novel. Maybe it won't produce 'deeply original works' - but it'll be good enough 99.99% of the time.

The reliability issue is real.

It may not be solvable at the level of LLM.

Right now everything is LLM-driven; maybe in a few years it will be more agentically driven, where the LLM is used as 'compute' and we can pave over the 'unreliability'.

For example, the AI is really good when it has a lot of context and can identify a narrow issue.

It gets bad during action and context-rot.

We can overcome a lot of this with a lot more token usage.

Imagine a situation where we use 1000x more tokens, and we have 2 layers of abstraction running the LLMs.

We're running 64K computers today, things change with 1G of RAM.

But yes - limitations will remain.


Maybe I do not have a good definition for it.

But what I see again and again in LLMs is a lot of combinations of possible solutions that are somewhere around the internet (because that data was put in). Nothing disruptive, nothing thought out like an experienced human in a specific topic would do. Besides all the mistakes/hallucinations.


Yes, LLMs have a very aggressive regression towards the mean - that's probably an essential quality of them.

They are after all, pattern matching.

A lot of humans have difficulty with the very reality that they are in fact biological machines, and that most of what we do is the same thing.

The funny thing is although I think we are 'metaphysically special' in our expression, we are also 'mostly just a bag of neurons'.

It's not 'natural' for AI to be creative but if you want it to be, it's relatively easy for it to explore things if you prod it to.


> A lot of humans have difficulty with the very reality that they are in fact biological machines, and that most of what we do is the same thing.

I think we are far ahead of this "mix and match". A human can be much, much more unpredictable than these LLMs in the thinking process, if only because of looking at a much bigger context - contexts that are even outside of the theoretical area of expertise where you are searching for a solution.

Good solutions from humans are potentially much more disruptive.


AI has all of human knowledge and 100x more than that of just 'stuff' baked right in, in pre-training, before a single token of 'context'.

It has way more 'general inherent knowledge' than any human, just as a starting point.


Yet they never give you replies like: oh, you see how dolphins move through the water taking advantage of sea currents, if you are talking about boats and speed.

What they will do is to find all the solutions someone did and mix and match around, in a mediocre way of approaching the problem - much more like a search engine with mix and match than like thinking out of the box or specifically for your situation. (That is also difficult to do anyway, because there will always be some detail missing in the context, and if you really had to give all that context each time, dumping it from your brain, then you would not use it as fast anymore.) Humans do this infinitely better. At least nowadays.

Now you will tell me that the info is there. So you can bias LLMs to think in more (or less) disruptive ways.

Then now your job is to tweak the LLMs until it behaves exactly how you want. But that is nearly impossible for every situation, because what you want is that it behaves in the way you want depending on the context, not a predefined way all the time.

At that point I wonder if it is better to burn all your time tweaking and asking alternative LLMs questions that, anyway, are not guaranteed to be reliable, or to just keep learning about the domain yourself, absorbing real knowledge (and not losing that knowledge by replacing it with machines) instead of just playing at tweaking. It is just stupid to burn several hours building an expert whose output you cannot check for real facts, instead of using that time to really learn about the problem itself.

This is a trade-off and I think LLMs are good for stimulating human thinking fast. But not better at thinking or reasoning or any of that. And if you just rely on them, the only thing you will end up being professional at is prompting, which a 16-year-old untrained person can do almost as well as any of us.

LLMs can look better if you have no idea of the topic you talk about. However, when you go and check, maybe the LLM hallucinated 10 or 15% of what it said.

So you cannot rely on it anyway. I still use them. But with a lot of care.

Great for scaffolding. Bad at anything that deviates from the average task.


First - I'm doubting your assumptions about "What they will do is to find all the solutions someone did and mix and match".

That's not quite how AI works.

Second - You'll have to provide some comparable reference for how 'humans' come up with creative solutions.

Remember - as a 'starting point' AI has 'all of human knowledge' ingested, accessible instantly. Everything except for a few contemporary events.

That's an interesting advantage.


> First - I'm doubting your assumptions about "What they will do is to find all the solutions someone did and mix and match".

I never, ever got from an LLM a solution that I could not have come up with myself or that was not available almost verbatim on the internet (take this last one with a grain of salt; we know how they can combine and fake it, but essentially the solutions look like templates from existing things, often hallucinating things that do not exist or cannot be done, inventing parameter names for APIs that do not exist, etc).

When I give some extra thought to a problem (almost 20 years in the software business), I think the solutions I come up with are often simpler and less convoluted, and when I analyze LLM output it gives you a lot of extra code that is not even needed, as if it were guessing even when you ask for something narrow. Well, guessing is what they are actually doing, via interpolation.

This makes them useful for "bulky", run fast, first approach problems but the cost later is on you: maintenance, understanding, modifying, etc.


I think the terminology is just dogshit in this area. LLMs are great semantic searchers and can reason decently well - I'm using them to self teach a lot of fields. But I inevitably reach a point where I come up with some new thoughts and it's not capable of keeping up and I start going to what real people are saying right now, today, and trust the LLM less and instead go to primary sources and real people. But I would have never had the time, money, or access to expertise without the LLM.

Constantly worrying, "is this a superset? Is this a superset?" is exhausting. Just use the damn tool; stop arguing about whether this LLM can get all possible out-of-distribution things that you would care about or whatever. If it sucks, don't make excuses for it, it sucks. We don't give Einstein a pass for saying dumb shit either, and the LLM ain't no Einstein.

If there's one thing to learn from philosophy, it's that asking the question often smuggles in the answer. Ask "is it possible to make an unconstrained deity?" And you get arguments about God.


Do they reason? There was a video by an AI researcher showing that they do not reason but actually come up with the result first and then try to invent "reasoning" to match it.

I mean humans do that too, and I don't think it's very unjustified. The "we deduce from a deep base premise P down a chain of inferences" picture is extremely incomplete and has been challenged all over the place - by normal people, by analytic and continental philosophers, by science itself, etc.

Not trying to say that LLM's are equivalent to humans but that the concept of reasoning is undefined.

And the fact that their performance does increase when using test-time compute is empirical evidence that they're doing something that increases their performance on tasks that we consider would require reasoning. As to what that is, we don't know.


But humans verify things. AI just fools you, and I would say it is the biggest problem I have with AIs.

They give me stuff that I do not know whether to trust or not and what surprises I will find down the way later.

So now my task is to review everything and remove cruft. It starts to compete against investing my time to deep-think and do it thoughtfully from the get-go, coming up with something simpler, with less code and/or that I understand better.


I mean yeah ultimately it's a tool and I've even leaned off of AI recently for coding because it was exhausting dealing with all its hallucinations

This is a great reason to choose an alternative.

So here I go: if it is so stupid, why is it not done yet?

Try not to blame anyone. Do it rationally if you can, from your message I understand your opinion.

I say this as a person that has lived in a developing country the last 15 years. It is not that simple IMHO...


The economics only changed recently and infrastructure lasts a long time. It's the same reason EVs make up a far larger share of new car sales than of cars overall: EVs sucked 20+ years ago, yet there are a lot of 20+ year old cars on the road.

The US stopped building coal power plants over a decade ago but we still have a lot of them. Meanwhile we’ve mostly been building solar, which eventually means we’ll have a mostly solar grid but that’s still decades away.


> The economics only changed recently and infrastructure lasts a long time

This needs investment also - an investment poorer people cannot or do not want to make. It is reasonable that when someone gives up a couple of things because that person is rich (rich as in a person in the developed world), the sacrifice is more or less acceptable.

Now change the environment and consider that these sacrifices are way worse there. Even worse than that: it has more implications in conservative cultures where, whether you like it or not, showing "muscle" (wealth) is socially important for reaching other social layers that will make their lives easier.

But giving up those things is probably a very bad choice for their living.

America cannot be compared to Southeast Asia economically speaking, for example. So the comparison with the coal plants is not even close.

A salary in Vietnam is maybe 15 million VND for many people. That is around 600 USD, and with it you can hardly live in some areas.

Just my two cents.


Unlike the US, Vietnam is a net importer of fuel. It imports over 40 million tons of coal per year:

https://statbase.org/data/vnm-coal-imports/

It also started importing liquid natural gas in 2023.

But it has abundant sunlight, access to low cost Chinese solar panels that will produce electricity for decades instead of being burned once, and a substantial domestic photovoltaic manufacturing industry of its own.

"Renewable Energy Investments in Vietnam in 2024 – Asia’s Next Clean Energy Powerhouse" (June 2024)

https://energytracker.asia/renewable-energy-investments-in-v...

In 2014, the share of renewable energy in Vietnam was just 0.32%. In 2015, only 4 megawatts (MW) of installed solar capacity for power generation was available. However, within five years, investment in solar energy, for example, soared.

As of 2020, Vietnam had over 7.4 gigawatts (GW) of rooftop solar power connected to the national grid. These renewable energy numbers surpassed all expectations. It marked a 25-fold increase in installed capacity compared to 2019’s figures.

In 2021, the data showed that Vietnam now has 16.5 GW of solar power. This was accompanied by its green energy counterpart wind at 11.8 GW. A further 6.6 GW is expected in late 2021 or 2022. Ambitiously, the government plans to further bolster this by adding 12 GW of onshore and offshore wind by 2025.
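To put those quoted figures in perspective, here is a quick compound-growth sketch. It uses only the two endpoints quoted above (4 MW in 2015, 16.5 GW in 2021); treating them as directly comparable is an assumption, and a tiny starting base inflates the percentage:

```python
# Rough CAGR of Vietnam's installed solar capacity, 2015 -> 2021,
# from the endpoint figures quoted in the article excerpt above.
start_mw = 4        # 2015 installed solar capacity (MW)
end_mw = 16_500     # 2021 installed solar capacity (16.5 GW, in MW)
years = 6

cagr = (end_mw / start_mw) ** (1 / years) - 1
print(f"{cagr:.0%} per year")  # roughly 300% per year
```

Even allowing for the low-base effect, that is an extraordinary ramp.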

These growth rates are actually much faster than growth rates in the US.


Add cheap labor to the equation.

In developed countries 20-50% of the cost of roof top solar is labor.


> This needs investment also. An investment poorer people cannot or do not want to do.

The general premise of investments is that you end up with fewer resources by not doing them.

It now costs less to install a new solar or wind farm than to continue using an existing coal plant, much less if you were considering building a new coal plant, and that includes the cost of capital, i.e. the interest you have to pay to borrow the money for the up-front investment.

Poorer countries would be at a slight disadvantage if they have to pay higher than average interest rates to borrow money, but they also have the countervailing advantage of having lower labor and real estate costs and the net result is that it still doesn't make sense for anybody to continue to use coal for any longer than it takes to build the replacement.

It just takes more than zero days to replace all existing infrastructure.


That's why it will require a functional government who can use taxes responsibly to make the technology affordable to everyone. The US had a pretty good start until one man decided to stop and try to reverse any progress made.

Not one man, he's financially backed by the wealthiest people in the world and politically supported by millions.

Acting like this blunder is some random stroke of bad luck isn't telling the whole story.


Trump's animus against wind in particular is definitely specific to the man. He was annoyed by a wind farm in Scotland. Trump of course thinks he's one of those old fashioned kings† (and the US has been annoyingly willing to go along with that, how are those "checks and balances" and your "co-equal branches of government" working out for you?) and so he thought the local government would go along with his whims and prohibit the wind farm but they did not.

I'm sure there's some degree of vested interest in protecting fossil energy because it means very concentrated profits in a way that renewables do not. Sunlight isn't owned by anybody (modulo Simpsons references) and nor is the Wind, but I'd expect that, if that was all it was, to manifest as diverting funding to transitional work, stuff that keeps $$$ in the right men's pockets, rather than trying to do a King Canute. Stuff like paying a wind farm not to be constructed feels very Trump-specific.

† The British even know what you do with kings who refuse to stop breaking the law. See Charles the First - though that's technically the English, I suspect in this respect the Scots can follow along. If the King won't follow the Law, kill the King, problem solved.


Trump’s campaign had financial backing from a number of oil and gas industry investors. Following the money in this case is not very difficult. He’s just a useful idiot, the whole industry put him there and are profiting at the expense of the rest of us.

But why should American taxpayers be responsible for making the technology affordable for everyone? Why shouldn't Europe or China be expected to shoulder this financial burden?

EDIT: I think people are misunderstanding my response. I fully support local subsidies for solar and renewables. My question is why my tax dollars should go toward making it affordable for everyone, including non-Americans. Either market dynamics will handle that naturally, artificially (i.e., China's manufacturing subsidies), or else it is up to the local government to address the shortfall.


Isn't the American complaint that China did exactly that by subsidizing its solar industry and flooding the global market with panels cheaper than Americans could make?

[1] https://www.bbc.com/news/business-20247734 (2012)


China is; its subsidies have resulted in a glut of cheap solar panel production which the world has benefited from. European countries subsidise their own citizens' switch to solar; the US no longer does at the federal level.

Responding to your edit: A wider version of the same argument might apply. The US has (historically) benefited considerably from global stability and this does seem to help with that because if basically everybody has energy independence and the overheating doesn't get much worse they might chill the fuck out?

Look at it this way: Benefiting everyone is a side effect of benefiting American taxpayers.

Or do you think that US federal investment in solar and battery technology would be bad for the American taxpayer?


The transition is happening rapidly in Pakistan: https://www.theguardian.com/environment/2026/mar/17/pakistan...

We haven't been building much battery storage to go along with that solar power. Perhaps we will eventually, but until that actually happens the base load requirement represents a hard limit on the amount of solar generation capacity that the grid can handle.

We started scaling batteries after solar (because the technology reached the point where they were profitable after solar)... but they're being installed at scale now, and at a rapidly increasing rate.

Batteries provided 42.8% of California's power at 7pm a few days ago (which came across my social media feed as a new record) [1]. And it wasn't a particularly short peak, they stayed above 20% of the power for 3 hours and 40 minutes. It's a non-trivial amount of dispatchable power.

[1] https://www.gridstatus.io/charts/fuel-mix?iso=caiso&date=202...

Batteries are a form of dispatchable power, not "base load". There is no "base load" requirement. Base load is simply a marketing term for power production that cannot (economically) follow the demand curve and therefore must be supplemented by a form of dispatchable power, like gas peaker plants, or batteries. "Base load" power is quite similar to solar in that regard. The term makes sense if you have a cheap high-capital low running-cost source of power (like nuclear was supposed to be, though it failed on the cheap front) where you install as much of it as you can use constantly and then follow the demand curve with a different source of more expensive dispatchable power. That's not the reality we find ourselves in unless you happen to live near hydro.


I think the mysterious "Misc" electricity which sometimes appears at dawn and then dusk in the UK is likewise BESS†. The raw data doesn't seem to have labels for BESS, a lot of it was oriented around how electricity works twenty five years ago, there's an 850MW power plant here, and one there and one there, and we measure those. So it can cope with a wind farm - say 500MW or 1GW coming ashore somewhere, but not really with the idea that there's 10GW of solar just scattered all over the place on a bright summer's day and the batteries might similarly be too much?

† My thinking is: Dawn because in a few hours the solar comes online, you can refill those batteries at whatever price that is, so sell what you have now for the dawn price, and Dusk because the solar is mostly gone but people are running ovens and so on to make food in the evening, so you can sell into that market. But I might be seeing what I expect not reality.


Thanks for the [1] link, I hadn't seen that before.

> We haven't been building much battery storage to go along with that solar power

That too has pretty recently changed. Even my home state of Idaho is deploying pretty big batteries. The hardware takes almost no time to deploy; it's all the permitting and public comment at this point that takes the time.

Batteries have gotten so cheap that the other electronics and equipment are at this point bigger drivers of the cost of installation.

Here's an 800MWh station that's being built in my city [1].

I think people are just generally stuck with the perception of where things are currently at. They are thinking of batteries and solar like it's 2010 or even 2000. But a lot has changed very rapidly even since 2018.

[1] https://www.idahopower.com/energy-environment/energy/energy-...


> Batteries have gotten so cheap

Any pointers for a regular Joe Schmoe homeowner looking for a backup battery? The Tesla Powerwall stuff and similar costs are halfway to six figures.


For full house backup, it sort of sucks right now. They are all charging a premium over what you can otherwise get if it's not specifically a whole home product.

What I've done and would suggest is looking for battery banks for big-ticket important items that you'd want to stay on anyway in the event of an outage. A lot of those can function as a UPS. You can get a 1kWh battery pack for $400 right now; a comparable home battery backup is charging $1300 per kWh of installed storage.

I currently have a 2kWh battery pack for my computer/server/tv and a 500Wh pack for my fridge. Works great and it's pretty reasonably priced. The 500Wh gives my fridge an extra 6 hours of runtime after a power outage.
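Those runtime numbers check out with back-of-the-envelope math. A sketch, assuming an average fridge draw of about 70 W (fridges cycle, so the average sits well below the compressor's rated power - an assumption) and roughly 85% inverter efficiency:

```python
def runtime_hours(pack_wh: float, avg_load_w: float, inverter_eff: float = 0.85) -> float:
    """Estimated runtime of a battery pack under a constant average load."""
    return pack_wh * inverter_eff / avg_load_w

# The 500 Wh fridge pack mentioned above:
print(round(runtime_hours(500, 70), 1))  # about 6 hours
```

The same function says the 2 kWh pack would carry a ~250 W computer/server/TV load for around 7 hours.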

If I wanted to power shift, I have smart switches setup so I can toggle when I want to.


In the EU €1800 gets you a 10kWh battery (ex install)

That's on the high side, I would guess. Depending on what brand you want, you can get 10kWh of LFP for under a grand right now in the US.

With a BMS and inverter? What brand should I be looking at?

You will get a battery and BMS for that price. Decent inverters are expensive, however, so you won't get a whole 10kWh setup with appropriately sized inverter for under US$2K. Probably twice that.

I hesitate to offer any brand advice, because that is very situational, depends on what you're after, what experience level you have, what trade-offs you want to make, etc.


I don't know if the market has improved but when I looked at this a year or two ago I concluded that the consumer market here was utter crap with hugely inflated prices.

The cheapest per-kWh way I could find to buy a home battery (that didn't involve DIY stuff) was literally to buy an EV with an inverter... by a factor of at least two... I ended up not buying one.

Unfortunately, cheap batteries don't instantly translate into reputable companies packaging them in cheap, high-quality packages for consumers.


Becoming completely dependent on imported tech for such basic needs is a BAD idea. The West cannot outcompete China on cost for these products at this time. And before you say subsidies, let me remind you that we are all going broke.

Once you have PV panels, they (on average) last 20+ years - that's not being dependant, particularly when PV panels can be mass produced anywhere.

(They do not use rare earths - inverters use trace amounts.)

China cornering rare earths (for now) is an "own goal" by every country that chose to let China (and to a lesser degree Malaysia) take the hit on the toxic byproducts of processing concentrates.

The US is easily capable of producing its own rare earths; it's certainly not been backwards in asking Australia to do that for it.


20 years is not great for those. Extreme weather can also shred them. It's fine to have some, but I sure as hell don't want to be dependent on that.

Gotcha, you prefer to be daily dependent on fossil fuel delivery rather than get new panels every 20 years, particularly given you're in a country seemingly incapable of manufacture and minerals processing.

That's certainly a position.


The US and the rest of the West are capable of manufacturing. You just said yourself they can be made "anywhere" so make up your mind. What I think is that manufacturing is not competitive in the US or the West as a whole because of wage requirements and monetary exchange rates, and additionally because we operate a mostly free market and don't penalize foreign state-subsidized products hard enough to make domestic manufacture make sense.

Replacing the solar panels every 20 years at minimum would mean that the panels would always be getting refreshed. Bro we have roads and bridges 50 years past end of life, in need of rebuilding. We can't afford this fragile power grid rebuild that is completely dependent on foreign suppliers. Sorry. Take your snark and shove it.


There was a test on 30 year old panels and they only lost 20% capacity.

That is only marginally better in the scheme of things. They want to take farms for food out of commission in some places to replace with fragile and unreliable solar systems. Imagine installing this stuff on a large scale. If you plan to replace all the panels after 30 years and incur no losses from high winds, hail, vandals, etc., then you would need to overbuild the system by 20% at minimum. This is assuming modern panels are as durable as those old panels from the study too. 30 years ago, solar panels were built in the West and cost 10x as much as the ones we have now. So it seems reasonable to assume that brand new panels might not have the same characteristics, and be less durable. It would make a lot more sense to just put these panels on roofs and in parking lots where the real estate is already consumed, and the power can be a backup source instead of a grid-scale vulnerability.
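The overbuild arithmetic can be made precise. Taking the 20% degradation figure from the test mentioned upthread at face value, fully offsetting a 20% capacity loss actually requires building 25% extra, since overbuild = 1/(1 - loss) - 1:

```python
degradation = 0.20  # fraction of capacity lost over ~30 years (figure from the thread)
overbuild = 1 / (1 - degradation) - 1
print(f"{overbuild:.0%}")  # 25% extra capacity to end at the original output
```

So "20% at minimum" understates it slightly, before even counting weather and vandalism losses.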

That's a lot of guessing though; newer panels might well be more durable and longer lasting. Even if you lose 20% in 20-30 years, you don't need to replace the panel unless the cost of replacing it can be recouped within a reasonable number of years. As long as there is space for more panels you don't need to replace existing ones unless they stop working, so capacity just increases for decades until you reach some saturation point.

The real estate would be more valuable than the panels, presumably. So it's not like they can just keep expanding forever. As for the vulnerabilities, this is not based on guessing. We've produced solar panels in the West even recently. They are not competitive with China on cost. They are actually fragile. We are facing geopolitical challenges.

> There was a test on 30 year old panels and they only lost 20%.

We (literally, I know where some are) have 30 year old panels and 95 year old men, their existence doesn't negate an average.

Also, PV panels are kinda non uniform in performance, long term studies show that one fifth of them perform 1.5 times worse than the rest.

Either way, 20-year lifetimes where you build once and reap the rewards for 20 years are sufficient to put to rest the kind of argument being made about dependencies.

That's more than enough time for any G20 country to be making its own PV production chain.


>Either way, 20 year lifetimes where you build once and reap the rewards for 20 years is sufficient to put to rest the kind of argument being made about dependencies.

It's not sufficient. We have had plenty of time to start making all of the critical things we import, and that never happened. In most cases, these things used to be made in the West in the first place. Just because you CAN make a thing doesn't mean it makes sense. The economics of solar would be totally different if you had to pay 5x more for solar panels to replace Chinese-subsidized slave-labor-backed imports.

There are other arguments to be made against mega-scale solar. Don't get me wrong, I love the idea of solar because it is on a small scale one of the best ways for an individual to get a bit of electricity without reliance on fuel supplies. But it has a lot of disadvantages at scale which make it unsuitable for many regions. Hail, snow, dust, vandals, and strategic vulnerability all make it look precarious. The supply chain concern is that much worse.


The economics of petroleum would be totally different if you had to pay 5x more for crude to replace Strait of Hormuz-blocked imports. Weird that it took way less than 20 years for that to hit.

Perhaps, but solar panels will NOT get cheaper as a result of oil becoming more expensive, even if the price hike is somehow permanent. They are manufactured through an energy-intensive process. Unlike the solar manufacturing cost problem, there are solutions to the oil shortage issue for the rest of our lifetimes, most likely. We depend on oil and gas for our existence in many ways, not just for energy.

In the very long run we need to find alternative sources of energy but I think solar is not going to be that solution. Solar is most likely going to be a fringe backup alternative to nuclear power. Batteries have tremendous disadvantages. In the long run some kind of biofuel or synthetic hydrocarbon might win over batteries.

>Weird that it took way less than 20 years for that to hit.

No, it's not weird. It hit in the 70s and is an actual avoidable problem. Don't kick the hornets nest. The issue of the Strait of Hormuz was known to nearly everyone for decades, and we got a bunch of leaders all over the West who are collectively batshit crazy.


Obviously, money is a factor. But you cannot discount political resistance. If the government in charge is dead set on promoting fossil fuels over renewables, it will never happen. Even if you get a government led by the most gung-ho green-friendly administration, in a democratic system those opposing can stall any plans to go green. If you live under a less democratic government where the leadership decides it's going green, you're going green.

1. Solar panels need a huge capital expenditure up front.

2. Wind power works better for farmers and provides a smaller footprint. Drive on I-80 in Iowa on a clear night and you'll see the wind farms blink their red lights in the distance. Farmers can lease their land for wind turbines, and the generation companies take on the regulatory / capital / political risks, etc.

3. Farming is more or less free market based, and often farmers can let their grain sit in a silo until the price is optimal for them to sell. But for a given location, there's only one power company that you can use, and typically the power companies don't like people putting solar panels on the grid. In many states (like in Idaho) there's regulatory capture or weird politics preventing people putting solar panels up on their own land. (Again Idaho)

As a side note, agriculture uses up lots of water in deserts (more so than people), so it seems like in desert spaces like Idaho, solar would make a lot more sense than agriculture would. And we should move the agriculture to where the water naturally falls from the skies.


There was also a huge move by farmers towards growing corn and selling it for ethanol because E-85 was seen as some future fuel. Many farmers I know went all in and switched from regional crops (this was in ND), such as sugar beets, soybeans, and spring wheat, to corn, thinking this was some kind of energy gold rush.

Then economics, lack of infrastructure and incentives buried it in a few years. Farmers were left holding the bag. Many were not happy they had made a huge move into this new "renewable" energy, only to get burned in the end. The same farmers I know have scoffed at windmills and solar farms.

E-85 really soured a lot of farmers on using their land for something that might not pan out. The ones I know went back to growing what sells and grows best in the market. Trying to tell a farmer to put solar panels on the land where he grows the food that feeds his family is going to be a tough sell now.


> As a side note, agriculture uses up lots of water in deserts (more so than people), so it seems like in desert spaces like Idaho, solar would make a lot more sense than agriculture would. And we should move the agriculture to where the water naturally falls from the skies.

The problem is that in many of those places where enough water naturally falls from the sky the soil and/or the weather isn't as good for growing food.

It is generally much easier to move water to a low water place that has great soil and/or weather than it is to move soil or weather to a high water place that is missing good soil or weather, and so here we are.


In California, PG&E charges you for putting solar on their grid and they'll pay you a penny for your extra electricity.

> why it is not done yet?

Whoa lots to unpack here. I'll summarize:

- It is already happening to some extent (it's cheaper)

- Try explaining to farmers to do away with their livelihood and retrain them to running a solar farm

- Entrenched bureaucracy and gov subsidies


People, especially recent American leaders, do not make rational decisions.

They also have goals other than generating energy effectively


Because externalities screw with incentives.

Theft is stupid from a broad view. It causes more harm to the victim than benefit to the perpetrator. Everyone would be better off if everyone stopped stealing and we provided the same level of benefit to would-be perpetrators in a more efficient form.

Why hasn't theft stopped yet? Because it's extremely difficult to do from a systems level. In principle it's simple: just don't steal. Convincing everyone to do it is hard.

Likewise, fossil fuels have horrible externalities that kill thousands if not millions of people per year. We'd be better off if we greatly cut back our usage and replaced it with cleaner sources of energy. But the people benefitting from any given use of fossil fuels and the people paying the costs tend not to be the same people. This makes it extremely difficult to organize a halt.


It is happening. It takes time to build and it only became absurdly cheap in the past few years. But it keeps getting cheaper and better (batteries too for anyone who wants to bring that up).

Based on your response timestamp I will conclude you didn't watch the video. He "does it rationally" like you requested. You said "try not to blame anyone" so if you'd rather not hear about the people who actually are to blame for this situation, then skip the last 30 minutes of the video.

I had the same question, and after reading about it I found there are multiple layers stacked on each other.

Existing plants are built to run 40-60 years; retiring them early creates "stranded assets", and pension funds fight hard to avoid that. The renewable projects waiting for permits exceed total existing capacity; the bottleneck is not tech, but locality.

I found this visual schematic helpful - https://vectree.io/c/why-energy-transitions-are-slow-grid-in...


Time, infrastructure changes take decades

It is being done, just not here.

I think you cannot imagine in how many ways 2. can break...

That is not even half realistic. Are you going to port all that code out there (autotools, cmake, scons, meson, bazel, waf...) to a "true" build system?

Even the idea is crazy. What Conan does is much more sensible: provide a layer independent of the build system (plus a way to consume packages and, if you want, some predefined "profiles" such as debug, etc.), leave it half-open for extensions, and let existing tools talk through that communication protocol.

That is much more realistic and you have way more chances of having a full ecosystem to consume.

Also, no one needs to port a full build system or move away from perfectly working build systems.


Are the perfectly working build systems in the room with us now? Cmake and Conan ain’t it.

> That is not even half realistic.

uv is an existence proof that when you make something that doesn’t suck ass the entire industry will very very rapidly converge.

Claude makes converting any particular configuration from one system to another very very very tractable.


Reflection was a desperate need: a useful and difficult-to-design feature.

There are also things like template for or inplace_vector. I think it has useful things. Just not all things are useful to everyone.


A bureaucratic call from the top is not the way to do it.

Just beat it. Ah, not so easy huh? Libraries, ecosystem, real use, continuous improvements.

Even if it does not look so "clean".

Just beat it and I will move to the next language. I am still waiting.


But compile-time processing is certainly useful in a performance-oriented language.

And not only for performance but also for thread safety (eliminates initialization races, for example, for non-trivial objects).

Rust is just less powerful. For example, you cannot design something that comes even close to expression template libraries.


> And not only for performance but also for thread safety

This is already built-in to the language as a facet of the affine type system. I'm curious as to how familiar you actually are with Rust?

> Rust is just less powerful.

On the contrary. Zig and C++ have nothing even remotely close to proc macros. And both languages have to defer things like thread safety into haphazard metaprogramming instead of baking them into the language as a basic semantic guarantee. That's not a good thing.


Writing general generic code without repetition is one thing where Rust, without specialization, fails. It does not have variadics or compile-time metaprogramming anywhere near as powerful. It does not come even remotely close.

Proc macros are basically plugins. I do not think this is even part of the "language" as such. It is just plugging new stuff into the compiler.


> For example you cannot design something that comes evwn close to expression templates libraries.

You keep saying this and it's still wrong. Rust is quite capable of expression templates, as its iterator adapters prove. What it isn't capable of (yet) is specialization, which is an orthogonal feature.


Rust cannot take a const function and evaluate that into the argument of a const generic or a proc macro. As far as I can tell, the reasons are deeply fundamental to the architecture of rustc. It's difficult to express HOW FUNDAMENTAL this is to strongly typed zero overhead abstractions, and we see where Rust is lacking here in cases like `Option` and bitset implementations.

> Rust cannot take a const function and evaluate that into the argument of a const generic

Assuming I'm interpreting what you're saying here correctly, this seems wrong? For example, this compiles [0]:

    const fn foo(n: usize) -> usize {
        n + 1
    }

    fn bar<const N: usize>() -> usize {
        N + 1
    }

    pub fn baz() -> usize {
        bar::<{foo(0)}>()
    }
In any case, I'm a little confused how this is relevant to what I said?

[0]: https://rust.godbolt.org/z/rrE1Wrx36


> Rust is quite capable of expression templates, as its iterator adapters prove.

AFAIU iterator adapters are not quite what expression templates are, because they rely on compiler optimizations rather than on a built-in feature of the language that enables you to do this without relying on the compiler pipeline.


I had always thought expression templates at the very least needed the optimizer to inline/flatten the tree of function calls that are built up. For instance, for something like x + y * z I'd expect an expression template type like sum<vector, product<vector, vector>> where sum would effectively have:

    vector l;
    product& r;
    auto operator[](size_t i) {
        return l[i] + r[i];
    }
And then product<vector, vector> would effectively have:

    vector l;
    vector r;
    auto operator[](size_t i) {
        return l[i] * r[i];
    }
That would require the optimizer to inline the latter into the former to end up with a single expression, though. Is there a different way to express this that doesn't rely on the optimizer for inlining?

Expression templates do not rely on the optimizer, since you're not dealing with the computations directly but rather with expressions (nodes) through which you defer the computation until the very last moment (when you have fully built an expression of expressions, basically almost an AST). This guarantees that you get zero cost when you really need it. What you're describing is something akin to copy elision and function folding through inlining, which is pretty much basic in any C++ compiler and happens automatically without special care.
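A minimal sketch of that "the type is almost an AST" point, with hypothetical node types and no arithmetic performed at all: building `x + y * z` only constructs a tree, and its structure is fully visible in the static type, independent of any optimization level.

```cpp
#include <type_traits>

// Hypothetical expression-template nodes: the *type* of the expression
// encodes the tree; nothing is computed when the tree is built.
struct Vec {};
template<class L, class R> struct Sum  { L l; R r; };
template<class L, class R> struct Prod { L l; R r; };

template<class L, class R> Sum<L, R>  operator+(L l, R r) { return {l, r}; }
template<class L, class R> Prod<L, R> operator*(L l, R r) { return {l, r}; }

// No arithmetic happens here; we only build a tree of nodes.
using Tree = decltype(Vec{} + Vec{} * Vec{});
static_assert(std::is_same_v<Tree, Sum<Vec, Prod<Vec, Vec>>>,
              "the expression's structure is captured in its type");
```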

> since you're not dealing with the computations directly but rather expressions (nodes) through which you are deferring the computation part until the very last moment (when you have a fully built an expression of expressions, basically almost an AST).

Right, I understand that. What is not exactly clear to me is how you get from the tree of deferred expressions to the "flat" optimized expression without involving the optimizer.

Take something like the above example for instance - w = x + y * z for vectors w/x/y/z. How do you get from that to effectively

    for (size_t i = 0; i < w.size(); ++i) {
        w[i] = x[i] + y[i] * z[i];
    }
without involving the optimizer at all?

The example is false because that's not how you would write an expression template for the given computation, so the question of how the optimizer is not involved is also not set in the correct context, and I can't give you an answer to it. Of course the optimizer is generally going to be involved, as it is for all code and not just expression templates, but expression templates do not require the optimizer in the way you're trying to suggest. Expression templates do not rely on O1, O2 or O3 levels being set - they work the same way at O0 too, and that may be the hint you were looking for.

> The example is false because that's not how you would write an expression template for given computation

OK, so how would you write an expression template for the given computation, then?

> Expression templates do not rely on O1, O2 or O3 levels being set - they work the same way in O0 too and that may be the hint you were looking for.

This claim confuses me given how expression templates seem to work in practice?

For example, consider Todd Veldhuizen's 1994 paper introducing expression templates [0]. If you take the examples linked at the top of the page and plug them into Godbolt [1] (with slight modifications to isolate the actual work of interest) you can see that with -O0 you get calls to overloaded operators instead of the nice flattened/unrolled/optimized operations you get with -O1.

You see something similar with Eigen [2] - you get function calls to "raw" expression template internals with -O0, and you need to enable the optimizer to get unrolled/flattened/etc. operations.

Similar thing yet again with Blaze [3].

At least to me, it looks like expression templates produce quite different outputs when the optimizer is enabled vs. disabled, and the -O0 outputs very much don't resemble the manually-unrolled/flattened-like output one might expect (and arguably gets with optimizations enabled). Did all of these get expression templates wrong as well?

[0]: https://web.archive.org/web/20050210090012/http://osl.iu.edu...

[1]: https://cpp.godbolt.org/z/Pdcqdrobo

[2]: https://cpp.godbolt.org/z/3x69scorG

[3]: https://cpp.godbolt.org/z/7vh7KMsnv


Look, I have just completed work on a high performance serialization library which avoids computing heavy expressions and temporary allocations, all by using expression templates, and no, optimization levels are not needed. The code works as advertised at O0 - that's the whole deal around it. If you have a genuine question you should ask one, but please do not disguise it so that it only goes to prove your point. I am not that naive. All I can say is that your understanding of expression templates is not complete and therefore you draw incorrect conclusions. The silly example you provided shows that you don't understand how expression template code looks, and yet you're trying to prove your point over and over again. Also, most of the time I am writing my comments on my mobile, so I understand that my responses sometimes appear too blunt, but in any case I am obviously not going to write, run or check code as if I were at work. My comments here are not work; I am not here to win arguments but, most of the time, to learn from other people's experiences, and sometimes to dispute conclusions based on those experiences too. If you don't believe me, or you believe expression templates work differently, then so be it.

> If you have a genuine question you should ask one but please do not disguise so that it only goes to prove your point.

I think my question is pretty simple: "How does an optimizer-independent expression template implementation work?" Evidently the resources I've found so far describe "optimizer-dependent expression templates", and apparently none of the "expression template" implementations I've had reason to look at disabused me of that notion.

> My comments here is not work, and I am not here to win arguments, but most of the time learn from other people's experiences, and sometimes dispute conclusions based on those experiences too.

Sure, and I like to learn as well from the more knowledgeable/experienced folk here, but as much as I want to do so here I'm finding it difficult since there's precious little for me to go off of beyond basically just being told I'm wrong.

> If you don't believe me, or you believe expression templates work differently, then so be it.

I want to understand how you understand expression templates, but between the above and not being able to find useful examples of your description of expression templates I'm at a bit of a loss.


Expression templates do AST manipulation of expressions at compile time. Let's say you have a complex matrix expression that naively maps to multiple BLAS operations but can be reduced to a single BLAS call. With expression templates you can translate one to the other, this is a static manipulation that does not depend on compiler level. What does depend on the compiler is whether the incidental trivial function calls to operators gets optimized away or not. But, especially with large matrices, the BLAS call will dominate anyway, so the optimization level shouldn't matter.

Of course in many cases the optimization level does matter: if you are optimizing small vector operations to SIMD, inlining will still be important.


> With expression templates you can translate one to the other, this is a static manipulation that does not depend on compiler level.

How does that work on an implementation level? First thing that comes to mind is specialization, but I wouldn't be surprised if it were something else.

> What does depend on the compiler is whether the incidental trivial function calls to operators gets optimized away or not.

> Of course in many cases the optimization level does matter: if you are optimizing small vector operators to simd inlining will still be important.

Perhaps this is the source of my confusion; my uses of expression templates so far have generally been "simpler" ones which rely on the optimizer to unravel things. I haven't been exposed much to the kind of matrix/BLAS-related scenarios you describe.


Partial specialization specifically. Match some patterns and convert them to something else. For example:

  #include <cmath>   // for fma
  using std::fma;

  struct F { double x; };
  enum Op { Add, Mul };
  auto eval(F x) { return x.x; }
  template<class L, class R, Op op> struct Expr;
  template<class L, class R> struct Expr<L,R,Add>{  L l; R r; 
    friend auto eval(Expr self) { return eval(self.l) + eval(self.r); } };
  template<class L, class R> struct Expr<L,R,Mul>{  L l; R r; 
    friend auto eval(Expr self) { return eval(self.l) * eval(self.r); } };
  template<class L, class R, class R2> struct Expr<Expr<L, R, Mul>, R2, Add>{   Expr<L,R, Mul> l; R2 r; 
    friend auto eval(Expr self) { return fma(eval(self.l.l), eval(self.l.r), eval(self.r));}};
  template<class L, class R>
  auto operator +(L l, R r) { return Expr<L, R, Add>{l, r}; } 
  template<class L, class R>
  auto operator *(L l, R r) { return Expr<L, R, Mul>{l, r}; } 

  double optimized(F x, F y, F z) { return eval(x * y + z); }
  double non_optimized(F x, F y, F z) { return eval(x + y * z); }
Optimized always generates a call to fma, non-optimized does not. Use -O1 to see the difference (will inline trivial functions, but will not do other optimizations). -O0 also generates the fma, but it is lost in the noise.

The magic happens by specifically matching the pattern Expr<Expr<L, R, Mul>, R2, Add>; try to add a rule to optimize x+y*z as well.


Hrm, OK, that makes sense. Thanks for taking the time to explain! Guessing optimizing x+y*z would entail something similar to the third eval() definition but with Expr<L, Expr<L2, R2, Mul>, Add> instead.

I think at this point I can see how my initial assertion was wrong - specialization isn't fully orthogonal to expression templates, as the former is needed for some of the latter's use cases.

Does make me wonder how far one could get with rustc's internal specialization attributes...
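For what it's worth, the guessed mirror rule does seem to work. A self-contained sketch based on the code above, with the extra `Expr<L, Expr<L2, R2, Mul>, Add>` specialization added (note a caveat: with both fma rules in place, an expression like `(a*b) + (c*d)` matches both and becomes ambiguous, so it would need yet another, more specialized rule):

```cpp
#include <cmath>
using std::fma;

struct F { double x; };
enum Op { Add, Mul };
auto eval(F x) { return x.x; }

template<class L, class R, Op op> struct Expr;
template<class L, class R> struct Expr<L, R, Add> { L l; R r;
    friend auto eval(Expr self) { return eval(self.l) + eval(self.r); } };
template<class L, class R> struct Expr<L, R, Mul> { L l; R r;
    friend auto eval(Expr self) { return eval(self.l) * eval(self.r); } };

// Original rule: x*y + z  ->  fma(x, y, z)
template<class L, class R, class R2> struct Expr<Expr<L, R, Mul>, R2, Add> {
    Expr<L, R, Mul> l; R2 r;
    friend auto eval(Expr self) {
        return fma(eval(self.l.l), eval(self.l.r), eval(self.r)); } };

// Guessed mirror rule: x + y*z  ->  fma(y, z, x)
template<class L, class L2, class R2> struct Expr<L, Expr<L2, R2, Mul>, Add> {
    L l; Expr<L2, R2, Mul> r;
    friend auto eval(Expr self) {
        return fma(eval(self.r.l), eval(self.r.r), eval(self.l)); } };

template<class L, class R> auto operator+(L l, R r) { return Expr<L, R, Add>{l, r}; }
template<class L, class R> auto operator*(L l, R r) { return Expr<L, R, Mul>{l, r}; }
```

Both `x*y + z` and `x + y*z` now route through fma while plain sums and products fall back to the generic specializations.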

