Hacker News | bloppe's comments

Margins require a competitive edge. If productivity gains are spread throughout a competitive industry, margins will not get bigger; prices will go down.

That feels optimistic. This kind of naive free-market ideology rarely seems to manifest in lower prices.

"Building software" is a bit too general, though. I believe "Building little web apps for my son's school" has gotten at least 10x easier. But the needle has not moved much on building something like Notion, or Superhuman, or Vercel, or <insert name of any non-trivial project with more than 1000 man-hours of dev work>.

Even with perfect prompt engineering, context rot catches up to you eventually. Maybe a fundamental architecture breakthrough will change this, but I'm not holding my breath.


Yeah, that's not a comparison to the kinds of highly complex internal systems I worked with at Fortune 1xx companies, particularly the regulated ones (healthcare). The whole "my son's school" thing is very nice, and it's cool you can knock that out so fast, but it's nothing at all like the environments I worked in, particularly the politics.

While I hate defending GHA, the docs do include this:

- Using the commit SHA of a released action version is the safest for stability and security.

- If the action publishes major version tags, you should expect to receive critical fixes and security patches while still retaining compatibility. Note that this behavior is at the discretion of the action's author.

So you can basically implement your own lock file, although it doesn't work for transitive deps unless those are specified by SHA as well, which is out of your control. And there is an inherent trade-off: you have to keep abreast of critical security fixes and update your hashes yourself, which might count as a charitable explanation for why using hashes is less prevalent.
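For reference, the difference between tag pinning and SHA pinning looks like this in a workflow file (a minimal sketch; the SHA shown is illustrative, not a verified release commit):

```yaml
# .github/workflows/ci.yml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Tag pinning: mutable, the author (or an attacker) can move the tag.
      - uses: actions/checkout@v4
      # SHA pinning: immutable, but you must bump it yourself for security
      # fixes. A trailing comment with the tag it corresponds to helps review.
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
```

Tools like Dependabot can keep SHA pins updated, which softens the trade-off described above.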


On the other hand, this issue has been known to GitHub since shortly after Actions’ release[0]. They added some cya verbiage to their docs, but they never followed up by making version pinning meaningful.

Sure you can implement it yourself for direct dependencies and decide to only use direct dependencies that also use commit sha pinning, but most users don’t even realize it’s a problem to begin with. The users who know often don’t bother to use shas anyway.

Or GitHub could spend a little engineer time on a feasible lock file solution.

I say this as somebody who actually likes GitHub Actions and maintains a couple of somewhat well-used actions in my free time. I use sha pinning in my composite actions and encourage users to do the same when using them, but when I look at public repos using my actions it’s probably 90% using @v1, 9% @v1.2 and 1% using commit shas.

[0] Actions was the first Microsoft-led project at GitHub — from before the acquisition was even announced. It was a sign of things to come that something as basic as this was either not understood or swept under the rug to hit a deadline.


> it doesn't work for transitive deps unless those are specified by SHA as well, which is out of your control

So in other words, the strategy in the docs doesn't actually address the issue.


There's a repository setting you can enable to prevent actions from running unless they have their version pinned to a SHA digest. This setting applies transitively, so while you can't force your dependencies to use SHA pinning for their dependencies, you can block any workflow from running if it doesn't.

A lockfile would address this issue, with the added benefit that it would work

> Using the commit SHA of a released action version is the safest for stability and security.

This is not true for stability in practice: the action often depends on a specific Node version (which may not be supported by the runner at some point) and/or a versioned API that becomes unsupported. I've had better luck with @main.


Depends what you mean by stability. The post is complaining about the lack of lockfiles, and the problem you describe would also be an issue with lockfiles.

The underlying problem is that you can't keep using the same version, and one way it fails ruins the workaround for a different failure.

Using a SHA is an anti-pattern for me. By using one, you've modeled "I am getting this fixed, static thing," when in reality it is very far from that. I got bitten by it twice before I learned that you either have a lock file or you don't.

Ya, I'm not a big fan of that use case, and I agree it's a problem. But I hold Bitcoin as a hedge against inflation. It tends to do better than gold, and is way easier to transact and store.

To understand the initial arguments, look no further than the Genesis block, which includes this text:

The Times 03/Jan/2009 Chancellor on brink of second bailout for banks

In many rich countries, all signs point to nasty inflation for the foreseeable future. Bitcoin is inflation-proof because there is no central bank that can print more Bitcoin. Having a secure system with that property means accepting some tradeoffs in terms of usability and efficiency, compared to a centralized database.


> there is no central bank that can print more Bitcoin

It turns out this doesn't matter: you can't hear the inflation argument over the volatility. The amount of goods you can buy per Bitcoin changes dramatically on a month by month basis. It's just that everyone loved it while it was going up, but that's not actually guaranteed!

Also, you can't print more Bitcoin, but that doesn't matter: you can fork it (people have, BCH), or you can just endlessly spawn new token chains, or you can have things which both sides regard as abominations but are somehow immensely popular: stablecoins. These give you the legal stability of crypto tied to the price stability of the dollar. It turns out that what people actually wanted was several hundred billion dollars of virtual poker chips.


but there's no compounding interest. no dividend based on future cashflows.

holding stocks of a diverse index seems better on the long run, no?


Check out how $100 put in bank deposits or S&P500 have done versus gold over the last 50 years. You will find that these do not generate real returns when measured against sound currency either.

I'm not sure how you reached this conclusion. I did as you suggested. $100 invested 50 years ago into gold would be worth $3000 today. $100 invested into the S&P500 50 years ago would be worth $6870.

https://www.macrotrends.net/2324/sp-500-historical-chart-dat... https://www.macrotrends.net/1333/historical-gold-prices-100-...


$100 put into S&P 500 in 1975 would be about $7500 today, or about $1250 in 1975 dollars.

$100 put into gold in 1975 would be about $2600 today, or about $440 in 1975 dollars.

S&P 500 would have had 3x the returns of gold.


check your numbers with a purchase date of 1971.

Usually, yes. But the government can still tank the economy in all sorts of ways. It happens more frequently in some countries than others, but it can happen anywhere.

Indeed. The crypto crowd seems to assume that the options for your savings are cash or bank deposits, or crypto. That's nonsense. A balanced portfolio of stocks (with some bonds maybe to reduce vol and improve your Sharpe Ratio) handily beats inflation. Heck, even bonds alone have mostly had positive real yields.

This also supports and funds the productive economy, unlike crypto.


Not using your cash also helps the economy. More exchange of money means higher velocity, which means the value of that money decreases.

By not using your cash (by, for example, holding crypto) you're making the money that does circulate more valuable.


Until you hold a passport of one of the US's enemy states, of which there are plenty, and face a permanent risk of getting your account frozen and your money stolen.

Crypto doesn't have this issue.


If you only care about inflation, real-estate in desirable locations is also inflation-proof. You can't print more land in San Francisco, London or Hong Kong.

Clearly crypto is more accessible to more people than is San Francisco, London, or Hong Kong real estate.

Yeah but you can also have a disaster strike in that place (say, a nuclear accident) that will obliterate your real-estate value. Or general society changes that will make a city much less desirable (see the "rust belt"). Of course, nothing is without risk - so in that sense, it's not surprising that real-estate has risks. But that's what I wanted to underline, nothing is "inflation-proof". There's no guaranteed way to preserve wealth (much less increase it). None.

While there is no bulletproof way to preserve wealth, real estate is one of the most sound compared to the others. A nuclear accident can be insured against, and general social decline happens over many years or even decades, which gives plenty of time to react.

Way less liquidity and way more administrative overhead, but sure

It doesn't run the JVM. It's an ahead-of-time compiler that converts Java bytecode to wasm.

Oh, if you want a full fat JVM, then you want CheerpJ https://cheerpjdemos.leaningtech.com/SwingDemo.html#demo

Takes a few seconds longer to load because it loads all of Java Spring, but it still performs just fine on my phone (though the lack of on-screen keyboard activation makes it rather unfortunate for use in modern web apps).


If you have a pension, you're an investor in PE. If you live in a country with a sovereign wealth fund, you're a beneficiary of PE. If you're connected to a school with an endowment, a lot of that money ends up in PE funds, and can fund lots of research and student resources.

So ya, I'd agree the PE is rarely good for anyone but the investors, but you'd be surprised how many people are investors without realizing it.


If all of those things had never invested a cent in private equity funds that buy up existing companies to turn the screws on their customers, and had put the money into new business creation instead, they wouldn't be making any less money, and the whole world would be better off, including the investors themselves in their roles as customers and employees.

But in your example, it sounds like representative democracy is a choice freely taken. If people actually want representatives to worry about the details of policy for them, then that is real democracy, because the alternative is a form of government that the people don't actually want.

Rust compiles to LLVM IR. I'm pretty surprised that building this transpiler was considered a better use of time than writing an LLVM backend for whatever "weird embedded target" might need this.

In fact there used to be a C backend for LLVM, but it was removed in LLVM 3.1 [0]. JuliaHub has resurrected it as a third-party backend [1], though I have no idea if there is any interest in upstreaming the work from either end.

[0]: https://releases.llvm.org/3.1/docs/ReleaseNotes.html

[1]: https://releases.llvm.org/3.1/docs/ReleaseNotes.html


The refs are duplicated.

Ack, my bad. Can't edit the comment any more, unfortunately. Second ref is supposed to be to https://github.com/JuliaHubOSS/llvm-cbe

Thanks

Sometimes you may need to deal with odd proprietary processors with limited and flawed knowledge about them.

For example, in the 37c3 talk "Breaking "DRM" in Polish trains" by the folks from Dragon Sector (I highly recommend watching it), they needed to reverse-engineer Tricore binaries; however, they found that the Ghidra implementation had bugs.

As for the PLCs, the IEC 61131-3 functional block diagrams transpile to C code which then compiles to Tricore binaries using an obscure GCC fork. Not saying that anyone would want to write Rust code for PLCs, but this is not uncommon in the world of embedded.


.. there is some humor in the string

"Breaking "DRM" in Polish trains"


Why?

If you write a library in Rust and want to make that library available to other language ecosystems, not requiring a Rust compiler toolchain for using the library is a pretty big plus - instead create a C source distribution of the library, basically using C as the cross-platform intermediate format.

I think you may not be familiar with how embedded development works.

Most teams who write code for embedded devices (especially the weird devices at issue here) don't have the hardware knowledge, time, or contractual ability to write their own compiler backend. They're almost always stuck with the compiler the manufacturer decided to give them.


You're right. But I'm also surprised any device manufacturer would think it's a better use of their time to ship a bespoke C compiler rather than an LLVM backend that would allow a lot more languages to be built against their ISA, making it more valuable.

But ya, I believe this project exists for a meaningful purpose, I'm just surprised.


> But I'm also surprised any device manufacturer would think it's a better use of their time to ship a bespoke C compiler rather than an LLVM backend that would allow a lot more languages to be built against their ISA, making it more valuable.

They aren't usually building a compiler from scratch; they modify their existing compiler[1] for their new hardware, which is a fraction of the effort required to build a new compiler or modify an LLVM backend.

Unless it's a completely new ISA, they'll spend an hour or less just adding in the new emit code for only the changes in the ISA.

----------------------------------

[1] Usually just the previous gcc that they used, which is why in 2022 I was working on brand new devices that came with a customised gcc 4.something


I think the approach would not be to alter the manufacturer's compiler directly, but to run your Rust code through a separate Rust-to-C compiler then feed that output into the compiler the manufacturer gave you.

There are many kinds of IRs in compilers - I'm not familiar with how Rust works, but for example GCC has an intermediate representation called GENERIC, which is a C-like language that preserves the scoping and loops and branches of the inputted C code.

Lower level representations also exist - since C has goto, you can pretty much turn any SSA IR to C, but the end result won't be readable.


Rustc supports backends other than LLVM btw

And someone has made (https://github.com/FractalFir/rustc_codegen_clr) a backend targeting C (alongside .NET/CIL).

This effectively writes a backend for all the weird targets at once, since pretty much everything out there has C compiler support.

I have never seen any guides or blogs about writing an LLVM backend.

And considering how long compiling LLVM takes, it's reasonable to go for other options.


This would cover many different weird embedded targets, and a lot of those targets can be an utter pain to target with a compiler.

I agree about some criticisms of the framework. I think they could do away with the plug modules and just go all in on usb-c. I don't mind the occasional dongle for HDMI. I also would prefer a thinner screen bezel, even if that means it's not swappable either.

But having easy access to internal hardware for upgrades is pretty huge. Rather than blowing 1-2k on a new machine every few years, it's just $200-500 for more RAM and a better CPU (assuming prices go back to normal in a reasonable amount of time)


> I think they could do away with the plug modules and just go all in on usb-c. I don't mind the occasional dongle for HDMI.

Strong agree. After all, their plug modules are really just dongles that are integrated into the body, which makes them worse IMO. More expensive, model-specific, etc.


And a major consumer of space, because they have to accommodate bigger ports even if they're only doing USB-C pass-through.

Agree; the choosing ports thing is a gimmick. These days, one video/charging-capable USB-C on each side, plus a USB-A and maybe an SD slot somewhere are enough for most use cases.

Frameworks (laptops) are all in on usb-c. That's the only port they come with. The modules are just type-c dongles. And they are too small - the official ethernet one hangs outside the body.
