bob1029's comments | Hacker News


I'd say Unreal > Unity > Godot for feature space.

For performance/practicality, it's Unity > Godot > Unreal. Building something in Unreal that simply runs with ultra-low frame latency is possible, but the way the ecosystem is structured, you will be permanently burdened by things like temporal anti-aliasing well before you find your bearings. Unreal and Unity are at odds on MSAA support (Unity has it, Unreal doesn't). MSAA can be king if you have an art team with game dev knowledge from 10+ years ago. Unreal is basically TAA or bust.
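
For what it's worth, a minimal sketch of what forcing MSAA looks like in Unity's built-in render pipeline (the ForceMsaa component name is mine; URP instead configures the sample count on the pipeline asset):

  // Minimal sketch: force 4x MSAA via the built-in pipeline's quality settings.
  using UnityEngine;

  public class ForceMsaa : MonoBehaviour
  {
      void Awake()
      {
          // Valid values are 0 (off), 2, 4, or 8 samples.
          QualitySettings.antiAliasing = 4;
      }
  }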


> Switch

Good luck shipping arbitrary binaries to this target. The most productive way for an indie to ship to Nintendo in 2025 is to create the game in Unity and build via the special Nintendo version of the toolchain.

How long do we think it would take to fully penetrate all of these pipeline stages with rust? Particularly Nintendo, who famously adopts the latest technology trends on day 1. Do we think it's even worthwhile to create, locate, awaken and then fight this dragon? C# with incremental GC seems to be more than sufficient for a vast majority of titles today.


It's not only Unity; Capcom uses its own fork of .NET in its game engine.

Devil May Cry for the PlayStation 5 shipped with it.


> how much gas they need as an input to generate the 42MW

If you don't have a pipeline, the lower bound is something like 10 LNG tanker trucks per day for each turbine at 42 MW. Natural gas is incredibly efficient to transport in liquid form, so you could theoretically get away with this for a little while.
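
Rough back-of-envelope behind that figure (the truck capacity, LNG energy density, and turbine efficiency below are my assumed round numbers):

  // Back-of-envelope sketch; every constant here is an assumed round number.
  double electricalMw = 42.0;
  double turbineEfficiency = 0.40;                      // simple-cycle gas turbine, roughly
  double thermalMw = electricalMw / turbineEfficiency;  // ~105 MW of gas input

  double gjPerDay = thermalMw * 86_400 / 1_000;         // MJ/s over a day -> ~9,100 GJ/day
  double gjPerTruck = 40 * 22.2;                        // ~40 m^3 LNG at ~22.2 GJ/m^3 -> ~890 GJ

  double trucksPerDay = gjPerDay / gjPerTruck;          // ~10 trucks/day per 42 MW turbine
  System.Console.WriteLine($"~{trucksPerDay:F0} tanker trucks per day");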


The question I'm asking is slightly tangential to how to feed the required gas. It's "How many MW of gas do I need to feed in to get one MW of electricity?" And they're pointedly avoiding any statement about this.

> Why is it so much easier to build the pipelines than to bring in electric lines?

It's not necessarily easier to do one or the other. It's about which one is faster.


In Texas the electric grid is regulated by ERCOT, and gas pipelines are regulated (by accident of history) by the Texas Railroad Commission. ERCOT has a big network of producers who put energy into the grid; local companies like CenterPoint Energy distribute it to customer sites; then retail electric companies sell the power and pay a line-usage fee to the line owner (which tends to be a fixed monthly cost to the customer, listed separately on the bill from other fees or usage charges). The TRC deals with companies that own their own pipelines and bill the end customer directly.

A lot of the natural gas in the US is in Texas, and a lot of it is flared while pumping out crude. Putting data centers on turbines near the extraction fields out in the Permian Basin makes sense for power. You can build short pipelines or hook into the ones already there.


A mystery meat lithium ion battery pack from Amazon is probably more hazardous than this.

The grocery business has razor-thin margins. There is no dry sponge remaining to absorb this kind of massive fixed cost; the cost structure is already highly variable.

I think scaling up would be the only way out of this problem. Scaling down only makes it worse.


Not true for us, at least. Well, kind of - the scale I'm mentioning is required if you're doing your own tech like we do. We develop all our core tech: the website, the logistics operation automation, the last-mile app and scheduling. If we can do that profitably, what do you think will happen when a company like ours develops a few FCs of similar scale using the same technology?

The margins are thin, but not as razor-thin as you might think. The grocery stores have a lot of overhead that we don't. Additionally, people realize that not only is that the case, but that they also save on their own costs - driving to the store isn't free, let alone the time you spend, which is massively cut down.


Or you sell every bit of data you collect.

We don't need to do that at all. Essentially zero. Whether we'll do it in the future - I don't know. It's not really under my control, but right now we can be profitable without needing it. And we're price-competitive with the large grocery stores.

My entire motivation for using GAs is to get away from backpropagation. When you aren't constrained by linearity and the chain rule of calculus, you can approach problems very differently.

For example, evolving program tapes is not something you can backpropagate through. Having a symbolic, procedural representation of something as effective as ChatGPT currently is would be a holy grail in many contexts.
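
Not an implementation of anything above, just a toy sketch of the shape: a (1+1) hill-climber mutating a byte tape, with no gradients anywhere. The target-matching fitness is a placeholder; in a real program-tape setup, Score would execute the tape in an interpreter and measure its behavior.

  // Toy (1+1) evolutionary loop over a byte "tape". No derivatives, no chain rule:
  // the only requirement is a Score() you can evaluate, however discontinuous.
  using System;

  var rng = new Random(1);
  var target = System.Text.Encoding.ASCII.GetBytes("hello tape");   // placeholder fitness target
  var parent = new byte[target.Length];
  rng.NextBytes(parent);

  static int Score(byte[] candidate, byte[] goal)
  {
      int hits = 0;
      for (int i = 0; i < candidate.Length; i++)
          if (candidate[i] == goal[i]) hits++;
      return hits;                                                  // higher is better
  }

  int best = Score(parent, target);
  for (int gen = 0; gen < 100_000 && best < target.Length; gen++)
  {
      var child = (byte[])parent.Clone();
      child[rng.Next(child.Length)] = (byte)rng.Next(256);          // single point mutation
      int score = Score(child, target);
      if (score >= best) { parent = child; best = score; }          // keep neutral or better
  }

  Console.WriteLine(System.Text.Encoding.ASCII.GetString(parent));  // "hello tape" after convergence

Swap the placeholder Score for "run the tape and measure output quality" and nothing else in the loop changes, which is the appeal.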


In context of B2B SaaS products that require a high degree of customization per client, I think there could be an argument for this figure.

The biggest bottleneck I have seen is converting the requirements into code fast enough to prove to the customer that they didn't give us the right/sufficient requirements. Up until recently, you had to avoid spending time on code if you thought the requirements were bad. Throwing away 2+ weeks of work on ambiguity is a terrible time.

Today, you could hypothetically get lucky on a single prompt and be ~99% of the way there in one shot. Even if that other 1% sucks to clean up, imagine if it was enough to get the final polished requirements out of the customer. You could crap out an 80% prototype in the time it takes you to complete one daily standup call. Is the fact that it's only 80% there bad? I don't think so in this context. Handing a customer something that almost works is much more productive than fucking around with design documents and ensuring requirements are perfectly polished to developer preferences. A slightly wrong thing gets you the exact answer a lot faster than nothing at all.


I have my best successes by keeping things constrained to method-level generation. Most of the things I dump into ChatGPT look like this:

  public static double ScoreItem(Span<byte> candidate, Span<byte> target)
  {
     //TODO: Return the normalized Levenshtein distance between the 2 byte sequences.
     //... any additional edge cases here ...
  }
I think generating more than one method at a time is playing with fire. Individual methods can be generated by the LLM and tested in isolation. You can incrementally build up and trust your understanding of the problem space by going a little bit slower. If the LLM is operating over a whole set of methods at once, it is like starting over each time you have to iterate.
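
For concreteness, one plausible shape of what comes back for a stub like that (an untested illustration of normalized Levenshtein distance, not a canonical answer):

  public static double ScoreItem(Span<byte> candidate, Span<byte> target)
  {
      int n = candidate.Length, m = target.Length;
      if (n == 0 && m == 0) return 0.0;

      // Two-row dynamic programming over the edit-distance matrix.
      var prev = new int[m + 1];
      var curr = new int[m + 1];
      for (int j = 0; j <= m; j++) prev[j] = j;

      for (int i = 1; i <= n; i++)
      {
          curr[0] = i;
          for (int j = 1; j <= m; j++)
          {
              int cost = candidate[i - 1] == target[j - 1] ? 0 : 1;
              curr[j] = Math.Min(Math.Min(curr[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
          }
          (prev, curr) = (curr, prev);
      }

      // Normalize by the longer length so the score lands in [0, 1].
      return (double)prev[m] / Math.Max(n, m);
  }

Small enough to eyeball, and trivially unit-testable in isolation, which is the whole point of keeping generation at the method level.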

"Dumping into ChatGPT" is by far the worst way to work with LLMs, then it lacks the greater context of the project and will just give you the statistical average output.

Using an agentic system that can at least read the other bits of code is more efficient than copy-pasting snippets into a web page.


> then it lacks the greater context of the project

This is the point. I don't want it thinking about my entire project. I want it looking at a very specific problem each time.


But why?

Most code is about patterns, specific code styles, and reusing existing libraries. Without context, none of that can be applied to the solution.

If you put a programmer in a room and give them a piece of paper with a function and say OPTIMISE THAT! - is it going to be their best work?


I do this but with Copilot. Write a comment, then spam opt-tab; 50% of the time it ends up doing what I want, and I can read it line by line before tabbing the next one.

Genuine productivity boost, but I don't feel like it's AI slop. Sometimes it feels like it's actually reading my mind and just preventing me from having to type...


I've settled in on this as well for most of my day-to-day coding: a lot of extremely fancy tab completion, using the agent only for manipulation tasks I can carefully define. I'm currently in a "write lots of code" mode, which affects that, I think; in a maintenance mode I could see doing more agent prompting. Working this way gives me a chance to catch things early and then put in a correct pattern for it to continue forward with. And honestly, for a lot of tasks it's not particularly slower than the "ask it to do something, correct its five errors, tweak the prompt" workflow.

I've had net-time-savings with bigger agentic tasks, but I still have to check it line-by-line when it is done, because it takes lazy shortcuts and sometimes just outright gets things wrong.

Big productivity boost, it takes out the worst of my job, but I still can't trust it at much above the micro scale.

I wish I could give a system prompt for the tab complete; there's a couple of things it does over and over that I'm sure I could prompt away but there's no way to feed that in that I know of.

