Hacker News | tobz's comments

Hadn't heard of Fabworks until today. Testing out a quote on some parts I was looking at getting from OSH Cut, they're actually much better on pricing _and_ have more capabilities. (!)

Have you used them a lot? Curious what your experience with them is if you're willing to share.


I’ve not personally used them, but the company was founded and is run by the mentors of a high school robotics team. I volunteer with the program, so I’m familiar with the team.

I have made several quotes for different personal projects, but those projects have all been tabled for the moment.

Here’s the announcement post on the main social forum for the robotics program: https://www.chiefdelphi.com/t/introducing-fabworks-fast-affo...


If it's all sheet metal-esque pieces that only need laser-cut features and/or simple bends, then you should take a look at OSH Cut.

They have a fairly powerful platform -- instant quoting from uploaded design files, 2D/3D views, DFM feedback, good turnaround times, etc -- and the pricing isn't half bad for prototyping... but might not be as good for large-scale runs compared to bigger shops where you can nail down volume pricing discounts.


OSH Cut noted, thanks. Do you know what their pricing is like? For a couple of laser-cut steel pieces, about palm-sized each.


Regarding safety, do you have any links around the test results for the Kioma, or other car seats? You've mentioned a lot about the safety scores/test results in comparison to other car seats, but I couldn't seem to find a single mention of that stuff on the website? I also tried to see if something like Consumer Reports had a review of a Kioma car seat (either the current one or the carbon fiber one) but they had nothing.


Test Results > NHTSA used to publish their test results of all car seats, but no longer do so. FMVSS 213 (the US standard) tests for Head Injury Criterion (36-millisecond), Excursion, and Peak Acceleration in a frontal car crash. So keep in mind the utility of the results has limits, and doesn't test for a whole lot of things that are part of real-world usage in and out of a car. *Big grain of salt.*

I'll give you some real numbers and leave the comparison for you to do (lawyers get itchy if we do the comparing directly). Our carbon fiber seat's best result is HIC 197 in FMVSS 213 testing with a Crabi 12-mo old test dummy. Our friend Eli at Magic Beans reviewed it in a video at https://www.youtube.com/watch?v=fGaU9R6jHCQ. The current car seat for sale is of a similar class but doesn't have the $2500+ price tag of a carbon fiber seat.

If you're still curious, we can take this off HN: drop me a line at support@kioma.us and just mention HN and your HN profile name.


It feels more than a little bit coincidental to call it Noria when https://github.com/mit-pdos/noria exists (and has been posted about here on HN)... especially with the whole bit about incrementally computing changes.


At this point, it doesn't matter anymore. Any sane name is already used multiple times for any kind of project or product. Unless you have the exact same domain of functionality, go, use whatever name you want.


That's... an entirely different project. Not sure what bow you're drawing, but it's a long one.


You do realise that this project has had a broken build since 2021?


One crucial difference I was able to spot is that Demikernel seems to have an approach that can work for OSes other than Linux. To wit, examples of the performance of the Demikernel approach are shown on a Windows system running in Azure.

Of the three examples you listed, none of them seem to support their kernel bypass capabilities in anything other than Linux.


Fair enough. My guess is that the commercial solutions are simply going where the money is. If there was a commercially pressing need to port to other systems, I’m sure it can be done.

With that said, it's one thing to have a legitimate reason for preferring an alternative approach. But the paper doesn't even mention these solutions, which means that the authors are either ignorant of what's already been done (bad), or deliberately avoiding comparisons to alternative approaches (worse).


I would actually put the reviewers on the spot: even if the authors weren't aware of such solutions, the reviewers should have been.


This is the trouble with submitting to OS conferences. Their networking knowledge is typically limited. I’ve seen the same thing happen in the past.


Doesn't seem very productive to speak on behalf of two folks who you clearly don't have the authority to speak on behalf of. ¯\_(ツ)_/¯


I'm not speaking on their behalf; I saw this on Twitter back when it happened. E.g., check Aaron Turon's Twitter feed from that time.


Agreed.

Others have mentioned that the article calls out ignoring traditional in-memory cache daemons because of the additional network time, but with a targeted p50 response time (of their HTTP service fronting all of this), and caches like Redis and memcached being able to respond in hundreds of microseconds... it does feel like they didn't actually run the numbers.

The other natural alternative would simply be to run Redis/memcached and colocate this HTTP service on the same box. Now your "network latency" component is almost entirely negligible, and you've deferred the work of managing memory to applications _designed_ around doing so.
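To actually "run the numbers", here's a quick back-of-envelope sketch. The p50 budget and round-trip figures below are assumptions for illustration (typical same-datacenter and loopback latencies), not measurements from the article:

```python
# Back-of-envelope check (hypothetical figures): how much of a p50
# response-time budget does an in-memory cache round trip consume?
P50_BUDGET_US = 10_000   # assumed 10 ms p50 target for the HTTP service
REMOTE_CACHE_US = 200    # typical same-datacenter Redis/memcached GET
LOOPBACK_CACHE_US = 20   # typical loopback round trip when colocated

remote_fraction = REMOTE_CACHE_US / P50_BUDGET_US
loopback_fraction = LOOPBACK_CACHE_US / P50_BUDGET_US

print(f"remote cache:    {remote_fraction:.1%} of the p50 budget")
print(f"colocated cache: {loopback_fraction:.1%} of the p50 budget")
```

Even the remote round trip is a small slice of the budget under these assumptions; colocated, it's effectively noise.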


This.

We need more community/company interaction in Rust, especially in the networking space, instead of parallel implementations.


Disagree. This happened with tokio, the most used crate that provides an execution runtime for futures for async programming, and it's fucking awful to use. But the community rallied behind it and is now stuck with it for better or for worse. There's been barely any effort to provide any documentation; even recently the main developer pawned it off to be crowdsourced, and it really shows.

So yea, give me choices.


I used to think that as well, but sometimes different projects have different priorities or philosophies. In that case, you can either be the jerk who complains about free open source code you didn't pay for, try to bend the project to your will and probably fail, or roll your own.


Heh, this is funny... I worked on an emulator for Dark Age of Camelot _and_ an emulator for World of Warcraft (WCell) and attribute experience from both projects to getting my first software gig.

I also remember (fondly) going through RunUO with .NET Reflector to pick up tips and tricks. :)


I worked with runuo quite a lot and while I too have lots of fond memories, it's actually a huge mess of a codebase.

It's object-oriented except when it's not. It's modular except where it isn't. It "optimizes" for $number_of_cores_on_your_box but doesn't actually use them for anything but world saves. Almost none of it is commented or documented (doxygen doesn't count as documentation).

I think runuo excelled not because it was significantly better but because it was less shitty than POL and Sphere that came before it.


I generally agree (object serialization was rough, and stop-the-world saves have been a constant issue for the entire life of the project) but the things it did well were done outstandingly better than POL, Sphere, UOX, Wolfpack, etc.

I think the killer feature of RunUO was that it compiled the C# “script” files and linked them at runtime, eliminating the need to attach a scripting language that provides some limited API. It led to some extremely messy code, but it allowed you to achieve anything you wanted with minimum fuss, using the base .NET APIs that were well-documented. And the community was huge and active by UO emu standards.
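For anyone unfamiliar with the pattern, here's a rough Python analogue (RunUO itself did this with compiled C#, not Python): scripts are ordinary source files loaded as real code at runtime, with the full standard library available instead of a restricted scripting API. The file name and `on_use` function are made up for illustration:

```python
# Python analogue of "scripts are real compiled code": drop a source
# file in a directory, load it at runtime, and call into it directly.
import importlib.util
import pathlib
import tempfile

script_dir = pathlib.Path(tempfile.mkdtemp())
(script_dir / "healing_potion.py").write_text(
    "def on_use(hits, max_hits):\n"
    "    # full language available, no restricted scripting API\n"
    "    return min(max_hits, hits + 25)\n"
)

def load_script(path):
    """Load a 'script' file as a real module at runtime."""
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

potion = load_script(script_dir / "healing_potion.py")
print(potion.on_use(40, 60))  # healing is capped at max_hits -> 60
```

The trade-off is exactly what the comment describes: maximum power and zero API friction, at the cost of scripts being able to make an arbitrary mess.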

It doesn’t fall over with 5000+ clients connected to a single server, which is pretty astounding considering it’s TCP based and players tended to clump up pretty majorly during in-game events and such.


It actually falls over around 1000 clients if they're actually playing and encountering one another with any regularity. I have been contributing to a new RunUO-based shard (http://www.uooutlands.com) that has become extremely popular and hit 2k clients online. The lag was pretty extreme at first, but after a few weeks of studying CPU profiles I rewrote a lot of the map search algorithms, plus a few other things, and it's scaling easily now. I plan to push the rewrites upstream soon. The emulator still has legs!
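For context, this is a minimal sketch of the sector-bucket idea behind that kind of map search; the class, names, and 16-tile sector size are illustrative, not the actual RunUO code:

```python
# Sector-bucket spatial index sketch: the map is divided into fixed-size
# sectors, and a range query only scans the sectors overlapping the
# query box instead of every entity on the map.
from collections import defaultdict

SECTOR_SIZE = 16  # illustrative; RunUO historically used small fixed sectors

class SectorGrid:
    def __init__(self):
        self.sectors = defaultdict(list)  # (sx, sy) -> [(name, x, y), ...]

    def add(self, name, x, y):
        self.sectors[(x // SECTOR_SIZE, y // SECTOR_SIZE)].append((name, x, y))

    def in_range(self, x, y, radius):
        found = []
        for sx in range((x - radius) // SECTOR_SIZE, (x + radius) // SECTOR_SIZE + 1):
            for sy in range((y - radius) // SECTOR_SIZE, (y + radius) // SECTOR_SIZE + 1):
                for name, ex, ey in self.sectors[(sx, sy)]:
                    if abs(ex - x) <= radius and abs(ey - y) <= radius:
                        found.append(name)
        return found

grid = SectorGrid()
grid.add("orc", 100, 100)
grid.add("dragon", 500, 500)
print(grid.in_range(98, 98, 10))  # -> ['orc']
```

When players clump up, most of the cost lands in these queries, which is why sector size and allocation behavior inside the inner loop matter so much.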


My experience is from ~3000 online peak in the mid 2000s, with a more pure T2A ruleset.

Outlands has a breathtaking number of additional systems and enhancements that I’m sure are exposing weak points in the architecture (timers, pathfinding, and distance checking stuff especially).

Amazing shard btw, I logged some hours the week of launch before I got busy over the holidays.


Congratulations on the launch! I’ve heard great things about Outlands and have been meaning to log in and check it out.

It’s been a long time since I was involved in the RunUO community, but as I recall, one of the biggest limiting factors on scalability was activated NPC/AI. Range queries and movement were the two big pieces, so I’m sure your improvements would have a big impact there.

We ran load tests on Hybrid with over 10k clients (not just idle, mind you, but moving and talking and whatever else we were able to throw in to the load generators), and the server was able to keep up just fine. That was on mid-2000s era hardware too, but then again, RunUO wasn’t built to really take full advantage of multiple cores.

There are a lot of things I’d do differently if I had the opportunity to go back and redesign it from the ground up, but the simple single-threaded concurrency model is not something I’d want to change without great care. For all the scalability problems RunUO had, I think the concurrency model contributed significantly to its approachability, and I’d be very cautious of making any changes which would complect game logic with concurrency control.

I’ve heard quite a few stories (and now I’m hearing a few more) of folks whose path into software development started by tinkering around with RunUO. In fact, a few of my closest friends (and some now colleagues) took that same path. I am filled with a weird mix of pride and abject humility whenever I have the opportunity to see how the project has touched people all over the world, often in ways I could never have anticipated.

Please do share your changes back. I’d love to take a look at them, even after so many years.

-krrios


I'll get those patches out soon. Outlands generally is running much more complex AI with much faster response targets, so in conjunction with the far more detailed map it is stressing RunUO much harder than previous shards. But as noted the primary CPU consumer is definitely the map searching. My changes don't entirely change the algorithm (I've adjusted the sector size), but rather take advantage of more recent C# features that are much friendlier to the JIT and shift several allocations to the stack.

I'd also like to move away from timers for mobiles and simply call a function on a subset of them (sector by sector) each tick. This is advantageous because it groups all of the processing for a set of nearby mobiles in game space together in time, so it should greatly improve the CPU cache hit rate during the map searches. That would also require moving RunUO to a constant tick rate, which I also have patches for.
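Roughly, that tick-grouping idea looks like this in Python (the names and data shapes are hypothetical, not RunUO's actual structures):

```python
# Fixed-rate tick sketch: instead of one timer per mobile, each tick
# walks mobiles sector by sector, so all work for spatially adjacent
# mobiles happens together in time (friendlier to the CPU cache during
# the subsequent map searches).
from collections import defaultdict

SECTOR_SIZE = 16

def tick(mobiles):
    """One tick: bucket mobiles by sector, then process each sector's
    mobiles together rather than firing independent per-mobile timers."""
    by_sector = defaultdict(list)
    for mob in mobiles:
        by_sector[(mob["x"] // SECTOR_SIZE, mob["y"] // SECTOR_SIZE)].append(mob)
    order = []
    for sector in sorted(by_sector):               # deterministic sector sweep
        for mob in by_sector[sector]:
            mob["ticks"] = mob.get("ticks", 0) + 1  # stand-in for AI/think()
            order.append(mob["name"])
    return order

mobiles = [
    {"name": "orc A", "x": 3, "y": 3},
    {"name": "dragon", "x": 200, "y": 200},
    {"name": "orc B", "x": 5, "y": 4},   # same sector as orc A
]
print(tick(mobiles))  # -> ['orc A', 'orc B', 'dragon']
```

Note how the two orcs in the same sector get processed back to back even though they weren't adjacent in the input list; per-mobile timers would interleave them arbitrarily.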

If anything, my changes have made RunUO more single threaded (and eliminated some locks in doing so). This has proven to be faster than some of the previously highly parallel code because the contention was so bad. That's not to say that it couldn't be done in a way that did scale well, but I agree with you that it would put the code out of reach of hobbyists entirely. I think the code today strikes the right balance of approachability and performance. Thanks for all of your effort on this project!


